Sample records for complex source process

  1. Locating the source of diffusion in complex networks by time-reversal backward spreading.

    PubMed

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.
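
    The core idea, running the observed spreading process backward from a set of observer nodes and choosing the candidate source that makes the back-propagated start times most consistent, can be illustrated with a minimal sketch. The graph, the shortest-path delay model, and the variance-minimization criterion below are illustrative assumptions for demonstration, not the authors' exact algorithm or locatability condition.

```python
# Minimal sketch of backward-spreading source localization on a network.
# Assumption: propagation delay is proportional to shortest-path distance,
# and the true source minimizes the variance of back-propagated start times.
import networkx as nx
import numpy as np

def locate_source(G, arrival_times):
    """arrival_times: dict {observer_node: observed arrival time}."""
    observers = list(arrival_times)
    best_node, best_score = None, np.inf
    for candidate in G.nodes():
        # Reverse the spreading: subtract the travel time to each observer.
        lengths = nx.single_source_shortest_path_length(G, candidate)
        starts = [arrival_times[o] - lengths[o] for o in observers if o in lengths]
        if len(starts) < len(observers):
            continue  # candidate cannot reach every observer
        score = np.var(starts)  # a consistent candidate gives near-equal start times
        if score < best_score:
            best_node, best_score = candidate, score
    return best_node

# Toy usage: spread from node 0 on a small-world graph and try to recover it.
G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)
true_source = 0
obs = {n: nx.shortest_path_length(G, true_source, n) for n in [5, 40, 80, 120, 160]}
print(locate_source(G, obs))  # ideally prints 0
```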

  2. Locating the source of diffusion in complex networks by time-reversal backward spreading

    NASA Astrophysics Data System (ADS)

    Shen, Zhesi; Cao, Shinan; Wang, Wen-Xu; Di, Zengru; Stanley, H. Eugene

    2016-03-01

    Locating the source that triggers a dynamical process is a fundamental but challenging problem in complex networks, ranging from epidemic spreading in society and on the Internet to cancer metastasis in the human body. An accurate localization of the source is inherently limited by our ability to simultaneously access the information of all nodes in a large-scale complex network. This thus raises two critical questions: how do we locate the source from incomplete information and can we achieve full localization of sources at any possible location from a given set of observable nodes. Here we develop a time-reversal backward spreading algorithm to locate the source of a diffusion-like process efficiently and propose a general locatability condition. We test the algorithm by employing epidemic spreading and consensus dynamics as typical dynamical processes and apply it to the H1N1 pandemic in China. We find that the sources can be precisely located in arbitrary networks insofar as the locatability condition is assured. Our tools greatly improve our ability to locate the source of diffusion in complex networks based on limited accessibility of nodal information. Moreover, they have implications for controlling a variety of dynamical processes taking place on complex networks, such as inhibiting epidemics, slowing the spread of rumors, pollution control, and environmental protection.

  3. Extension of optical lithography by mask-litho integration with computational lithography

    NASA Astrophysics Data System (ADS)

    Takigawa, T.; Gronlund, K.; Wiley, J.

    2010-05-01

    Wafer lithography process windows can be enlarged by using source mask co-optimization (SMO). Recently, SMO including freeform wafer scanner illumination sources has been developed. Freeform sources are generated by a programmable illumination system using a micro-mirror array or by custom Diffractive Optical Elements (DOE). The combination of freeform sources and complex masks generated by SMO shows an increased wafer lithography process window and reduced MEEF. Full-chip mask optimization using a source optimized by SMO can generate complex masks with small, variable-feature-size sub-resolution assist features (SRAF). These complex masks create challenges for accurate mask pattern writing and low false-defect inspection. The accuracy of the small variable-sized mask SRAF patterns is degraded by short-range mask process proximity effects. To address the accuracy needed for these complex masks, we developed a highly accurate mask process correction (MPC) capability. It is also difficult to achieve low false-defect inspections of complex masks with conventional mask defect inspection systems. A printability check system, Mask Lithography Manufacturability Check (M-LMC), was developed and integrated with the 199-nm high-NA inspection system, NPI. M-LMC successfully identifies printable defects among the masses of raw defect images collected during the inspection of a complex mask. Long-range mask CD uniformity errors are compensated by scanner dose control. A mask CD uniformity error map obtained by a mask metrology system is used as input data to the scanner. Using this method, wafer CD uniformity is improved. As reviewed above, mask-litho integration technology with computational lithography is becoming increasingly important.

  4. Information entropy to measure the spatial and temporal complexity of solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Li, Weiyao; Huang, Guanhua; Xiong, Yunwu

    2016-04-01

    The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and physical and chemical interactions between groundwater and porous media make solute transport in the medium more complicated still. An appropriate method to describe this complexity is therefore essential when studying solute transport and transformation in porous media. Because information entropy can measure uncertainty and disorder, we used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between information entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated from transition probabilities. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point source, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increased, which led to increases in the second-order spatial moment and the two-dimensional entropy. The information entropy of the line source was higher than that of the point source, and the entropy obtained from continuous input was higher than that from instantaneous input. As the average length of the lithofacies increased, medium continuity increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, and the entropy reflected changes in the solute distribution. Information entropy thus appears to be a useful tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
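
    As a rough illustration of how information entropy can summarize the spatial complexity of a plume, the sketch below normalizes a two-dimensional concentration field into a probability distribution and evaluates the Shannon entropy H = -Σ p_i ln p_i. The Gaussian plumes and grid are invented stand-ins for model output, not the study's data.

```python
# Sketch: spatial information entropy of a solute concentration field.
# The Gaussian plumes below are stand-ins for model output, not the authors' data.
import numpy as np

def spatial_entropy(conc):
    p = conc / conc.sum()              # normalize concentrations to a distribution
    p = p[p > 0]                       # ignore empty cells (0 log 0 -> 0)
    return -np.sum(p * np.log(p))      # Shannon entropy in nats

x, y = np.meshgrid(np.linspace(0, 50, 100), np.linspace(0, 20, 40))
plume_early = np.exp(-((x - 5) ** 2 / 4 + (y - 10) ** 2 / 4))     # compact plume
plume_late = np.exp(-((x - 30) ** 2 / 60 + (y - 10) ** 2 / 20))   # spread-out plume

# Entropy grows as the plume disperses and its distribution becomes more uniform.
print(spatial_entropy(plume_early), spatial_entropy(plume_late))
```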

  5. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  6. Carbohydrates

    MedlinePlus

    ... include sugars added during food processing and refining. Complex carbohydrates include whole grain breads and cereals, starchy vegetables and legumes. Many of the complex carbohydrates are good sources of fiber. For a healthy ...

  7. Strategy for Texture Management in Metals Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy, Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  8. Strategy for Texture Management in Metals Additive Manufacturing

    DOE PAGES

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.; ...

    2017-01-31

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing for complicated designs and assemblies to be fabricated at lower costs, with shorter time to market, and improved function. Lagging behind the design complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy, Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  9. Mitochondrial respiratory chain complexes as sources and targets of thiol-based redox-regulation.

    PubMed

    Dröse, Stefan; Brandt, Ulrich; Wittig, Ilka

    2014-08-01

    The respiratory chain of the inner mitochondrial membrane is a unique assembly of protein complexes that transfers the electrons of reducing equivalents extracted from foodstuff to molecular oxygen to generate a proton-motive force as the primary energy source for cellular ATP-synthesis. Recent evidence indicates that redox reactions are also involved in regulating mitochondrial function via redox-modification of specific cysteine-thiol groups in subunits of respiratory chain complexes. Vice versa the generation of reactive oxygen species (ROS) by respiratory chain complexes may have an impact on the mitochondrial redox balance through reversible and irreversible thiol-modification of specific target proteins involved in redox signaling, but also pathophysiological processes. Recent evidence indicates that thiol-based redox regulation of the respiratory chain activity and especially S-nitrosylation of complex I could be a strategy to prevent elevated ROS production, oxidative damage and tissue necrosis during ischemia-reperfusion injury. This review focuses on the thiol-based redox processes involving the respiratory chain as a source as well as a target, including a general overview on mitochondria as highly compartmentalized redox organelles and on methods to investigate the redox state of mitochondrial proteins. This article is part of a Special Issue entitled: Thiol-Based Redox Processes. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. 40 CFR 432.21 - Special definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... extensive processing of the by-products of meat slaughtering. A complex slaughterhouse would usually include... STANDARDS MEAT AND POULTRY PRODUCTS POINT SOURCE CATEGORY Complex Slaughterhouses § 432.21 Special...

  11. Chicken feather hydrolysate as an inexpensive complex nitrogen source for PHA production by Cupriavidus necator on waste frying oils.

    PubMed

    Benesova, P; Kucera, D; Marova, I; Obruca, S

    2017-08-01

    Chicken feather hydrolysate (FH) has been tested as a potential complex nitrogen source for the production of polyhydroxyalkanoates by Cupriavidus necator H16 when waste frying oil was used as the carbon source. The addition of FH into mineral salt media with a decreased inorganic nitrogen source concentration improved the yields of biomass and polyhydroxyalkanoates. The highest yields were achieved when 10 vol.% of FH prepared by microwave-assisted alkaline hydrolysis of 60 g l⁻¹ feathers was added. In this case, the poly(3-hydroxybutyrate) (PHB) yields were improved by more than 50% as compared with the control cultivation. A positive impact of FH was also observed for accumulation of the copolymer poly(3-hydroxybutyrate-co-3-hydroxyvalerate) when sodium propionate was used as a precursor. The copolymer has superior processing and mechanical properties in comparison with the PHB homopolymer. The application of FH eliminated the inhibitory effect of propionate and resulted in an altered content of 3-hydroxyvalerate (3HV) in the copolymer. Therefore, hydrolysed feathers can serve as an excellent complex source of nitrogen for polyhydroxyalkanoate (PHA) production. Moreover, by combining two inexpensive types of waste, waste frying oil and feather hydrolysate, it is possible to produce PHA with substantially improved efficiency and sustainability. Millions of tons of feathers, an important waste product of the poultry-processing industry, are disposed of annually without any further benefit. Thus, there is an inevitable need for new technologies that enable ecologically and economically sensible processing of this waste. Herein, we report that alkali-hydrolysed feathers can be used as a complex nitrogen source that considerably improves polyhydroxyalkanoate production on waste frying oil by Cupriavidus necator. © 2017 The Society for Applied Microbiology.

  12. Data reduction complex analog-to-digital data processing requirements for onsite test facilities

    NASA Technical Reports Server (NTRS)

    Debbrecht, J. D.

    1976-01-01

    The analog to digital processing requirements of onsite test facilities are described. The source and medium of all input data to the Data Reduction Complex (DRC) and the destination and medium of all output products of the analog-to-digital processing are identified. Additionally, preliminary input and output data formats are presented along with the planned use of the output products.

  13. Effects of different nitrogen sources on the biogas production - a lab-scale investigation.

    PubMed

    Wagner, Andreas Otto; Hohlbrugger, Peter; Lins, Philipp; Illmer, Paul

    2012-12-20

    Nitrogen sources in anaerobic digestion processes are poorly investigated, although they are known both as possible process-limiting factors (in the hydrolysis phase) and as substrates for fermentations leading to subsequent methane production by methanogenic archaea. In the present study, different complex and defined nitrogen sources were investigated in a lab-scale experiment in order to assess their potential for methane production. The outcome of the study can be summarised as follows: among the complex nitrogen sources, yeast extract and casamino acids showed the highest methane production, with approximately 600 ml methane per mole of nitrogen, whereas no methane production could be observed with skim milk. Among the defined nitrogen sources, L-arginine showed the highest methane production, with almost 1400 ml methane per mole of nitrogen. Moreover, it could be demonstrated that the carbon content, and therefore the C/N ratio, had only a minor influence on methane production from the substrates used. Copyright © 2011 Elsevier GmbH. All rights reserved.

  14. Reliable Real-time Calculation of Heart-rate Complexity in Critically Ill Patients Using Multiple Noisy Waveform Sources

    DTIC Science & Technology

    2014-01-01

    Indexed terms: systems; machine learning; automatic data processing. From the report's introduction: "Heart-rate complexity (HRC) is a method of quantifying the amount of complex..." Cited reference: Batchinsky AI, Skinner J, Necsoiu C, et al. New measures of heart-rate complexity: effect of chest trauma and hemorrhage. J Trauma. 2010;68:1178–85.

  15. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or nearby sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate applicability of the proposed method to real problem of complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.
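
    A compact way to see the reordering idea is to run a standard ICA and then match the extracted components to reference signals measured on or near the sources by correlation magnitude. The sketch below uses scikit-learn's FastICA on synthetic two-channel mixtures and is only a simplified stand-in for the iterative algorithm proposed in the paper.

```python
# Sketch: ICA separation followed by correlation-based matching of components.
# Synthetic two-source mixture; the "reference" signals play the role of
# measurements taken on or near the actual vibration sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1, s2 = np.sin(2 * np.pi * 7 * t), np.sign(np.sin(2 * np.pi * 3 * t))
S = np.c_[s1, s2]
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]]).T + 0.05 * rng.standard_normal((2000, 2))

ica = FastICA(n_components=2, random_state=0)
est = ica.fit_transform(X)                        # separated signals, arbitrary order/sign

refs = S + 0.2 * rng.standard_normal(S.shape)     # noisy near-source reference measurements
corr = np.abs(np.corrcoef(est.T, refs.T)[:2, 2:]) # |correlation| between estimates and references
order = corr.argmax(axis=1)                       # assign each estimate to its best-matching source
print("estimated component -> source index:", order)
```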

  16. The Sun: Source of the Earth's Energy

    NASA Technical Reports Server (NTRS)

    Thompson, Barbara J.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    The Sun is the primary source of the Earth's energy. However, due to the complexity in the way the energy affects Earth, the various solar sources of the energy, and the variation exhibited by the Sun, it is difficult to understand and predict the Earth's response to solar drivers. In addition to visible light, the radiant energy of the Sun can exhibit variation in nearly all wavelengths, which can vary over nearly all timescales. Depending on the wavelength of the incident radiation, the light can deposit energy in a wide variety of locations and drive processes from below Earth's surface to interplanetary space. Other sources of energy impacting Earth include energetic particles, magnetic fields, and mass and flow variations in the solar wind. Many of these variable energetic processes cannot be coupled and recent results continue to demonstrate that the complex dynamics of the Sun can have a great range of measurable impacts on Earth.

  17. Variations in recollection: the effects of complexity on source recognition.

    PubMed

    Parks, Colleen M; Murray, Linda J; Elfman, Kane; Yonelinas, Andrew P

    2011-07-01

    Whether recollection is a threshold or signal detection process is highly controversial, and the controversy has centered in part on the shape of receiver operating characteristics (ROCs) and z-transformed ROCs (zROCs). U-shaped zROCs observed in tests thought to rely heavily on recollection, such as source memory tests, have provided evidence in favor of the threshold assumption, but zROCs are not always as U-shaped as threshold theory predicts. Source zROCs have been shown to become more linear when the contribution of familiarity to source discriminations is increased, and this may account for the existing results. However, another way in which source zROCs may become more linear is if the recollection threshold begins to break down and recollection becomes more graded and Gaussian. We tested the "graded recollection" account in the current study. We found that increasing stimulus complexity (i.e., changing from single words to sentences) or increasing source complexity (i.e., changing the sources from audio to videos of speakers) resulted in flatter source zROCs. In addition, conditions expected to reduce recollection (i.e., divided attention and amnesia) had comparable effects on source memory in simple and complex conditions, suggesting that differences between simple and complex conditions were due to differences in the nature of recollection, rather than differences in the utility of familiarity. The results suggest that under conditions of high complexity, recollection can appear more graded, and it can produce curved ROCs. The results have implications for measurement models and for current theories of recognition memory.

  18. Acoustic Source Localization via Time Difference of Arrival Estimation for Distributed Sensor Networks Using Tera-Scale Optical Core Devices

    DOE PAGES

    Imam, Neena; Barhen, Jacob

    2009-01-01

    For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. These sensors rely heavily on battery-operated system components to achieve highly functional automation in signal and information processing. In order to keep communication requirements minimal, it is desirable to perform as much processing on the receiver platforms as possible. However, the complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be readily met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building block algorithms used in underwater source localization on the optical-core digital processing platform recently introduced by Lenslet Inc. This demonstration of considerably faster signal processing capability should be of substantial significance to the design and innovation of future generations of distributed sensor networks.
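
    The time-difference-of-arrival (TDOA) estimate that underlies this kind of source localization is commonly taken from the lag that maximizes the cross-correlation between two sensor signals. The sketch below illustrates that generic building block with synthetic data; it is not the optical-core implementation described in the paper, and the sample rate and delay are invented.

```python
# Sketch: estimate the time difference of arrival (TDOA) between two sensors
# from the lag that maximizes their cross-correlation.
import numpy as np

fs = 10_000                                   # sample rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-((t - 0.02) / 0.002) ** 2)    # source waveform
true_delay_samples = 37
x1 = pulse + 0.05 * np.random.randn(t.size)                                # sensor 1
x2 = np.roll(pulse, true_delay_samples) + 0.05 * np.random.randn(t.size)   # sensor 2, delayed

xc = np.correlate(x2, x1, mode="full")        # cross-correlation over all lags
lag = xc.argmax() - (x1.size - 1)             # lag (in samples) at the correlation peak
print("estimated TDOA:", lag / fs, "s")       # ~ true_delay_samples / fs
```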

  19. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation.
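
    The "Minimum Difference" step can be pictured as comparing an event's measured inter-sensor arrival-time differences with the stored training-map differences at every grid point and selecting the grid point with the smallest total mismatch. The sketch below assumes the training maps are already available as arrays; the grid size, sensor pairs, and noise level are invented for illustration.

```python
# Sketch of the "Minimum Difference" localisation step in delta T mapping.
# delta_t_maps[pair][iy, ix] holds the training delta-T for one sensor pair at
# each grid node; measured[pair] is the delta-T observed for the event.
import numpy as np

def minimum_difference_location(delta_t_maps, measured):
    diff = np.zeros_like(next(iter(delta_t_maps.values())))
    for pair, dt_map in delta_t_maps.items():
        diff += np.abs(dt_map - measured[pair])   # accumulate mismatch over sensor pairs
    iy, ix = np.unravel_index(diff.argmin(), diff.shape)
    return iy, ix                                  # grid node with the smallest total mismatch

# Toy usage with invented 50x50 training maps for three sensor pairs.
rng = np.random.default_rng(2)
maps = {p: rng.normal(size=(50, 50)) for p in ["S1-S2", "S1-S3", "S2-S3"]}
true = (20, 35)
meas = {p: maps[p][true] + 0.01 * rng.normal() for p in maps}
print(minimum_difference_location(maps, meas))    # ~ (20, 35)
```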

  20. The Simple Concurrent Online Processing System (SCOPS) - An open-source interface for remotely sensed data processing

    NASA Astrophysics Data System (ADS)

    Warren, M. A.; Goult, S.; Clewley, D.

    2018-06-01

    Advances in technology allow remotely sensed data to be acquired with increasingly higher spatial and spectral resolutions. These data may then be used to influence government decision making and solve a number of research and application driven questions. However, such large volumes of data can be difficult to handle on a single personal computer or on older machines with slower components. Often the software required to process data is varied and can be highly technical and too advanced for the novice user to fully understand. This paper describes an open-source tool, the Simple Concurrent Online Processing System (SCOPS), which forms part of an airborne hyperspectral data processing chain that allows users accessing the tool over a web interface to submit jobs and process data remotely. It is demonstrated using Natural Environment Research Council Airborne Research Facility (NERC-ARF) instruments together with other free- and open-source tools to take radiometrically corrected data from sensor geometry into geocorrected form and to generate simple or complex band ratio products. The final processed data products are acquired via an HTTP download. SCOPS can cut data processing times and introduce complex processing software to novice users by distributing jobs across a network using a simple to use web interface.

  1. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.

  2. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
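
    The degenerate functional form quoted above, H(p) = -∑_i p_i log p_i, and the Pólya urn example can be made concrete with a short sketch: simulate a self-reinforcing urn, take the resulting colour distribution, and evaluate the Shannon-form entropy on it. The urn parameters and colour count below are arbitrary choices for illustration.

```python
# Sketch: Shannon-form entropy H(p) = -sum_i p_i log p_i, evaluated on the
# colour distribution produced by a simple self-reinforcing Polya urn.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def polya_urn(n_draws, n_colours=3, reinforcement=1, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.ones(n_colours)                  # one ball of each colour to start
    for _ in range(n_draws):
        colour = rng.choice(n_colours, p=counts / counts.sum())
        counts[colour] += reinforcement          # the drawn colour is reinforced (history dependence)
    return counts / counts.sum()

p = polya_urn(10_000)
print(p, shannon_entropy(p))                     # final distribution and its Shannon-form entropy
```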

  3. Multiple-predators-based capture process on complex networks

    NASA Astrophysics Data System (ADS)

    Ramiz Sharafat, Rajput; Pu, Cunlai; Li, Jie; Chen, Rongbin; Xu, Zhongqi

    2017-03-01

    The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, some lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled with a free parameter α. We derive the distribution of the lamb's lifetime and the expected lifetime ⟨T⟩. Through simulation, we find that the expected lifetime drops substantially with the increasing number of lions. We also study how the underlying topological structure affects the capture process, and obtain that locating on small-degree nodes is better than large-degree nodes to prolong the lifetime of the lamb. Moreover, dense or homogeneous network structures are against the survival of the lamb.
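
    The capture setup, several walkers performing degree-biased random walks until one of them lands on the target node, can be simulated directly. The network model, bias exponent, walker counts, and start nodes in the sketch below are illustrative choices, not the paper's exact experiments.

```python
# Sketch: multiple "lions" perform degree-biased random walks (bias exponent alpha)
# until one of them lands on the "lamb"; we estimate the mean capture time.
import networkx as nx
import numpy as np

def capture_time(G, lamb, lions, alpha, rng, max_steps=10_000):
    positions = list(lions)
    for step in range(1, max_steps + 1):
        for i, node in enumerate(positions):
            nbrs = list(G.neighbors(node))
            w = np.array([G.degree(n) ** alpha for n in nbrs], dtype=float)
            positions[i] = rng.choice(nbrs, p=w / w.sum())   # step biased by neighbour degree
            if positions[i] == lamb:
                return step
    return max_steps

rng = np.random.default_rng(3)
G = nx.barabasi_albert_graph(500, 3, seed=3)
times = [capture_time(G, lamb=17, lions=[100, 200, 300], alpha=0.0, rng=rng) for _ in range(50)]
print("estimated mean lifetime:", np.mean(times))
```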

  4. Free and Open Source GIS Tools: Role and Relevance in the Environmental Assessment Community

    EPA Science Inventory

    The presence of an explicit geographical context in most environmental decisions can complicate assessment and selection of management options. These decisions typically involve numerous data sources, complex environmental and ecological processes and their associated models, ris...

  5. Part 3 Specialized aspects of GIS and spatial analysis. Garage band science and dynamic spatial models

    NASA Astrophysics Data System (ADS)

    Box, Paul W.

    GIS and spatial analysis is suited mainly for static pictures of the landscape, but many of the processes that need exploring are dynamic in nature. Dynamic processes can be complex when put in a spatial context; our ability to study such processes will probably come with advances in understanding complex systems in general. Cellular automata and agent-based models are two prime candidates for exploring complex spatial systems, but are difficult to implement. Innovative tools that help build complex simulations will create larger user communities, who will probably find novel solutions for understanding complexity. A significant source for such innovations is likely to be from the collective efforts of hobbyists and part-time programmers, who have been dubbed "garage-band scientists" in the popular press.

  6. Health, safety, and environmental risk assessment of steel production complex in central Iran using TOPSIS.

    PubMed

    Jozi, S A; Majd, N Moradi

    2014-10-01

    This research was carried out with the aim of presenting an environmental management plan for a steel production complex (SPC) in central Iran. Following precise identification of the plant activities as well as the study area, possible sources of environmental pollution and adverse impacts on air quality, water, soil, the biological environment, the socioeconomic and cultural environment, and the health and safety of employees were determined, considering the work processes of the steel complex. Afterwards, noise, wastewater, and air pollution sources were measured. Subsequently, factors polluting the steel complex were identified by TOPSIS and then prioritized using Excel software. Based on the results, the operation of the furnaces in the hot rolling process (score 1), effluent derived from the hot rolling process (score 0.565), nonprincipal disposal and dumping of waste within the plant enclosure (score 0.335), and the walking beam process (score 1.483) received the highest priorities for air, water, soil and noise pollution, respectively. In terms of habitats, land cover and the socioeconomic and cultural environment, closeness to the forest area and the existence of four groups of wildlife (score 1.106) and the proximity of villages and residential areas to the plant (score 3.771) had the highest priorities, while impressibility and occupational accidents (score 2.725) and cutting and welding operations (score 2.134) had the highest priority among the health and safety criteria. Finally, strategies for the control of pollution sources were identified, and a training, monitoring and environmental management plan for the SPC was prepared.

  7. The MYStIX Infrared-Excess Source Catalog

    NASA Astrophysics Data System (ADS)

    Povich, Matthew S.; Kuhn, Michael A.; Getman, Konstantin V.; Busk, Heather A.; Feigelson, Eric D.; Broos, Patrick S.; Townsley, Leisa K.; King, Robert R.; Naylor, Tim

    2013-12-01

    The Massive Young Star-Forming Complex Study in Infrared and X-rays (MYStIX) project provides a comparative study of 20 Galactic massive star-forming complexes (d = 0.4-3.6 kpc). Probable stellar members in each target complex are identified using X-ray and/or infrared data via two pathways: (1) X-ray detections of young/massive stars with coronal activity/strong winds or (2) infrared excess (IRE) selection of young stellar objects (YSOs) with circumstellar disks and/or protostellar envelopes. We present the methodology for the second pathway using Spitzer/IRAC, 2MASS, and UKIRT imaging and photometry. Although IRE selection of YSOs is well-trodden territory, MYStIX presents unique challenges. The target complexes range from relatively nearby clouds in uncrowded fields located toward the outer Galaxy (e.g., NGC 2264, the Flame Nebula) to more distant, massive complexes situated along complicated, inner Galaxy sightlines (e.g., NGC 6357, M17). We combine IR spectral energy distribution (SED) fitting with IR color cuts and spatial clustering analysis to identify IRE sources and isolate probable YSO members in each MYStIX target field from the myriad types of contaminating sources that can resemble YSOs: extragalactic sources, evolved stars, nebular knots, and even unassociated foreground/background YSOs. Applying our methodology consistently across 18 of the target complexes, we produce the MYStIX IRE Source (MIRES) Catalog comprising 20,719 sources, including 8686 probable stellar members of the MYStIX target complexes. We also classify the SEDs of 9365 IR counterparts to MYStIX X-ray sources to assist the first pathway, the identification of X-ray-detected stellar members. The MIRES Catalog provides a foundation for follow-up studies of diverse phenomena related to massive star cluster formation, including protostellar outflows, circumstellar disks, and sequential star formation triggered by massive star feedback processes.

  8. Physical bases of the generation of short-term earthquake precursors: A complex model of ionization-induced geophysical processes in the lithosphere-atmosphere-ionosphere-magnetosphere system

    NASA Astrophysics Data System (ADS)

    Pulinets, S. A.; Ouzounov, D. P.; Karelin, A. V.; Davidenko, D. V.

    2015-07-01

    This paper describes the current understanding of the interaction between geospheres from a complex set of physical and chemical processes under the influence of ionization. The sources of ionization involve the Earth's natural radioactivity and its intensification before earthquakes in seismically active regions, anthropogenic radioactivity caused by nuclear weapon testing and accidents in nuclear power plants and radioactive waste storage, the impact of galactic and solar cosmic rays, and active geophysical experiments using artificial ionization equipment. This approach treats the environment as an open complex system with dissipation, where inherent processes can be considered in the framework of the synergistic approach. We demonstrate the synergy between the evolution of thermal and electromagnetic anomalies in the Earth's atmosphere, ionosphere, and magnetosphere. This makes it possible to determine the direction of the interaction process, which is especially important in applications related to short-term earthquake prediction. That is why the emphasis in this study is on the processes preceding the final stage of earthquake preparation; the effects of other ionization sources are used to demonstrate that the model is versatile and broadly applicable in geophysics.

  9. Decision Analysis for Environmental Problems

    EPA Science Inventory

    Environmental management problems are often complex and uncertain. A formal process with proper guidance is needed to understand the issues, identify sources of disagreement, and analyze the major uncertainties in environmental problems. This course will present a process that fo...

  10. Field-testing a new directional passive air sampler for fugitive dust in a complex industrial source environment.

    PubMed

    Ferranti, E J S; Fryer, M; Sweetman, A J; Garcia, M A Solera; Timmis, R J

    2014-01-01

    Quantifying the sources of fugitive dusts on complex industrial sites is essential for regulation and effective dust management. This study applied two recently-patented Directional Passive Air Samplers (DPAS) to measure the fugitive dust contribution from a Metal Recovery Plant (MRP) located on the periphery of a major steelworks site. The DPAS can collect separate samples for winds from different directions (12 × 30° sectors), and the collected dust may be quantified using several different measurement methods. The DPASs were located up and down-prevailing-wind of the MRP processing area to (i) identify and measure the contribution made by the MRP processing operation; (ii) monitor this contribution during the processing of a particularly dusty material; and (iii) detect any changes to this contribution following new dust-control measures. Sampling took place over a 12-month period and the amount of dust was quantified using photographic, magnetic and mass-loading measurement methods. The DPASs are able to effectively resolve the incoming dust signal from the wider steelworks complex, and also different sources of fugitive dust from the MRP processing area. There was no confirmable increase in the dust contribution from the MRP during the processing of a particularly dusty material, but dust levels significantly reduced following the introduction of new dust-control measures. This research was undertaken in a regulatory context, and the results provide a unique evidence-base for current and future operational or regulatory decisions.

  11. Risk Management in Complex Construction Projects that Apply Renewable Energy Sources: A Case Study of the Realization Phase of the Energis Educational and Research Intelligent Building

    NASA Astrophysics Data System (ADS)

    Krechowicz, Maria

    2017-10-01

    Nowadays, one of the characteristic features of the construction industry is the increased complexity of a growing number of projects. Almost every construction project is unique, with its project-specific purpose, its own structural complexity, owner's expectations, ground conditions unique to a certain location, and its own dynamics. Failure costs and costs resulting from unforeseen problems in complex construction projects are very high, and project complexity drivers pose many threats to the successful completion of a number of projects. This paper discusses the process of effective risk management in complex construction projects that apply renewable energy sources, using the example of the realization phase of the ENERGIS teaching-laboratory building, from the point of view of DORBUD S.A., its general contractor, and suggests a new approach to risk management for such projects. The risk management process was divided into six stages: gathering information; identification of the top critical project risks resulting from the project complexity; construction of a fault tree for each top critical risk; logical analysis of the fault tree; quantitative risk assessment applying fuzzy logic; and development of a risk response strategy. A new methodology for the qualitative and quantitative assessment of top critical risks in complex construction projects was developed, and risk assessment was carried out by applying Fuzzy Fault Tree analysis to one top critical risk as an example. Application of fuzzy set theory to the proposed model made it possible to decrease uncertainty and to eliminate the problems, common in expert risk assessment, of obtaining crisp values for the probabilities of basic events and exact risk scores for each unwanted event.

  12. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess reliability and uncertainty of obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain first-order rupture complexity of large earthquakes robustly. Additionally, relatively few parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of MHS method on real earthquakes show that our method can capture major features of large earthquake rupture process, and provide information for more detailed rupture history analysis.

  13. Distributed Coding of Compressively Sensed Sources

    NASA Astrophysics Data System (ADS)

    Goukhshtein, Maxim

    In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.
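
    A stripped-down view of the low-complexity encoder side, random compressive measurements followed by coarse quantization, is sketched below. The syndrome generation, side-information prediction, and two-stage decoding described in the abstract are omitted, and all dimensions and the quantization step are invented for illustration.

```python
# Sketch of the low-complexity encoder side: compressive measurement y = Phi @ x
# followed by uniform quantization. The syndrome coding / side-information
# decoding that the paper adds on top is not shown here.
import numpy as np

rng = np.random.default_rng(4)
n, m = 1024, 256                              # signal length, number of measurements
k = 20                                        # sparsity of the source
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse source

Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = Phi @ x                                   # compressive measurements (cheap at the encoder)

step = 0.05
q = np.round(y / step).astype(int)            # quantized measurements to be syndrome-coded
print(q[:10])
```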

  14. Brain electric correlates of strong belief in paranormal phenomena: intracerebral EEG source and regional Omega complexity analyses.

    PubMed

    Pizzagalli, D; Lehmann, D; Gianotti, L; Koenig, T; Tanaka, H; Wackermann, J; Brugger, P

    2000-12-22

    The neurocognitive processes underlying the formation and maintenance of paranormal beliefs are important for understanding schizotypal ideation. Behavioral studies indicated that both schizotypal and paranormal ideation are based on an overreliance on the right hemisphere, whose coarse rather than focussed semantic processing may favor the emergence of 'loose' and 'uncommon' associations. To elucidate the electrophysiological basis of these behavioral observations, 35-channel resting EEG was recorded in pre-screened female strong believers and disbelievers during resting baseline. EEG data were subjected to FFT-Dipole-Approximation analysis, a reference-free frequency-domain dipole source modeling, and Regional (hemispheric) Omega Complexity analysis, a linear approach estimating the complexity of the trajectories of momentary EEG map series in state space. Compared to disbelievers, believers showed: more right-located sources of the beta2 band (18.5-21 Hz, excitatory activity); reduced interhemispheric differences in Omega complexity values; higher scores on the Magical Ideation scale; more general negative affect; and more hypnagogic-like reveries after a 4-min eyes-closed resting period. Thus, subjects differing in their declared paranormal belief displayed different active, cerebral neural populations during resting, task-free conditions. As hypothesized, believers showed relatively higher right hemispheric activation and reduced hemispheric asymmetry of functional complexity. These markers may constitute the neurophysiological basis for paranormal and schizotypal ideation.
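
    Omega complexity is conventionally computed from the normalized eigenvalues of the channel covariance matrix as Ω = exp(-∑ λ'_i ln λ'_i), so that uncorrelated channels give a value near the channel count and a single shared generator gives a value near one. The sketch below applies this definition to synthetic multichannel data; it is a generic illustration, not the study's EEG processing pipeline.

```python
# Sketch: regional Omega complexity from the eigenvalue spectrum of the
# channel covariance matrix, Omega = exp(-sum(l_i * ln(l_i))) with normalized l_i.
import numpy as np

def omega_complexity(data):
    """data: array of shape (n_samples, n_channels), e.g. one hemisphere's EEG."""
    data = data - data.mean(axis=0)            # remove channel means
    cov = np.cov(data, rowvar=False)           # channel-by-channel covariance
    lam = np.linalg.eigvalsh(cov)
    lam = lam[lam > 1e-12]                     # drop numerically zero eigenvalues
    lam = lam / lam.sum()                      # normalize to a distribution
    return float(np.exp(-np.sum(lam * np.log(lam))))

rng = np.random.default_rng(5)
independent = rng.normal(size=(5000, 16))               # 16 uncorrelated channels
common = np.outer(rng.normal(size=5000), np.ones(16))   # one shared generator across channels
print(omega_complexity(independent))                    # close to 16 (high complexity)
print(omega_complexity(common + 0.01 * independent))    # close to 1 (low complexity)
```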

  15. Enhanced biological phosphorus removal with different carbon sources.

    PubMed

    Shen, Nan; Zhou, Yan

    2016-06-01

    Enhanced biological phosphorus removal (EBPR) process is one of the most economical and sustainable methods for phosphorus removal from wastewater. However, the performance of EBPR can be affected by available carbon sources types in the wastewater that may induce different functional microbial communities in the process. Glycogen accumulating organisms (GAOs) and polyphosphate accumulating organisms (PAOs) are commonly found by coexisting in the EBPR process. Predominance of GAO population may lead to EBPR failure due to the competition on carbon source with PAO without contributing phosphorus removal. Carbon sources indeed play an important role in alteration of PAOs and GAOs in EBPR processes. Various types of carbon sources have been investigated for EBPR performance. Certain carbon sources tend to enrich specific groups of GAOs and/or PAOs. This review summarizes the types of carbon sources applied in EBPR systems and highlights the roles of these carbon sources in PAO and GAO competition. Both single (e.g., acetate, propionate, glucose, ethanol, and amino acid) and complex carbon sources (e.g., yeast extract, peptone, and mixed carbon sources) are discussed in this review. Meanwhile, the environmental friendly and economical carbon sources that are derived from waste materials, such as crude glycerol and wasted sludge, are also discussed and compared.

  16. Infrasonic tremor wavefield of the Pu`u `Ō`ō crater complex and lava tube system, Hawaii, in April 2007

    NASA Astrophysics Data System (ADS)

    Matoza, Robin S.; Fee, David; Garcés, Milton A.

    2010-12-01

    Long-lived effusive volcanism at the Pu`u `Ō`ō crater complex, Kilauea Volcano, Hawaii produces persistent infrasonic tremor that has been recorded almost continuously for months to years. Previous studies showed that this infrasonic tremor wavefield can be recorded at a range of >10 km. However, the low signal power of this tremor relative to ambient noise levels results in significant propagation effects on signal detectability at this range. In April 2007, we supplemented a broadband infrasound array at ˜12.5 km from Pu`u `Ō`ō (MENE) with a similar array at ˜2.4 km from the source (KIPU). The additional closer-range data enable further evaluation of tropospheric propagation effects and provide higher signal-to-noise ratios for studying volcanic source processes. The infrasonic tremor source appears to consist of at least two separate physical processes. We suggest that bubble cloud oscillation in a roiling magma conduit beneath the crater complex may produce a broadband component of the tremor. Low-frequency sound sourced in a shallow magma conduit may radiate infrasound efficiently into the atmosphere due to the anomalous transparency of the magma-air interface. We further propose that more sharply peaked tones with complex temporal evolution may result from oscillatory interactions of a low-velocity gas jet with solid vent boundaries in a process analogous to the hole tone or whistler nozzle. The infrasonic tremor arrives with a median azimuth of ˜67° at KIPU. Additional infrasonic signals and audible sounds originating from the extended lava tube system to the south of the crater complex (median azimuth ˜77°) coincided with turbulent degassing activity at a new lava tube skylight. Our observations indicate that acoustic studies may aid in understanding persistent continuous degassing and unsteady flow dynamics at Kilauea Volcano.

  17. Experimental testing of the noise-canceling processor.

    PubMed

    Collins, Michael D; Baer, Ralph N; Simpson, Harry J

    2011-09-01

    Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
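
    For context, the conventional Bartlett processor used here as a baseline forms an ambiguity surface by correlating the measured data vector with modeled replica vectors, B = |w^H d|^2 / (|w|^2 |d|^2). The sketch below scans candidate bearings with plane-wave replicas purely for illustration; the tank experiment's waveguide replicas and the noise-canceling processor itself are not reproduced.

```python
# Sketch: conventional Bartlett processor scanning candidate bearings with
# plane-wave replica vectors (a stand-in for the waveguide replicas used in
# matched-field processing).
import numpy as np

c, f = 1500.0, 200.0                      # sound speed (m/s), frequency (Hz)
k = 2 * np.pi * f / c
positions = np.arange(8) * 3.0            # 8-element line array, 3 m spacing

def replica(theta_deg):
    return np.exp(-1j * k * positions * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(6)
true_theta = 25.0
d = replica(true_theta) + 0.3 * (rng.normal(size=8) + 1j * rng.normal(size=8))

thetas = np.linspace(-90, 90, 361)
# Bartlett ambiguity B(theta) = |w^H d|^2 / (|w|^2 |d|^2); |w|^2 = 8 for unit elements.
power = np.array([np.abs(np.vdot(replica(t), d)) ** 2 / (8 * np.vdot(d, d).real)
                  for t in thetas])
print("estimated bearing:", thetas[power.argmax()], "deg")
```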

  18. In-Source Fragmentation and the Sources of Partially Tryptic Peptides in Shotgun Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jong-Seo; Monroe, Matthew E.; Camp, David G.

    2013-02-01

    Partially tryptic peptides are often identified in shotgun proteomics using trypsin as the proteolytic enzyme; however, the sources of such partially tryptic peptides have been controversial. Herein we investigate the impact of in-source fragmentation on shotgun proteomics using three biological samples: a standard protein mixture, a mouse brain tissue homogenate, and a mouse plasma sample. Since the in-source fragments of a peptide retain the same elution time as their parent fully tryptic peptide, the partially tryptic peptides from in-source fragmentation can be distinguished from other partially tryptic peptides by plotting the observed retention times against the computationally predicted retention times. Most partially tryptic in-source fragmentation artifacts were misaligned from the linear distribution of fully tryptic peptides. The impact of in-source fragmentation on peptide identifications was clearly significant in a less complex sample such as a standard protein digest, where ~60% of unique peptides were observed as partially tryptic peptides from in-source fragmentation. In mouse brain or mouse plasma samples, in-source fragmentation contributed to 1-3% of all identified peptides. The other major source of partially tryptic peptides in complex biological samples is presumably proteolytic processing by endogenous proteases in the samples. By filtering out the in-source fragmentation artifacts from the identified partially tryptic or non-tryptic peptides, it is possible to directly survey in-vivo proteolytic processing in biological samples such as blood plasma.

  19. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    NASA Astrophysics Data System (ADS)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling process, (3) deriving and using informative priors in a sediment fingerprinting context and (4) transparency of the process and replication of model results by other users.
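
    The mixing model underlying these applications treats a mixture's tracer signature as a convex combination of source signatures. The sketch below solves the simplest deterministic version by constrained least squares; it is a conceptual stand-in for the hierarchical Bayesian machinery in MixSIAR, and the tracer values, source count, and mixture are invented.

```python
# Sketch: deterministic sediment unmixing - find source proportions p (p >= 0,
# sum(p) = 1) that best reproduce the mixture's tracer signature. This is the
# simplest frequentist analogue of the Bayesian model described above.
import numpy as np
from scipy.optimize import minimize

sources = np.array([[12.0, 3.1, 0.8],     # tracer signatures: rows = sources,
                    [25.0, 1.2, 2.4],     # columns = tracers (invented values)
                    [18.0, 4.5, 1.1]])
mixture = np.array([19.8, 2.6, 1.6])

def misfit(p):
    return np.sum((sources.T @ p - mixture) ** 2)

p0 = np.full(3, 1 / 3)
res = minimize(misfit, p0,
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
print("estimated source proportions:", res.x)
```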

  20. Applying to Higher Education: Information Sources and Choice Factors

    ERIC Educational Resources Information Center

    Simoes, Claudia; Soares, Ana Maria

    2010-01-01

    Higher education institutions are facing increasingly complex challenges, which demand a deeper understanding of the sources prospective students use when applying to a higher education institution. This research centres on students' decision-making process for higher education institutions, focusing on the pre-purchase period, and, in particular,…

  1. Adaptive evolution of complex innovations through stepwise metabolic niche expansion.

    PubMed

    Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A; Lercher, Martin J; Pál, Csaba; Papp, Balázs

    2016-05-20

    A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes.

  2. Adaptive evolution of complex innovations through stepwise metabolic niche expansion

    PubMed Central

    Szappanos, Balázs; Fritzemeier, Jonathan; Csörgő, Bálint; Lázár, Viktória; Lu, Xiaowen; Fekete, Gergely; Bálint, Balázs; Herczeg, Róbert; Nagy, István; Notebaart, Richard A.; Lercher, Martin J.; Pál, Csaba; Papp, Balázs

    2016-01-01

    A central challenge in evolutionary biology concerns the mechanisms by which complex metabolic innovations requiring multiple mutations arise. Here, we propose that metabolic innovations accessible through the addition of a single reaction serve as stepping stones towards the later establishment of complex metabolic features in another environment. We demonstrate the feasibility of this hypothesis through three complementary analyses. First, using genome-scale metabolic modelling, we show that complex metabolic innovations in Escherichia coli can arise via changing nutrient conditions. Second, using phylogenetic approaches, we demonstrate that the acquisition patterns of complex metabolic pathways during the evolutionary history of bacterial genomes support the hypothesis. Third, we show how adaptation of laboratory populations of E. coli to one carbon source facilitates the later adaptation to another carbon source. Our work demonstrates how complex innovations can evolve through series of adaptive steps without the need to invoke non-adaptive processes. PMID:27197754

  3. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    PubMed

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language, and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define in a GIS environment thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground level, or near ground level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its applications to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the output retrieval are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
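    For orientation, a minimal ground-level Gaussian plume calculation of the kind such models evaluate for each source-receptor pair; the linear dispersion coefficients are crude illustrative values, not CAREA's or AERMOD's parameterisation:

        import numpy as np

        def ground_level_concentration(q, u, x, y, h, a=0.08, b=0.06):
            """q: emission rate (g/s); u: wind speed (m/s); x, y: downwind and
            crosswind distances to the receptor (m); h: effective release height (m).
            Returns the ground-level concentration (g/m^3) with full ground reflection."""
            sigma_y = a * x          # crude linear dispersion coefficients
            sigma_z = b * x
            return (q / (np.pi * u * sigma_y * sigma_z)
                    * np.exp(-0.5 * (y / sigma_y) ** 2)
                    * np.exp(-0.5 * (h / sigma_z) ** 2))

        print(ground_level_concentration(q=1.0, u=3.0, x=500.0, y=20.0, h=10.0))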

  4. Rupture Dynamics and Ground Motion from Earthquakes in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Bydlon, S.; Dunham, E. M.; Kozdon, J. E.

    2012-12-01

    Heterogeneities in the material properties of Earth's crust scatter propagating seismic waves. The effects of scattered waves are reflected in the seismic coda and depend on the relative strength of the heterogeneities, spatial arrangement, and distance from source to receiver. In the vicinity of the fault, scattered waves influence the rupture process by introducing fluctuations in the stresses driving propagating ruptures. Further variability in the rupture process is introduced by naturally occurring geometric complexity of fault surfaces, and the stress changes that accompany slip on rough surfaces. We have begun a modeling effort to better understand the origin of complexity in the earthquake source process, and to quantify the relative importance of source complexity and scattering along the propagation path in causing incoherence of high frequency ground motion. To do this we extended our two-dimensional high order finite difference rupture dynamics code to accommodate material heterogeneities. We generate synthetic heterogeneous media using Von Karman correlation functions and their associated power spectral density functions. We then nucleate ruptures on either flat or rough faults, which obey strongly rate-weakening friction laws. Preliminary results for flat faults with uniform frictional properties and initial stresses indicate that off-fault material heterogeneity alone can lead to a complex rupture process. Our simulations reveal the excitation of high frequency bursts of waves, which radiate energy away from the propagating rupture. The average rupture velocity is thus reduced relative to its value in simulations employing homogeneous material properties. In the coming months, we aim to more fully explore parameter space by varying the correlation length, Hurst exponent, and amplitude of medium heterogeneities, as well as the statistical properties characterizing fault roughness.
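    A minimal sketch of how a heterogeneous medium with a Von Karman power spectrum can be generated by spectral filtering of white noise; the grid size, correlation length, Hurst exponent and perturbation amplitude are illustrative, not the values used in the study:

        import numpy as np

        def von_karman_field(n=256, dx=10.0, corr_len=500.0, hurst=0.5, std=0.05, seed=0):
            """Return an n-by-n fractional velocity perturbation whose power
            spectrum follows a 2-D Von Karman form."""
            rng = np.random.default_rng(seed)
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            # 2-D Von Karman power spectrum ~ (1 + k^2 a^2)^-(H + 1)
            psd = (1.0 + (kx**2 + ky**2) * corr_len**2) ** (-(hurst + 1.0))
            noise = np.fft.fft2(rng.standard_normal((n, n)))
            field = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
            return std * field / field.std()   # scale to the desired rms perturbation

        medium = von_karman_field()
        print(medium.shape, medium.std())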

  5. Demodulation processes in auditory perception

    NASA Astrophysics Data System (ADS)

    Feth, Lawrence L.

    1994-08-01

    The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task then is one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals and we have developed auditory signal processing models to help guide our experimental work.

  6. A new route for the synthesis of titanium silicalite-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasile, Aurelia, E-mail: aurelia_vasile@yahoo.com; Busuioc-Tomoiaga, Alina Maria; Catalysis Research Department, ChemPerformance SRL, Iasi 700337

    2012-01-15

    Graphical abstract: Well-prepared TS-1 was synthesized by an innovative procedure using inexpensive reagents such as fumed silica and TPABr as structure-directing agent. This is the first time that highly crystalline TS-1 has been obtained in basic medium, using sodium hydroxide as the source of HO⁻ ions required for the crystallization process. Hydrolysis of the titanium source was prevented by titanium complexation with acetylacetone before structuring the gel. Highlights: TS-1 was obtained using cheap reagents such as fumed silica and tetrapropylammonium bromide. For the first time, NaOH was used as the source of OH⁻ ions required for the crystallization process. The hydrolysis of Ti alkoxides was controlled by Ti complexation with 2,4-pentanedione. -- Abstract: A new and efficient route using inexpensive reagents such as fumed silica and tetrapropylammonium bromide is proposed for the synthesis of titanium silicalite-1. Highly crystalline titanium silicalite-1 was obtained in alkaline medium, using sodium hydroxide as the source of HO⁻ ions required for the crystallization process. Hydrolysis of the titanium source, with formation of insoluble oxide species, was prevented by titanium complexation with acetylacetone before structuring the gel. The final solids were fully characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared, ultraviolet-visible diffuse reflectance, Raman and atomic absorption spectroscopies, as well as nitrogen sorption analysis. It was found that a molar ratio Ti:Si of about 0.04 in the initial reaction mixture is the upper limit up to which well-formed titanium silicalite-1 with channels free of crystalline or amorphous material can be obtained. Above this value, solids with MFI-type structure containing both Ti isomorphously substituted in the network and extra-lattice anatase nanoparticles inside the channels are formed.

  7. Methodology of decreasing software complexity using ontology

    NASA Astrophysics Data System (ADS)

    Dąbrowska-Kubik, Katarzyna

    2015-09-01

    In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of that source code using many different maintenance techniques, such as creation of documentation and elimination of dead code, cloned code or previously known bugs [1][2]. Due to this approach, savings on the software maintenance costs of web applications will be possible.

  8. The Feed Materials Program of the Manhattan Project: A Foundational Component of the Nuclear Weapons Complex

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2014-12-01

    The feed materials program of the Manhattan Project was responsible for procuring uranium-bearing ores and materials and processing them into forms suitable for use as source materials for the Project's uranium-enrichment factories and plutonium-producing reactors. This aspect of the Manhattan Project has tended to be overlooked in comparison with the Project's more dramatic accomplishments, but was absolutely vital to the success of those endeavors: without appropriate raw materials and the means to process them, nuclear weapons and much of the subsequent cold war would never have come to pass. Drawing from information available in Manhattan Engineer District Documents, this paper examines the sources and processing of uranium-bearing materials used in making the first nuclear weapons and how the feed materials program became a central foundational component of the postwar nuclear weapons complex.

  9. Complexity of Geometric Inductive Reasoning Tasks: Contribution to the Understanding of Fluid Intelligence.

    ERIC Educational Resources Information Center

    Primi, Ricardo

    2002-01-01

    Created two geometric inductive reasoning matrix tests by manipulating four sources of complexity orthogonally. Results for 313 undergraduates show that fluid intelligence is most strongly associated with the part of the central executive component of working memory that is related to controlled attention processing and selective encoding. (SLD)

  10. Modular thought in the circuit analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-04-01

    Modular thinking, applied to problem solving, provides a method for simplifying a complex problem as a whole. The study of circuits poses a similar problem: the complex connections between components make the solution of a complete circuit appear more complex, yet the connections between components actually follow rules. This article mainly describes the application of modular thinking to circuit study. First, the paper introduces the definition of the two-terminal network and the concept of equivalent conversion of two-terminal networks; it then summarizes the modular approach to common source-resistance hybrid networks and the modular processing method for networks containing controlled sources, lists the common modules, and analyses typical examples.

  11. Using real options analysis to support strategic management decisions

    NASA Astrophysics Data System (ADS)

    Kabaivanov, Stanimir; Markovska, Veneta; Milev, Mariyan

    2013-12-01

    Decision making is a complex process that requires taking into consideration multiple heterogeneous sources of uncertainty. Standard valuation and financial analysis techniques often fail to properly account for all these sources of risk, as well as for all sources of additional flexibility. In this paper we explore applications of a modified binomial tree method for real options analysis (ROA) in an effort to improve the decision-making process. Typical use cases of real options are analyzed, with an elaborate study of the applications and advantages that company management can derive from them. Numerical results based on extending the simple binomial tree approach to multiple sources of uncertainty are provided to demonstrate the improvement effects on management decisions.
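    A minimal sketch of the plain binomial-tree valuation that the paper extends (a single source of uncertainty; all parameter values are illustrative):

        import math

        def real_option_value(v0, cost, sigma, r, years, steps):
            """Value of the flexibility to invest (pay `cost` to receive the project
            value) at any node of a recombining binomial tree, i.e. an
            American-style deferral option."""
            dt = years / steps
            u = math.exp(sigma * math.sqrt(dt))      # up factor
            d = 1.0 / u                              # down factor
            p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral probability
            disc = math.exp(-r * dt)

            # option payoffs at maturity
            values = [max(v0 * u**j * d**(steps - j) - cost, 0.0) for j in range(steps + 1)]
            # backward induction, allowing exercise at every node
            for i in range(steps - 1, -1, -1):
                values = [max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                              v0 * u**j * d**(i - j) - cost)
                          for j in range(i + 1)]
            return values[0]

        print(round(real_option_value(v0=100.0, cost=110.0, sigma=0.35, r=0.05, years=3, steps=200), 2))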

  12. Pollutant Source Tracking (PST) Technical Guidance

    DTIC Science & Technology

    2011-12-01

    in the context of heavy metals (lead, copper), is considered to be a minor process contribution to the source fingerprint. 3.7 RAPID SCREENING...limits (summarized in Table 2) support the use of ICP-AES (ICP-OES) for heavy metal determination in soils , sediments, wastewater and other matrices...are included here. Isotopic ratios of stable isotopes of the metal of interest can be used for source identification and apportionment in complex

  13. Quantitative estimation of source complexity in tsunami-source inversion

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.

    2016-04-01

    This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-D Bayesian tree structure. Wavelet coefficients are sampled by a reversible jump algorithm and additional coefficients are only included when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and locally adapts, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference process and makes it data driven. A trans-dimensional (trans-D) model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parametrizations. The trans-D process results in better uncertainty estimates since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward from the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.

  14. Biomedically relevant chemical and physical properties of coal combustion products.

    PubMed Central

    Fisher, G L

    1983-01-01

    The evaluation of the potential public and occupational health hazards of developing and existing combustion processes requires a detailed understanding of the physical and chemical properties of effluents available for human and environmental exposures. These processes produce complex mixtures of gases and aerosols which may interact synergistically or antagonistically with biological systems. Because of the physicochemical complexity of the effluents, the biomedically relevant properties of these materials must be carefully assessed. Subsequent to release from combustion sources, environmental interactions further complicate assessment of the toxicity of combustion products. This report provides an overview of the biomedically relevant physical and chemical properties of coal fly ash. Coal fly ash is presented as a model complex mixture for health and safety evaluation of combustion processes. PMID:6337824

  15. Implications of Biospheric Energization

    NASA Astrophysics Data System (ADS)

    Budding, Edd; Demircan, Osman; Gündüz, Güngör; Emin Özel, Mehmet

    2016-07-01

    Our physical model relating to the origin and development of lifelike processes from very simple beginnings is reviewed. This molecular ('ABC') process is compared with the chemoton model, noting the role of the autocatalytic tuning to the time-dependent source of energy. This substantiates a Darwinian character to evolution. The system evolves from very simple beginnings to a progressively more highly tuned, energized and complex responding biosphere, that grows exponentially; albeit with a very low net growth factor. Rates of growth and complexity in the evolution raise disturbing issues of inherent stability. Autocatalytic processes can include a fractal character to their development allowing recapitulative effects to be observed. This property, in allowing similarities of pattern to be recognized, can be useful in interpreting complex (lifelike) systems.

  16. Fast and accurate detection of spread source in large complex networks.

    PubMed

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding the first infected patient in an epidemic or the source of rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log(N)). Extensive numerical tests performed on synthetic networks and the real Gnutella network, with the limitation that the IDs of spreaders are unknown to observers, demonstrate that for scale-free networks with such a limitation GMLA yields higher-quality localization results than PTVA does.
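    A brute-force sketch of the observer-based likelihood idea (deliberately simpler than PTVA or GMLA, which exploit observer covariance and the likelihood gradient): every candidate node is scored by how well its shortest-path distances to the observers explain the reported arrival times:

        import networkx as nx
        import numpy as np

        def locate_source(graph, observer_times):
            """observer_times: {observer_node: reported arrival time}.
            Returns the node whose shortest-path distances to the observers
            best explain the arrival times in a least-squares sense."""
            observers = list(observer_times)
            times = np.array([observer_times[o] for o in observers], float)
            best_node, best_rss = None, np.inf
            for candidate in graph.nodes:
                lengths = nx.single_source_shortest_path_length(graph, candidate)
                if any(o not in lengths for o in observers):
                    continue                                    # cannot reach every observer
                d = np.array([lengths[o] for o in observers], float)
                A = np.column_stack([np.ones_like(d), d])       # times ~ t0 + mu * distance
                coeffs, res, *_ = np.linalg.lstsq(A, times, rcond=None)
                rss = res[0] if res.size else np.sum((A @ coeffs - times) ** 2)
                if rss < best_rss:
                    best_node, best_rss = candidate, rss
            return best_node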

  17. Numerical Simulation of Pollutants' Transport and Fate in AN Unsteady Flow in Lower Bear River, Box Elder County, Utah

    NASA Astrophysics Data System (ADS)

    Salha, A. A.; Stevens, D. K.

    2013-12-01

    This study presents numerical application and statistical development of Stream Water Quality Modeling (SWQM) as a tool to investigate, manage, and research the transport and fate of water pollutants in the Lower Bear River, Box Elder County, Utah. The segment under study is the Bear River starting from Cutler Dam to its confluence with the Malad River (Subbasin HUC 16010204). Water quality problems arise primarily from high phosphorus and total suspended sediment concentrations caused by five permitted point source discharges and a complex network of canals and ducts of varying sizes and carrying capacities that transport water (for farming and agriculture uses) from the Bear River and then back to it. The Utah Department of Environmental Quality (DEQ) has designated the entire reach of the Bear River between Cutler Reservoir and the Great Salt Lake as impaired. Stream water quality modeling (SWQM) requires specification of an appropriate model structure and process formulation according to the nature of the study area and the purpose of investigation. The current model is (i) one-dimensional (1D), (ii) numerical, (iii) unsteady, (iv) mechanistic, (v) dynamic, and (vi) spatial (distributed). The basic principle during the study is using mass balance equations and numerical methods (Fickian advection-dispersion approach) for solving the related partial differential equations. Model error decreases and sensitivity increases as a model becomes more complex; as such, (i) uncertainty (in parameters, data input and model structure) and (ii) model complexity will be under investigation. Watershed data (water quality parameters together with stream flow, seasonal variations, surrounding landscape, stream temperature, and point/nonpoint sources) were obtained primarily using HydroDesktop, a free and open-source GIS-enabled desktop application to find, download, visualize, and analyze time series of water and climate data registered with the CUAHSI Hydrologic Information System. Processing, assessment of validity, and distribution of time-series data were explored using the GNU R language (statistical computing and graphics environment). Equations for physical, chemical, and biological processes were written in FORTRAN code (High Performance Fortran) in order to compute and solve their hyperbolic and parabolic complexities. Post-analysis of results was conducted using the GNU R language. High performance computing (HPC) will be introduced to expedite solving complex computational processes using parallel programming. It is expected that the model will assess nonpoint sources and specific point source data to understand pollutants' causes, transfer, dispersion, and concentration in different locations of the Bear River. Investigation of the impact of reducing or removing non-point nutrient loading on Bear River water quality management could also be addressed. Keywords: computer modeling; numerical solutions; sensitivity analysis; uncertainty analysis; ecosystem processes; high performance computing; water quality.
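    A minimal sketch of the Fickian advection-dispersion transport core described above, using an explicit upwind/central finite-difference scheme with illustrative parameter values (not the study's FORTRAN implementation):

        import numpy as np

        def advect_disperse(c0, u=0.5, D=2.0, dx=10.0, dt=5.0, steps=500):
            """Explicit solution of dC/dt = -u dC/dx + D d2C/dx2 for a 1-D
            concentration profile c0 (upwind advection, central dispersion)."""
            # stability of the explicit scheme: u*dt/dx <= 1 and D*dt/dx^2 <= 0.5
            assert u * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5
            c = np.asarray(c0, float).copy()
            for _ in range(steps):
                adv = -u * (c[1:-1] - c[:-2]) / dx                  # upwind advection
                disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # Fickian dispersion
                c[1:-1] += dt * (adv + disp)
                c[0], c[-1] = c[1], c[-2]                           # simple open boundaries
            return c

        profile = np.zeros(200)
        profile[10:20] = 100.0          # pollutant pulse near the upstream boundary
        print(advect_disperse(profile).max())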

  18. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.

  19. Method for shallow junction formation

    DOEpatents

    Weiner, K.H.

    1996-10-29

    A doping sequence is disclosed that reduces the cost and complexity of forming source/drain regions in complementary metal oxide silicon (CMOS) integrated circuit technologies. The process combines the use of patterned excimer laser annealing, dopant-saturated spin-on glass, silicide contact structures and interference effects created by thin dielectric layers to produce source and drain junctions that are ultrashallow in depth but exhibit low sheet and contact resistance. The process utilizes no photolithography and can be achieved without the use of expensive vacuum equipment. The process margins are wide, and yield loss due to contact of the ultrashallow dopants is eliminated. 8 figs.

  20. Method for shallow junction formation

    DOEpatents

    Weiner, Kurt H.

    1996-01-01

    A doping sequence that reduces the cost and complexity of forming source/drain regions in complementary metal oxide silicon (CMOS) integrated circuit technologies. The process combines the use of patterned excimer laser annealing, dopant-saturated spin-on glass, silicide contact structures and interference effects created by thin dielectric layers to produce source and drain junctions that are ultrashallow in depth but exhibit low sheet and contact resistance. The process utilizes no photolithography and can be achieved without the use of expensive vacuum equipment. The process margins are wide, and yield loss due to contact of the ultrashallow dopants is eliminated.

  1. Variations in Recollection: The Effects of Complexity on Source Recognition

    ERIC Educational Resources Information Center

    Parks, Colleen M.; Murray, Linda J.; Elfman, Kane; Yonelinas, Andrew P.

    2011-01-01

    Whether recollection is a threshold or signal detection process is highly controversial, and the controversy has centered in part on the shape of receiver operating characteristics (ROCs) and z-transformed ROCs (zROCs). U-shaped zROCs observed in tests thought to rely heavily on recollection, such as source memory tests, have provided evidence in…

  2. ESIS ions injection, holding and extraction control system

    NASA Astrophysics Data System (ADS)

    Donets, E. D.; Donets, E. E.; Donets, D. E.; Lyuosev, D. A.; Ponkin, D. O.; Ramsdorf, A. Yu.; Boytsov, A. Yu.; Salnikov, V. V.; Shirikov, I. V.

    2018-04-01

    The electron string ion source (ESIS) KRION-6T is one of the main parts of the NICA injection complex [1]. During work on the creation of a new ion source for the NICA/MPD project, a new ion motion control system was developed, produced and successfully put into operation. The module development process and operation results are described.

  3. Octopus Cells in the Posteroventral Cochlear Nucleus Provide the Main Excitatory Input to the Superior Paraolivary Nucleus

    PubMed Central

    Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.

    2017-01-01

    Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283

  4. Fetal programming and environmental exposures: Implications for prenatal care and preterm birth

    EPA Science Inventory

    Fetal programming is an enormously complex process that relies on numerous environmental inputs from uterine tissue, the placenta, the maternal blood supply, and other sources. Recent evidence has made clear that the process is not based entirely on genetics, but rather on a deli...

  5. A stable isotope approach for source apportionment of chlorinated ethene plumes at a complex multi-contamination events urban site

    NASA Astrophysics Data System (ADS)

    Nijenhuis, Ivonne; Schmidt, Marie; Pellegatti, Eleonora; Paramatti, Enrico; Richnow, Hans Hermann; Gargini, Alessandro

    2013-10-01

    The stable carbon isotope composition of chlorinated aliphatic compounds such as chlorinated methanes, ethanes and ethenes was examined as an intrinsic fingerprint for apportionment of sources. A complex field site located in Ferrara (Italy), with more than 50 years history of use of chlorinated aliphatic compounds, was investigated in order to assess contamination sources. Several contamination plumes were found in a complex alluvial sandy multi-aquifer system close to the river Po; sources are represented by uncontained former industrial and municipal dump sites as well as by spills at industrial areas. The carbon stable isotope signature allowed distinguishing 2 major sources of contaminants. One source of chlorinated aliphatic contaminants was strongly depleted in 13C (<-60‰) suggesting production lines which have used depleted methane for synthesis. The other source had typical carbon isotope compositions of >-40‰ which is commonly observed in recent production of chlorinated solvents. The degradation processes in the plumes could be traced interpreting the isotope enrichment and depletion of parent and daughter compounds, respectively. We demonstrate that, under specific production conditions, namely when highly chlorinated ethenes are produced as by-product during chloromethanes production, 13C depleted fingerprinting of contaminants can be obtained and this can be used to track sources and address the responsible party of the pollution in urban areas.
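    For illustration, the simplest two-end-member apportionment consistent with the approach described above; the δ13C values are placeholders rather than the measured signatures from the Ferrara site:

        def fraction_from_source_a(delta_sample, delta_a, delta_b):
            """Fraction of the mixture attributable to source A, assuming
            conservative linear mixing of the carbon isotope signature."""
            return (delta_sample - delta_b) / (delta_a - delta_b)

        # e.g. a strongly 13C-depleted production line (A) vs. a recent solvent source (B)
        print(round(fraction_from_source_a(delta_sample=-52.0, delta_a=-65.0, delta_b=-35.0), 2))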

  6. A stable isotope approach for source apportionment of chlorinated ethene plumes at a complex multi-contamination events urban site.

    PubMed

    Nijenhuis, Ivonne; Schmidt, Marie; Pellegatti, Eleonora; Paramatti, Enrico; Richnow, Hans Hermann; Gargini, Alessandro

    2013-10-01

    The stable carbon isotope composition of chlorinated aliphatic compounds such as chlorinated methanes, ethanes and ethenes was examined as an intrinsic fingerprint for apportionment of sources. A complex field site located in Ferrara (Italy), with more than 50 years history of use of chlorinated aliphatic compounds, was investigated in order to assess contamination sources. Several contamination plumes were found in a complex alluvial sandy multi-aquifer system close to the river Po; sources are represented by uncontained former industrial and municipal dump sites as well as by spills at industrial areas. The carbon stable isotope signature allowed distinguishing 2 major sources of contaminants. One source of chlorinated aliphatic contaminants was strongly depleted in ¹³C (<-60‰) suggesting production lines which have used depleted methane for synthesis. The other source had typical carbon isotope compositions of >-40‰ which is commonly observed in recent production of chlorinated solvents. The degradation processes in the plumes could be traced interpreting the isotope enrichment and depletion of parent and daughter compounds, respectively. We demonstrate that, under specific production conditions, namely when highly chlorinated ethenes are produced as by-product during chloromethanes production, ¹³C-depleted fingerprinting of contaminants can be obtained and this can be used to track sources and address the responsible party of the pollution in urban areas. © 2013 Elsevier B.V. All rights reserved.

  7. Self-Referential Information Alleviates Retrieval Inhibition of Directed Forgetting Effects-An ERP Evidence of Source Memory.

    PubMed

    Mao, Xinrui; Wang, Yujuan; Wu, Yanhong; Guo, Chunyan

    2017-01-01

    Directed forgetting (DF) assists in preventing outdated information from interfering with cognitive processing. Previous studies indicated that self-referential items alleviated DF effects due to the elaboration of encoding processes. However, the retrieval mechanism of this phenomenon remains unknown. Based on the dual-process framework of recognition, the retrieval of self-referential information involves both familiarity and recollection. Using source memory tasks combined with event-related potential (ERP) recording, our research investigated the retrieval processes of the alleviated DF effects elicited by self-referential information. The FN400 (frontal negativity at 400 ms) is a frontal potential at 300-500 ms related to familiarity, and the late positive complex (LPC) is a later parietal potential at 500-800 ms related to recollection. The FN400 effects of source memory suggested that familiarity processes were promoted by self-referential effects without the modulation of the to-be-forgotten (TBF) instruction. The ERP results of DF effects involved the LPCs of source memory, which indexed retrieval processing of recollection. The other-referential source memory under TBF instruction caused the absence of LPC effects, while the self-referential source memory under TBF instruction still elicited significant LPC effects. Therefore, our neural findings suggested that self-referential processing improved both familiarity and recollection. Furthermore, the self-referential processing advantage, which was caused by autobiographical retrieval, alleviated the retrieval inhibition of DF, supporting the conclusion that self-referential source memory alleviates DF effects.

  8. Self-Referential Information Alleviates Retrieval Inhibition of Directed Forgetting Effects—An ERP Evidence of Source Memory

    PubMed Central

    Mao, Xinrui; Wang, Yujuan; Wu, Yanhong; Guo, Chunyan

    2017-01-01

    Directed forgetting (DF) assists in preventing outdated information from interfering with cognitive processing. Previous studies indicated that self-referential items alleviated DF effects due to the elaboration of encoding processes. However, the retrieval mechanism of this phenomenon remains unknown. Based on the dual-process framework of recognition, the retrieval of self-referential information involves both familiarity and recollection. Using source memory tasks combined with event-related potential (ERP) recording, our research investigated the retrieval processes of the alleviated DF effects elicited by self-referential information. The FN400 (frontal negativity at 400 ms) is a frontal potential at 300–500 ms related to familiarity, and the late positive complex (LPC) is a later parietal potential at 500–800 ms related to recollection. The FN400 effects of source memory suggested that familiarity processes were promoted by self-referential effects without the modulation of the to-be-forgotten (TBF) instruction. The ERP results of DF effects involved the LPCs of source memory, which indexed retrieval processing of recollection. The other-referential source memory under TBF instruction caused the absence of LPC effects, while the self-referential source memory under TBF instruction still elicited significant LPC effects. Therefore, our neural findings suggested that self-referential processing improved both familiarity and recollection. Furthermore, the self-referential processing advantage, which was caused by autobiographical retrieval, alleviated the retrieval inhibition of DF, supporting the conclusion that self-referential source memory alleviates DF effects. PMID:29066962

  9. Serial femtosecond crystallography datasets from G protein-coupled receptors

    PubMed Central

    White, Thomas A.; Barty, Anton; Liu, Wei; Ishchenko, Andrii; Zhang, Haitao; Gati, Cornelius; Zatsepin, Nadia A.; Basu, Shibom; Oberthür, Dominik; Metz, Markus; Beyerlein, Kenneth R.; Yoon, Chun Hong; Yefanov, Oleksandr M.; James, Daniel; Wang, Dingjie; Messerschmidt, Marc; Koglin, Jason E.; Boutet, Sébastien; Weierstall, Uwe; Cherezov, Vadim

    2016-01-01

    We describe the deposition of four datasets consisting of X-ray diffraction images acquired using serial femtosecond crystallography experiments on microcrystals of human G protein-coupled receptors, grown and delivered in lipidic cubic phase, at the Linac Coherent Light Source. The receptors are: the human serotonin receptor 2B in complex with an agonist ergotamine, the human δ-opioid receptor in complex with a bi-functional peptide ligand DIPP-NH2, the human smoothened receptor in complex with an antagonist cyclopamine, and finally the human angiotensin II type 1 receptor in complex with the selective antagonist ZD7155. All four datasets have been deposited, with minimal processing, in an HDF5-based file format, which can be used directly for crystallographic processing with CrystFEL or other software. We have provided processing scripts and supporting files for recent versions of CrystFEL, which can be used to validate the data. PMID:27479354

  10. Serial femtosecond crystallography datasets from G protein-coupled receptors.

    PubMed

    White, Thomas A; Barty, Anton; Liu, Wei; Ishchenko, Andrii; Zhang, Haitao; Gati, Cornelius; Zatsepin, Nadia A; Basu, Shibom; Oberthür, Dominik; Metz, Markus; Beyerlein, Kenneth R; Yoon, Chun Hong; Yefanov, Oleksandr M; James, Daniel; Wang, Dingjie; Messerschmidt, Marc; Koglin, Jason E; Boutet, Sébastien; Weierstall, Uwe; Cherezov, Vadim

    2016-08-01

    We describe the deposition of four datasets consisting of X-ray diffraction images acquired using serial femtosecond crystallography experiments on microcrystals of human G protein-coupled receptors, grown and delivered in lipidic cubic phase, at the Linac Coherent Light Source. The receptors are: the human serotonin receptor 2B in complex with an agonist ergotamine, the human δ-opioid receptor in complex with a bi-functional peptide ligand DIPP-NH2, the human smoothened receptor in complex with an antagonist cyclopamine, and finally the human angiotensin II type 1 receptor in complex with the selective antagonist ZD7155. All four datasets have been deposited, with minimal processing, in an HDF5-based file format, which can be used directly for crystallographic processing with CrystFEL or other software. We have provided processing scripts and supporting files for recent versions of CrystFEL, which can be used to validate the data.

  11. Modeling the influence of coupled mass transfer processes on mass flux downgradient of heterogeneous DNAPL source zones

    NASA Astrophysics Data System (ADS)

    Yang, Lurong; Wang, Xinyu; Mendoza-Sanchez, Itza; Abriola, Linda M.

    2018-04-01

    Sequestered mass in low permeability zones has been increasingly recognized as an important source of organic chemical contamination that acts to sustain downgradient plume concentrations above regulated levels. However, few modeling studies have investigated the influence of this sequestered mass and associated (coupled) mass transfer processes on plume persistence in complex dense nonaqueous phase liquid (DNAPL) source zones. This paper employs a multiphase flow and transport simulator (a modified version of the modular transport simulator MT3DMS) to explore the two- and three-dimensional evolution of source zone mass distribution and near-source plume persistence for two ensembles of highly heterogeneous DNAPL source zone realizations. Simulations reveal the strong influence of subsurface heterogeneity on the complexity of DNAPL and sequestered (immobile/sorbed) mass distribution. Small zones of entrapped DNAPL are shown to serve as a persistent source of low concentration plumes, difficult to distinguish from other (sorbed and immobile dissolved) sequestered mass sources. Results suggest that the presence of DNAPL tends to control plume longevity in the near-source area; for the examined scenarios, a substantial fraction (43.3-99.2%) of plume life was sustained by DNAPL dissolution processes. The presence of sorptive media and the extent of sorption non-ideality are shown to greatly affect predictions of near-source plume persistence following DNAPL depletion, with plume persistence varying one to two orders of magnitude with the selected sorption model. Results demonstrate the importance of sorption-controlled back diffusion from low permeability zones and reveal the importance of selecting the appropriate sorption model for accurate prediction of plume longevity. Large discrepancies for both DNAPL depletion time and plume longevity were observed between 2-D and 3-D model simulations. Differences between 2- and 3-D predictions increased in the presence of sorption, especially for the case of non-ideal sorption, demonstrating the limitations of employing 2-D predictions for field-scale modeling.

  12. Understanding Self-Assessment as an Informed Process: Residents' Use of External Information for Self-Assessment of Performance in Simulated Resuscitations

    ERIC Educational Resources Information Center

    Plant, Jennifer L.; Corden, Mark; Mourad, Michelle; O'Brien, Bridget C.; van Schaik, Sandrijn M.

    2013-01-01

    Self-directed learning requires self-assessment of learning needs and performance, a complex process that requires collecting and interpreting data from various sources. Learners' approaches to self-assessment likely vary depending on the learner and the context. The aim of this study was to gain insight into how learners process external…

  13. Biodegradation of CuTETA, an effluent by-product in mineral processing.

    PubMed

    Cushing, Alexander M L; Kelebek, Sadan; Yue, Siqing; Ramsay, Juliana A

    2018-04-13

    Polyamines such as triethylenetetramine (TETA) and other amine chelators are used in mineral processing applications. Formation of heavy metal complexes of these reagents as a by-product in effluent water is a recent environmental concern. In this study, Paecilomyces sp. was enriched from soil on TETA as the sole source of carbon and nitrogen and was found to degrade >96% and 90% of CuTETA complexes at initial concentrations of 0.32 and 0.79 mM, respectively, following 96-h incubation. After destabilization, most of the copper (>78%) was complexed extracellularly and the rest was associated with the cell. Mass spectrometry results provided confirmation that copper re-complexed with small, extracellular organic molecules. There are no reports in the literature that Paecilomyces or any other organism can grow on TETA or CuTETA. This study is the first to show that biological destabilization of CuTETA complexes in mineral processing effluents is feasible.

  14. Does Specific Instruction during Collecting from Several Sources Affect the Quality of the Written Text Product?

    ERIC Educational Resources Information Center

    Hilbig, Annemarie; Proske, Antje

    2014-01-01

    Although academic writing is a complex interplay of comprehending and producing text the aspect of collecting information from source texts is hardly addressed in writing research. This study examined the impact of instructions supporting the collection process on writing quality, as well as the role of prior motivation and computer experience.…

  15. Oxidative aging and secondary organic aerosol formation from simulated wildfire emissions

    Treesearch

    C. J. Hennigan; M. A. Miracolo; G. J. Engelhart; A. A. May; Cyle Wold; WeiMin Hao; T. Lee; A. P. Sullivan; J. B. Gilman; W. C. Kuster; J. A. de Gouw; J. L. Collett; S. M. Kreidenweis; A. L. Robinson

    2010-01-01

    Wildfires are a significant fraction of global biomass burning and a major source of trace gas and particle emissions in the atmosphere. Understanding the air quality and climate implications of wildfires is difficult since the emissions undergo complex transformations due to aging processes during transport away from the source. As part of the third Fire Lab at...

  16. Remember-Know and Source Memory Instructions Can Qualitatively Change Old-New Recognition Accuracy: The Modality-Match Effect in Recognition Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Besken, Miri; Peterson, Daniel

    2010-01-01

    Remember-Know (RK) and source memory tasks were designed to elucidate processes underlying memory retrieval. As part of more complex judgments, both tests produce a measure of old-new recognition, which is typically treated as equivalent to that derived from a standard recognition task. The present study demonstrates, however, that recognition…

  17. Amanzi: An Open-Source Multi-process Simulator for Environmental Applications

    NASA Astrophysics Data System (ADS)

    Moulton, J. D.; Molins, S.; Johnson, J. N.; Coon, E.; Lipnikov, K.; Day, M.; Barker, E.

    2014-12-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) program is developing an approach and open-source tool suite for standardized risk and performance assessments at legacy nuclear waste sites. These assessments begin with simplified models, and add geometric and geologic complexity as understanding is gained. The Platform toolset (Akuna) generates these conceptual models and Amanzi provides the computational engine to perform the simulations, returning the results for analysis and visualization. In this presentation we highlight key elements of the design, algorithms and implementations used in Amanzi. In particular, the hierarchical and modular design is aligned with the coupled processes being simulated, and naturally supports a wide range of model complexity. This design leverages a dynamic data manager and the synergy of two graphs (one from the high-level perspective of the models, the other from the dependencies of the variables in the model) to enable this flexible model configuration at run time. Moreover, to model sites with complex hydrostratigraphy, as well as engineered systems, we are developing a dual unstructured/structured capability. Recently, these capabilities have been collected in a framework named Arcos, and efforts have begun to improve interoperability between the unstructured and structured AMR approaches in Amanzi. To leverage a range of biogeochemistry capability from the community (e.g., CrunchFlow, PFLOTRAN, etc.), a biogeochemistry interface library, called Alquimia, was developed. To ensure that Amanzi is truly an open-source community code we require a completely open-source tool chain for our development. We will comment on elements of this tool chain, including the testing and documentation development tools such as docutils and Sphinx. Finally, we will show simulation results from our phased demonstrations, including the geochemically complex Savannah River F-Area seepage basins.

  18. Compton Reflection in AGN with Simbol-X

    NASA Astrophysics Data System (ADS)

    Beckmann, V.; Courvoisier, T. J.-L.; Gehrels, N.; Lubiński, P.; Malzac, J.; Petrucci, P. O.; Shrader, C. R.; Soldi, S.

    2009-05-01

    AGN exhibit complex hard X-ray spectra. Our current understanding is that the emission is dominated by inverse Compton processes which take place in the corona above the accretion disk, and that absorption and reflection in a distant absorber play a major role. These processes can be directly observed through the shape of the continuum, the Compton reflection hump around 30 keV, and the iron fluorescence line at 6.4 keV. We demonstrate the capabilities of Simbol-X to constrain complex models for cases like MCG-05-23-016, NGC 4151, NGC 2110, and NGC 4051 in short (10 ksec) observations. We compare the simulations with recent observations on these sources by INTEGRAL, Swift and Suzaku. Constraining reflection models for AGN with Simbol-X will help us to get a clear view of the processes and geometry near to the central engine in AGN, and will give insight to which sources are responsible for the Cosmic X-ray background at energies >20 keV.

  19. Sequencing the Cortical Processing of Pitch-Evoking Stimuli using EEG Analysis and Source Estimation

    PubMed Central

    Butler, Blake E.; Trainor, Laurel J.

    2012-01-01

    Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch. PMID:22740836

  20. Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.

    NASA Astrophysics Data System (ADS)

    Wang, Avery Li-Chun

    This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p(n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p(n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramér-Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic-Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters which require a small fraction of the computational power of conventional FIR implementations. This design strategy is based on truncated and stabilized IIR filters. These signal-processing methods have been applied to the problem of auditory source separation, resulting in voice separation from complex music that is significantly better than previous results at far lower computational cost.
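    A minimal illustration of the average-winding-rate estimate of instantaneous frequency, computed here from a Hilbert-transform analytic signal rather than the thesis's tracking loop; the test signal is arbitrary:

        import numpy as np
        from scipy.signal import hilbert

        fs = 8000.0
        t = np.arange(0, 0.5, 1 / fs)
        chirp = np.cos(2 * np.pi * (440 * t + 100 * t**2))   # frequency sweeps 440 -> 540 Hz

        analytic = hilbert(chirp)                            # complex-valued phasor
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)        # winding rate in Hz

        print(inst_freq[200], inst_freq[-200])               # ~440 Hz early, ~540 Hz late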

  1. Predictability decomposition detects the impairment of brain-heart dynamical networks during sleep disorders and their recovery with treatment

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Marinazzo, Daniele; Stramaglia, Sebastiano; Jurysta, Fabrice; Porta, Alberto; Nollo, Giandomenico

    2016-05-01

    This work introduces a framework to study the network formed by the autonomic component of heart rate variability (cardiac process η) and the amplitude of the different electroencephalographic waves (brain processes δ, θ, α, σ, β) during sleep. The framework exploits multivariate linear models to decompose the predictability of any given target process into measures of self-, causal and interaction predictability reflecting respectively the information retained in the process and related to its physiological complexity, the information transferred from the other source processes, and the information modified during the transfer according to redundant or synergistic interaction between the sources. The framework is here applied to the η, δ, θ, α, σ, β time series measured from the sleep recordings of eight severe sleep apnoea-hypopnoea syndrome (SAHS) patients studied before and after long-term treatment with continuous positive airway pressure (CPAP) therapy, and 14 healthy controls. Results show that the full and self-predictability of η, δ and θ decreased significantly in SAHS compared with controls, and were restored with CPAP for δ and θ but not for η. The causal predictability of η and δ occurred through significantly redundant source interaction during healthy sleep, which was lost in SAHS and recovered after CPAP. These results indicate that predictability analysis is a viable tool to assess the modifications of complexity and causality of the cerebral and cardiac processes induced by sleep disorders, and to monitor the restoration of the neuroautonomic control of these processes during long-term treatment.
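
    The exact measures are specific to the authors' framework, but the general recipe—compare residual variances of restricted and full linear regressions of the target on its own past and on the sources' pasts—can be sketched as follows. The simulated series, lag order, and the particular redundancy/synergy bookkeeping are illustrative assumptions, not the study's definitions.

```python
import numpy as np

def lagged(x, p):
    """Matrix of p past lags of the 1-D series x, row-aligned with x[p:]."""
    return np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])

def pred_gain(y, X):
    """Fraction of the variance of y explained by a linear regression on X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - (y - X1 @ beta).var() / y.var()

rng = np.random.default_rng(0)
p, n = 5, 2000                          # lag order and series length (assumptions)
delta = rng.standard_normal(n)          # stand-ins for EEG wave-amplitude sources
theta = rng.standard_normal(n)
eta = np.zeros(n)                       # toy "cardiac" target driven by both sources
for i in range(1, n):
    eta[i] = (0.8 * eta[i - 1] + 0.3 * delta[i - 1] + 0.2 * theta[i - 1]
              + 0.5 * rng.standard_normal())

y = eta[p:]
own = lagged(eta, p)
self_pred = pred_gain(y, own)                                        # own past only
full_pred = pred_gain(y, np.hstack([own, lagged(delta, p), lagged(theta, p)]))
causal_pred = full_pred - self_pred                                  # transferred from sources
# One simple redundancy/synergy check: joint transfer vs. sum of individual transfers.
t_delta = pred_gain(y, np.hstack([own, lagged(delta, p)])) - self_pred
t_theta = pred_gain(y, np.hstack([own, lagged(theta, p)])) - self_pred
interaction = (t_delta + t_theta) - causal_pred   # > 0 redundant, < 0 synergistic
print(self_pred, causal_pred, interaction)
```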

  2. Measuring orthographic transparency and morphological-syllabic complexity in alphabetic orthographies: a narrative review.

    PubMed

    Borleffs, Elisabeth; Maassen, Ben A M; Lyytinen, Heikki; Zwarts, Frans

    2017-01-01

    This narrative review discusses quantitative indices measuring differences between alphabetic languages that are related to the process of word recognition. The specific orthography that a child is acquiring has been identified as a central element influencing reading acquisition and dyslexia. However, the development of reliable metrics to measure differences between language scripts has not received much attention so far. This paper therefore reviews metrics proposed in the literature for quantifying orthographic transparency, syllabic complexity, and morphological complexity of alphabetic languages. The review included searches of Web of Science, PubMed, PsychInfo, Google Scholar, and various online sources. Search terms pertained to orthographic transparency, morphological complexity, and syllabic complexity in relation to reading acquisition and dyslexia. Although the predictive value of these metrics is promising, more research is needed to validate the metrics discussed and to understand the 'developmental footprint' of orthographic transparency, morphological complexity, and syllabic complexity in lexical organization and processing strategies.

  3. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    NASA Astrophysics Data System (ADS)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems, particularly at the laboratory scale.
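
    The actual model runs in PHAST, but the local sensitivity ranking step can be illustrated generically with a one-at-a-time perturbation around nominal parameter values; the toy Monod-style response and parameter names below are assumptions chosen only to show the bookkeeping.

```python
import numpy as np

def model(params):
    """Placeholder performance metric (e.g. fraction of TCE degraded) from a
    Monod-style expression; purely illustrative, not the PHAST model."""
    k_max, K_s, Y = params
    substrate = 10.0
    return k_max * substrate / (K_s + substrate) * Y

names = ["k_max", "K_s", "Y"]
nominal = np.array([0.5, 2.0, 0.1])          # nominal parameter values (assumptions)
base = model(nominal)

sens = {}
for i, name in enumerate(names):
    pert = nominal.copy()
    pert[i] *= 1.01                          # one-at-a-time +1% perturbation
    # Normalized (dimensionless) local sensitivity coefficient
    sens[name] = (model(pert) - base) / base / 0.01

for name, s in sorted(sens.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {s:+.2f}")
```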

  4. Rupture processes of the 2010 Canterbury earthquake and the 2011 Christchurch earthquake inferred from InSAR, strong motion and teleseismic datasets

    NASA Astrophysics Data System (ADS)

    Yun, S.; Koketsu, K.; Aoki, Y.

    2014-12-01

    The September 4, 2010, Canterbury earthquake, with a moment magnitude (Mw) of 7.1, was a crustal earthquake in the South Island, New Zealand. The February 22, 2011, Christchurch earthquake (Mw=6.3) was the largest aftershock of the 2010 Canterbury earthquake and was located about 50 km east of the mainshock. Both earthquakes occurred on previously unrecognized faults. Field observations indicate that the rupture of the 2010 Canterbury earthquake reached the surface; the surface rupture, about 30 km long, is located about 4 km south of the epicenter. Various data, including the aftershock distribution and strong motion seismograms, also suggest a very complex rupture process. It is therefore useful to investigate this complex rupture process using multiple datasets with different sensitivities to it. While previously published source models are based on one or two datasets, here we infer the rupture process from three datasets: InSAR, strong-motion, and teleseismic data. We first performed point source inversions to derive the focal mechanism of the 2010 Canterbury earthquake. Based on the focal mechanism, the aftershock distribution, the surface fault traces and the SAR interferograms, we assigned several source faults. We then performed a joint inversion to determine the rupture process of the 2010 Canterbury earthquake most suitable for reproducing all the datasets. The obtained slip distribution is in good agreement with the surface fault traces. We also performed similar inversions to reveal the rupture process of the 2011 Christchurch earthquake. Our result indicates a steep dip and large up-dip slip. This reveals that the large vertical ground motion observed around the source region is due to the rupture process rather than the local subsurface structure. To investigate the effects of the 3-D velocity structure on the characteristic strong motion seismograms of the two earthquakes, we plan to perform the inversion taking the 3-D velocity structure of this region into account.

  5. Optical radiation measurements: instrumentation and sources of error.

    PubMed

    Landry, R J; Andersen, F A

    1982-07-01

    Accurate measurement of optical radiation is required when sources of this radiation are used in biological research. The most difficult measurements of broadband noncoherent optical radiations usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presentation of the results of such measurement requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, physical quantities and units, measurement systems with instruments, and sources of error and uncertainties associated with optical radiation measurements are reviewed.

  6. The U.S. Army Functional Concept for Intelligence 2020-2040

    DTIC Science & Technology

    2017-02-01

    Soldiers to mitigate many complex problems of the future OE. Improved or new analytic processes will use very large data sets to address emerging...increasing. Army collection against publicly available data sources may offer insights into social interconnectedness, political dynamics and complex... data used to support situational understanding. (5) Uncertainty and rapid change elevate the analytic risk associated with decision making and

  7. Unique Tremor observed coincident with the major emplacement phase of the September 2005 dike in Afar, Ethiopia

    NASA Astrophysics Data System (ADS)

    Ayele, A.; Keir, D.; Wright, T. J.; Ebinger, C. J.; Stuart, G. W.; Neuberg, J.

    2009-12-01

    The advent of digital and broadband seismic stations has helped to capture the complex dynamics of earthquake and volcanic source processes, ranging from high frequency microfractures to ultra long period transient signals. The September 2005 dike in the Afar depression of Ethiopia proved to be one of the rare events of its kind to demonstrate the complex interaction of ambient tectonic stress, volcanic processes and dike intrusions. Unusually long period tremor in the 18-20 second range was observed by seismic stations located at ~350-700 km distance on 25 September 2005 at about 14:00:00 GMT, and was sustained for about 30 minutes at the FURI station. This time is coincident with the major emplacement phase of the dike beneath the Ado Ale Volcanic Complex (AVC), before the small felsic eruption at Da’Ure in the afternoon of September 26, 2005. The preliminary interpretation of this observation is a highly pressurized magma source/reservoir breaking into the channel and interacting with its deformable rock walls.

  8. Identification of major sources controlling groundwater chemistry from a hard rock terrain — A case study from Mettur taluk, Salem district, Tamil Nadu, India

    NASA Astrophysics Data System (ADS)

    Srinivasamoorthy, K.; Chidambaram, S.; Prasanna, M. V.; Vasanthavihar, M.; Peter, John; Anandhan, P.

    2008-02-01

    The study area, Mettur, is an important industrial town situated NW of Salem district. The geology of the area is mainly composed of Archean crystalline metamorphic complexes. To identify the major processes controlling the groundwater chemistry, a total of 46 groundwater samples were collected in two different seasons, viz., pre-monsoon and post-monsoon. The groundwater chemistry is dominated by silicate weathering, and (Na + Mg) and (Cl + SO4) account for about 90% of cations and anions. The contribution of (Ca + Mg) and (Na + K) to total cations and HCO3 indicates the dominance of silicate weathering as the major source of cations. The plot of Na against Cl indicates higher Cl in both seasons, derived from anthropogenic (human) sources such as fertilizer, road salt, human and animal waste, and industrial applications; the minor representation of Na also indicates a source from weathering of silicate-bearing minerals. The plot of Na/Cl against EC indicates Na released from the silicate weathering process, which is also supported by higher HCO3 values in both seasons. Ion exchange is also active in the study area, as indicated by a shift to the right in the plot of Ca + Mg against SO4 + HCO3. The plot of Na-Cl against Ca + Mg-HCO3-SO4 confirms that Ca, Mg and Na concentrations in groundwater are derived from aquifer materials. A thermodynamic plot indicates that groundwater is in equilibrium with kaolinite, muscovite and chlorite minerals. Saturation indices of silicate and carbonate minerals indicate oversaturation during the pre-monsoon and undersaturation during the post-monsoon, confirming dissolution and dilution processes. In general, the water chemistry is governed by complex weathering processes and ion exchange, along with the influence of Cl ions from anthropogenic impact.

  9. Process Improvement and CMMI (registered trademark) -Developing Complex Systems- Using CMMI (registered trademark) to Achieve Effective Systems and Software Engineering Integration

    DTIC Science & Technology

    2008-11-01

    [Figure residue from briefing slides: a technology-adoption chart of percentage of ownership versus number of years since invention (source: Rich Kaplan, Microsoft), listing Automobile = 56 years, Telephone = 36 years, Television = 26 years, Cell phone = 14 years.]

  10. FACTORS INFLUENCING LIGHT-INDUCED MORTALITY OF ENTEROCOCCI IN SEDIMENT SUSPENSIONS

    EPA Science Inventory

    Contamination of recreational waters by pathogenic microorganisms occurs through complex, poorly understood interactions involving variable microbial sources, hydrodynamic transport, and microbial fate processes. Fecal indicator bacteria such as enterococci have been used to ass...

  11. OpenMS: a flexible open-source software platform for mass spectrometry data analysis.

    PubMed

    Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver

    2016-08-30

    High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease.
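
    For orientation, a minimal use of the Python API mentioned in the abstract might look like the following; the file name is a placeholder and the exact calls should be checked against the pyOpenMS documentation for the installed version.

```python
import pyopenms as oms

exp = oms.MSExperiment()
oms.MzMLFile().load("sample.mzML", exp)     # "sample.mzML" is a placeholder path

print("spectra:", exp.getNrSpectra())
for spectrum in exp:
    if spectrum.getMSLevel() == 1:          # survey scans only
        mz, intensity = spectrum.get_peaks()
        print(spectrum.getRT(), mz.size)
```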

  12. Identification and Classification of Mass Transport Complexes in Offshore Trinidad/Venezuela and Their Potential Anthropogenic Impact as Tsunamigenic Hazards

    NASA Astrophysics Data System (ADS)

    Moscardelli, L.; Wood, L. J.

    2006-12-01

    Several late Pleistocene-age seafloor destabilization events have been identified in the continental margin of eastern offshore Trinidad, of sufficient scale to produce tsunamigenic forces. This area, situated along the obliquely converging boundary of the Caribbean/South American plates and proximal to the Orinoco Delta, is characterized by catastrophic shelf-margin processes, intrusive-extrusive mobile shales, and active tectonism. A mega-merged, 10,000 km2, 3D seismic survey reveals several mass transport complexes that range in area from 11.3 km2 to 2017 km2. Historical records indicate that this region has experienced submarine landslide-generated tsunamigenic events, including tsunamis that affected Venezuela during the 1700s-1900s. This work concentrates on defining those ancient deep marine mass transport complexes whose emplacement could potentially have triggered tsunamis. Three types of failures are identified: 1) source-attached failures that are fed by shelf edge deltas whose sediment input is controlled by sea-level fluctuations and sedimentation rates, 2) source-detached systems, which occur when upper slope sediments catastrophically fail due to gas hydrate disruptions and/or earthquakes, and 3) locally sourced failures, formed when local instabilities in the sea floor trigger relatively smaller collapses. Such classification of the relationship between slope mass failures and the sourcing regions enables a better understanding of the nature of initiation, length of development history and petrography of such mass transport deposits. Source-detached systems, generated by sudden sediment remobilizations, are more likely to disrupt the overlying water column, causing a rise in tsunamigenic risk. Unlike 2D seismic, 3D seismic enables scientists to calculate more accurate deposit volumes, improve deposit imaging and thus increase the accuracy of physical and computer simulations of mass failure processes.

  13. Bridging the Operational Divide: An Information-Processing Model of Internal Supply Chain Integration

    ERIC Educational Resources Information Center

    Rosado Feger, Ana L.

    2009-01-01

    Supply Chain Management, the coordination of upstream and downstream flows of product, services, finances, and information from a source to a customer, has risen in prominence over the past fifteen years. The delivery of a product to the consumer is a complex process requiring action from several independent entities. An individual firm consists…

  14. Sources of Difficulty in the Processing of Written Language. Report Series 4.3.

    ERIC Educational Resources Information Center

    Chafe, Wallace

    Ease of language processing varies with the nature of the language involved. Ordinary spoken language is the easiest kind to produce and understand, while writing is a relatively new development. On thoughtful inspection, the readability of writing has shown itself to be a complex topic requiring insights from many academic disciplines and…

  15. Effects of high-dose ethanol intoxication and hangover on cognitive flexibility.

    PubMed

    Wolff, Nicole; Gussek, Philipp; Stock, Ann-Kathrin; Beste, Christian

    2018-01-01

    The effects of high-dose ethanol intoxication on cognitive flexibility processes are not well understood, and processes related to hangover after intoxication have remained even more elusive. Similarly, it is unknown to what extent the complexity of cognitive flexibility processes is affected by intoxication and hangover effects. We performed a neurophysiological study applying high density electroencephalography (EEG) recording to analyze event-related potentials (ERPs) and perform source localization in a task switching paradigm which varied the complexity of task switching by means of memory demands. The results show that high-dose ethanol intoxication only affects task switching (i.e. cognitive flexibility processes) when memory processes are required to control task switching mechanisms, suggesting that even high doses of ethanol compromise cognitive processes only when they are highly demanding. The EEG and source localization data show that these effects unfold by modulating response selection processes in the anterior cingulate cortex. Perceptual and attentional selection processes as well as working memory processes were only unspecifically modulated. In all subprocesses examined, there were no differences between the sober and hangover states, thus suggesting a fast recovery of cognitive flexibility after high-dose ethanol intoxication. We assume that the gamma-aminobutyric acid (GABAergic) system accounts for the observed effects, while they can hardly be explained by the dopaminergic system. © 2016 Society for the Study of Addiction.

  16. Rupture Complexities of Fluid Induced Microseismic Events at the Basel EGS Project

    NASA Astrophysics Data System (ADS)

    Folesky, Jonas; Kummerow, Jörn; Shapiro, Serge A.; Häring, Markus; Asanuma, Hiroshi

    2016-04-01

    Microseismic data sets of excellent quality, such as the seismicity recorded in the Basel-1 enhanced geothermal system, Switzerland, in 2006-2007, provide the opportunity to analyse induced seismic events in great detail. It is important to understand to what extent seismological insights on, e.g., source and rupture processes are scale dependent and how they can be transferred to fluid-induced microseismicity. We applied the empirical Green's function (EGF) method in order to reconstruct the relative source time functions of 195 suitable microseismic events from the Basel-1 reservoir. We found 93 solutions with a clear and consistent directivity pattern. The remaining events display either no measurable directivity, are unfavourably oriented or exhibit inconsistent or complex relative source time functions. In this work we focus on selected events of M ≈ 1 which show possible rupture complexities. It is demonstrated that the EGF method makes it possible to resolve complex rupture behaviour even if it is not directly identifiable in the seismograms. We find clear evidence of rupture directivity and multi-phase rupturing in the analysed relative source time functions. The time delays between consecutive subevents lie on the order of 10 ms. Amplitudes of the relative source time functions of the subevents do not always show the same azimuthal dependence, indicating dissimilarity in the rupture directivity of the subevents. Our observations support the assumption that heterogeneity on fault surfaces persists down to small scales (a few tens of meters).
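
    The core deconvolution step behind a relative source time function can be sketched as water-level spectral division of the target record by the EGF record. The synthetic traces, water-level fraction, and two-pulse source below are assumptions used only to illustrate how multi-phase rupturing would appear.

```python
import numpy as np

def egf_deconvolve(target, egf, water_level=0.01):
    """Relative source time function via water-level spectral division."""
    n = len(target)
    T = np.fft.rfft(target, n)
    E = np.fft.rfft(egf, n)
    power = np.abs(E) ** 2
    floor = water_level * power.max()        # water level stabilizes small |E| bins
    return np.fft.irfft(T * np.conj(E) / np.maximum(power, floor), n)

# Toy demonstration: the "target" is the EGF convolved (circularly) with a
# two-pulse source, mimicking the multi-phase rupturing described above.
rng = np.random.default_rng(1)
egf = rng.standard_normal(512)
src = np.zeros(512)
src[10], src[30] = 1.0, 0.6                  # two subevents a few tens of samples apart
target = np.fft.irfft(np.fft.rfft(egf) * np.fft.rfft(src), 512)
rstf = egf_deconvolve(target, egf)           # recovers pulses near samples 10 and 30
```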

  17. Electrical Neuroimaging of Music Processing Reveals Mid-Latency Changes with Level of Musical Expertise

    PubMed Central

    James, Clara E.; Oechslin, Mathias S.; Michel, Christoph M.; De Pretto, Michael

    2017-01-01

    This original research focused on the effect of musical training intensity on cerebral and behavioral processing of complex music using high-density event-related potential (ERP) approaches. Recently we have been able to show progressive changes with training in gray and white matter, and higher order brain functioning using (f)MRI [(functional) Magnetic Resonance Imaging], as well as changes in musical and general cognitive functioning. The current study investigated the same population of non-musicians, amateur pianists and expert pianists using spatio-temporal ERP analysis, by means of microstate analysis, and ERP source imaging. The stimuli consisted of complex musical compositions containing three levels of transgression of musical syntax at closure that participants appraised. ERP waveforms, microstates and underlying brain sources revealed gradual differences according to musical expertise in a 300–500 ms window after the onset of the terminal chords of the pieces. Within this time-window, processing seemed to concern context-based memory updating, indicated by a P3b-like component or microstate for which underlying sources were localized in the right middle temporal gyrus, anterior cingulate and right parahippocampal areas. Given that the 3 expertise groups were carefully matched for demographic factors, these results provide evidence of the progressive impact of training on brain and behavior. PMID:29163017

  18. Electrical Neuroimaging of Music Processing Reveals Mid-Latency Changes with Level of Musical Expertise.

    PubMed

    James, Clara E; Oechslin, Mathias S; Michel, Christoph M; De Pretto, Michael

    2017-01-01

    This original research focused on the effect of musical training intensity on cerebral and behavioral processing of complex music using high-density event-related potential (ERP) approaches. Recently we have been able to show progressive changes with training in gray and white matter, and higher order brain functioning using (f)MRI [(functional) Magnetic Resonance Imaging], as well as changes in musical and general cognitive functioning. The current study investigated the same population of non-musicians, amateur pianists and expert pianists using spatio-temporal ERP analysis, by means of microstate analysis, and ERP source imaging. The stimuli consisted of complex musical compositions containing three levels of transgression of musical syntax at closure that participants appraised. ERP waveforms, microstates and underlying brain sources revealed gradual differences according to musical expertise in a 300-500 ms window after the onset of the terminal chords of the pieces. Within this time-window, processing seemed to concern context-based memory updating, indicated by a P3b-like component or microstate for which underlying sources were localized in the right middle temporal gyrus, anterior cingulate and right parahippocampal areas. Given that the 3 expertise groups were carefully matched for demographic factors, these results provide evidence of the progressive impact of training on brain and behavior.

  19. Fine structure of 25 extragalactic radio sources. [interferometric observations of quasars]

    NASA Technical Reports Server (NTRS)

    Wittels, J. J.; Knight, C. A.; Shapiro, I. I.; Hinteregger, H. F.; Rogers, A. E. E.; Whitney, A. R.; Clark, T. A.; Hutton, L. K.; Marandino, G. E.; Niell, A. E.

    1975-01-01

    Interferometric observations taken at 7.8 GHz (wavelength λ ≈ 3.8 cm) with five pairings of antennae of 25 extragalactic radio sources between April 1972 and May 1973 are reported. These sources exhibit a broad variety of fine structure from very simple to complex. The total flux and the correlated flux of some of the sources underwent large changes in a few weeks, while the structure and total power of others remained constant during the entire period of observation. Some aspects of the data processing and a discussion of errors are presented. Numerous figures are provided and explained. The individual radio sources are described in detail.

  20. Geophysical study of the San Juan Mountains batholith complex, southwestern Colorado

    USGS Publications Warehouse

    Drenth, Benjamin J.; Keller, G. Randy; Thompson, Ren A.

    2012-01-01

    One of the largest and most pronounced gravity lows over North America is over the rugged San Juan Mountains of southwestern Colorado (USA). The mountain range is coincident with the San Juan volcanic field (SJVF), the largest erosional remnant of a widespread mid-Cenozoic volcanic field that spanned much of the southern Rocky Mountains. A buried, low-density silicic batholith complex related to the volcanic field has been the accepted interpretation of the source of the gravity low since the 1970s. However, this interpretation was based on gravity data processed with standard techniques that are problematic in the SJVF region. The combination of high-relief topography, topography with low densities, and the use of a common reduction density of 2670 kg/m3 produces spurious large-amplitude gravity lows that may distort the geophysical signature of deeper features such as a batholith complex. We applied an unconventional processing procedure that uses geologically appropriate densities for the uppermost crust and digital topography to mostly remove the effect of the low-density units that underlie the topography associated with the SJVF. This approach resulted in a gravity map that provides an improved representation of deeper sources, including reducing the amplitude of the anomaly attributed to a batholith complex. We also reinterpreted vintage seismic refraction data that indicate the presence of low-velocity zones under the SJVF. Assuming that the source of the gravity low on the improved gravity anomaly map is the same as the source of the low seismic velocities, integrated modeling corroborates the interpretation of a batholith complex and then defines the dimensions and overall density contrast of the complex. Models show that the thickness of the batholith complex varies laterally to a significant degree, with the greatest thickness (∼20 km) under the western SJVF, and lesser thicknesses (<10 km) under the eastern SJVF. The largest group of nested calderas on the surface of the SJVF, the central caldera cluster, is not correlated with the thickest part of the batholith complex. This result is consistent with petrologic interpretations from recent studies that the batholith complex continued to be modified after cessation of volcanism and therefore is not necessarily representative of synvolcanic magma chambers. The total volume of the batholith complex is estimated to be 82,000–130,000 km3. The formation of such a large felsic batholith complex would inevitably involve production of a considerably greater volume of residuum, which could be present in the lower crust or uppermost mantle. The interpreted vertically averaged density contrast (–60 to –110 kg/m3), density (2590–2640 kg/m3), and seismic expression of the batholith complex are consistent with results of geophysical studies of other large batholiths in the western United States.
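
    The size of the reduction-density effect can be gauged from the simple Bouguer slab formula, 2πGρh. The densities and relief used below are round-number assumptions, not values from the study.

```python
from math import pi

G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1e5                             # 1 m/s^2 = 1e5 mGal

def bouguer_slab(rho, h):
    """Attraction (mGal) of an infinite slab of density rho (kg/m^3), thickness h (m)."""
    return 2 * pi * G * rho * h * MGAL

h = 1500.0                             # relief above the reduction datum (assumption)
standard = bouguer_slab(2670.0, h)     # conventional reduction density
adjusted = bouguer_slab(2350.0, h)     # geologically appropriate low density (assumption)
print(f"spurious anomaly from density mismatch: {standard - adjusted:.1f} mGal")
```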

  1. Integrated Information Increases with Fitness in the Evolution of Animats

    PubMed Central

    Edlund, Jeffrey A.; Chaumont, Nicolas; Hintze, Arend; Koch, Christof; Tononi, Giulio; Adami, Christoph

    2011-01-01

    One of the hallmarks of biological organisms is their ability to integrate disparate information sources to optimize their behavior in complex environments. How this capability can be quantified and related to the functional complexity of an organism remains a challenging problem, in particular since organismal functional complexity is not well-defined. We present here several candidate measures that quantify information and integration, and study their dependence on fitness as an artificial agent (“animat”) evolves over thousands of generations to solve a navigation task in a simple, simulated environment. We compare the ability of these measures to predict high fitness with more conventional information-theoretic processing measures. As the animat adapts by increasing its “fit” to the world, information integration and processing increase commensurately along the evolutionary line of descent. We suggest that the correlation of fitness with information integration and with processing measures implies that high fitness requires both information processing as well as integration, but that information integration may be a better measure when the task requires memory. A correlation of measures of information integration (but also information processing) and fitness strongly suggests that these measures reflect the functional complexity of the animat, and that such measures can be used to quantify functional complexity even in the absence of fitness data. PMID:22028639

  2. Studying light-harvesting models with superconducting circuits.

    PubMed

    Potočnik, Anton; Bargerbos, Arno; Schröder, Florian A Y N; Khan, Saeed A; Collodo, Michele C; Gasparinetti, Simone; Salathé, Yves; Creatore, Celestino; Eichler, Christopher; Türeci, Hakan E; Chin, Alex W; Wallraff, Andreas

    2018-03-02

    The process of photosynthesis, the main source of energy in the living world, converts sunlight into chemical energy. The high efficiency of this process is believed to be enabled by an interplay between the quantum nature of molecular structures in photosynthetic complexes and their interaction with the environment. Investigating these effects in biological samples is challenging due to their complex and disordered structure. Here we experimentally demonstrate a technique for studying photosynthetic models based on superconducting quantum circuits, which complements existing experimental, theoretical, and computational approaches. We demonstrate a high degree of freedom in design and experimental control of our approach based on a simplified three-site model of a pigment protein complex with realistic parameters scaled down in energy by a factor of 10^5. We show that the excitation transport between quantum-coherent sites disordered in energy can be enabled through the interaction with environmental noise. We also show that the efficiency of the process is maximized for structured noise resembling intramolecular phononic environments found in photosynthetic complexes.

  3. Acoustic simulation in architecture with parallel algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiaohong; Zhang, Xinrong; Li, Dan

    2004-03-01

    To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers are then computed for each frequency band using multiple processes and combined into the full frequency response. Numerical experiments show that parallel computation can improve the efficiency of acoustic simulation for complex scenes.
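
    The parallelization strategy described, per-frequency-band impulse responses computed in separate processes and then combined, can be sketched with Python's multiprocessing; the per-band computation here is a trivial placeholder rather than a radiosity solver, and the band list and sample rate are assumptions.

```python
import numpy as np
from multiprocessing import Pool

FS = 8000
BANDS = [125, 250, 500, 1000, 2000]          # octave-band centre frequencies (Hz)

def band_impulse_response(fc):
    """Placeholder per-band impulse response: a decaying band-limited burst.
    In the paper, the radiosity solution for one frequency band would go here."""
    t = np.arange(0, 1.0, 1 / FS)
    return np.exp(-3 * t) * np.sin(2 * np.pi * fc * t)

if __name__ == "__main__":
    with Pool() as pool:
        band_irs = pool.map(band_impulse_response, BANDS)   # one task per band
    full_ir = np.sum(band_irs, axis=0)                      # combine into the full response
    print(full_ir.shape)
```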

  4. Comprehensive chemical characterization of industrial PM2.5 from steel industry activities

    NASA Astrophysics Data System (ADS)

    Sylvestre, Alexandre; Mizzi, Aurélie; Mathiot, Sébastien; Masson, Fanny; Jaffrezo, Jean L.; Dron, Julien; Mesbah, Boualem; Wortham, Henri; Marchand, Nicolas

    2017-03-01

    Industrial sources are among the least documented PM (Particulate Matter) sources in terms of chemical composition, which limits our understanding of their effective impact on ambient PM concentrations. We report 4 chemical emission profiles of PM2.5 for multiple activities located in a vast metallurgical complex. Emission profiles were calculated as the difference of species concentrations between an upwind and a downwind site, normalized by the absolute PM2.5 enrichment between both sites. We characterized the PM2.5 emission profiles of the industrial activities related to the cast iron (complex 1) and the iron ore conversion processes (complex 2), as well as 2 storage areas: a blast furnace slag area (complex 3) and an ore terminal (complex 4). PM2.5 major fractions (Organic Carbon (OC) and Elemental Carbon (EC), major ions), organic markers as well as metals/trace elements are reported for the 4 industrial complexes. Among the trace elements, iron is the most emitted for complex 1 (146.0 mg g-1 of PM2.5), complex 2 (70.07 mg g-1) and complex 3 (124.4 mg g-1), followed by Al, Mn and Zn. A strong emission of Polycyclic Aromatic Hydrocarbons (PAH), representing 1.3% of the Organic Matter (OM), is observed for the iron ore transformation complex (complex 2), which merges the activities of coke and iron sinter production and the blast furnace processes. In addition to unsubstituted PAHs, sulfur-containing PAHs (SPAHs) are also significantly emitted (between 0.011 and 0.068 mg g-1) by complex 2 and could become very useful organic markers of steel industry activities. For complexes 1 and 2 (cast iron and iron ore converters), a strong fraction of sulfate (ranging from 0.284 to 0.336 g g-1), only partially neutralized by ammonium, is observed, indicating that sulfates, if not directly emitted by the industrial activity, are formed very quickly in the plume. Emissions from complex 4 (ore terminal) are characterized by a high contribution of Al (125.7 mg g-1 of PM2.5) but also, to a lesser extent, of Fe, Mn, Ti and Zn. We also highlight high contributions of calcium, ranging from 0.123 to 0.558 g g-1, for all of the industrial complexes under study. Since calcium is also widely used as a proxy for dust contributions in source apportionment studies, our results suggest that this assumption should be reexamined in environments impacted by industrial emissions.

  5. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). The need for BSS has become prevalent in several fields, including array processing, communications, medical and speech signal processing, wireless communication, audio and acoustics, and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation on mixed signals in software, and of their implementation in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA were examined and compared. The best algorithm, requiring the least complexity and fewest resources while effectively separating mixed sources, was the EASI algorithm. The EASI ICA was implemented in hardware on an FPGA to analyze its performance in real time.
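
    For reference, the serial EASI update of Cardoso and Laheld, the rule the study found most hardware-friendly, can be written in a few lines of floating-point code; the step size, tanh nonlinearity, and toy mixture below are assumptions, and the fixed-point arithmetic needed on an FPGA is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_samples = 2, 20000
S = rng.laplace(size=(n_src, n_samples))        # independent super-Gaussian toy sources
A = rng.standard_normal((n_src, n_src))         # unknown mixing matrix
X = A @ S

B = np.eye(n_src)                               # separating-matrix estimate
lam = 1e-3                                      # step size (assumption)
g = np.tanh                                     # nonlinearity (assumption)

for t in range(n_samples):
    x = X[:, t]
    y = B @ x
    # Serial EASI update: relative-gradient rule, equivariant in the mixing matrix.
    G = np.outer(y, y) - np.eye(n_src) + np.outer(g(y), y) - np.outer(y, g(y))
    B -= lam * (G @ B)

Y = B @ X    # separated signals, up to permutation and scaling (diagnostics omitted)
```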

  6. Exploring the effects of photon correlations from thermal sources on bacterial photosynthesis

    NASA Astrophysics Data System (ADS)

    Manrique, Pedro D.; Caycedo-Soler, Felipe; De Mendoza, Adriana; Rodríguez, Ferney; Quiroga, Luis; Johnson, Neil F.

    Thermal light sources can produce photons with strong spatial correlations. We study the role that these correlations might potentially play in bacterial photosynthesis. Our findings show a relationship between the transversal distance between consecutive absorptions and the efficiency of the photosynthetic process. Furthermore, membranes in which the clustering of core complexes (so-called RC-LH1) is high display a range where the organism profits maximally from the spatial correlation of the incoming light. By contrast, no maximum is found for membranes with low core-core clustering. We employ a detailed membrane model with state-of-the-art empirical inputs. Our results suggest that the organization of the membrane's antenna complexes may be well-suited to the spatial correlations present in a natural light source. Future experiments will be needed to test this prediction.

  7. The role of the insula in intuitive expert bug detection in computer code: an fMRI study.

    PubMed

    Castelhano, Joao; Duarte, Isabel C; Ferreira, Carlos; Duraes, Joao; Madeira, Henrique; Castelo-Branco, Miguel

    2018-05-09

    Software programming is a complex and relatively recent human activity, involving the integration of mathematical, recursive thinking and language processing. The neural correlates of this recent human activity are still poorly understood. Error monitoring during this type of task, requiring the integration of language, logical symbol manipulation and other mathematical skills, is particularly challenging. We therefore aimed to investigate the neural correlates of decision-making during source code understanding and mental manipulation in professional participants with high expertise. The present fMRI study directly addressed error monitoring during source code comprehension, expert bug detection and decision-making. We used C code, which triggers the same sort of processing irrespective of the native language of the programmer. We discovered a distinct role for the insula in bug monitoring and detection and a novel connectivity pattern that goes beyond the expected activation pattern evoked by source code understanding in semantic language and mathematical processing regions. Importantly, insula activity levels were critically related to the quality of error detection, involving intuition, as signalled by reported initial bug suspicion, prior to final decision and bug detection. Activity in this salience network (SN) region evoked by bug suspicion was predictive of bug detection precision, suggesting that it encodes the quality of the behavioral evidence. Connectivity analysis provided evidence for top-down circuit "reutilization" stemming from anterior cingulate cortex (BA32), a core region in the SN that evolved for complex error monitoring such as required for this type of recent human activity. Cingulate (BA32) and anterolateral (BA10) frontal regions causally modulated decision processes in the insula, which in turn was related to activity of math processing regions in early parietal cortex. In other words, earlier brain regions used during evolution for other functions seem to be reutilized in a top-down manner for a new complex function, in an analogous manner as described for other cultural creations such as reading and literacy.

  8. Source complexity of the 1987 Whittier Narrows, California, earthquake from the inversion of strong motion records

    USGS Publications Warehouse

    Hartzell, S.; Iida, M.

    1990-01-01

    Strong motion records for the Whittier Narrows earthquake are inverted to obtain the history of slip. Both constant rupture velocity models and variable rupture velocity models are considered. The results show a complex rupture process within a relatively small source volume, with at least four separate concentrations of slip. Two sources are associated with the hypocenter, the larger having a slip of 55-90 cm, depending on the rupture model. These sources have a radius of approximately 2-3 km and are ringed by a region of reduced slip. The aftershocks fall within this low slip annulus. Other sources with slips from 40 to 70 cm each ring the central source region and the aftershock pattern. All the sources are predominantly thrust, although some minor right-lateral strike-slip motion is seen. The overall dimensions of the Whittier earthquake from the strong motion inversions is 10 km long (along the strike) and 6 km wide (down the dip). The preferred dip is 30?? and the preferred average rupture velocity is 2.5 km/s. Moment estimates range from 7.4 to 10.0 ?? 1024 dyn cm, depending on the rupture model. -Authors

  9. Noble gas composition of Indian carbonatites (Amba Dongar, Siriwasan): Implications on mantle source compositions and late-stage hydrothermal processes

    NASA Astrophysics Data System (ADS)

    Hopp, Jens; Viladkar, Shrinivas G.

    2018-06-01

    Within a stepwise crushing study we determined the noble gas composition of several calcite separates, one aegirine and one pyrochlore-aegirine separate of the carbonatite ring dyke complex of Amba Dongar and the carbonatite sill complex of Siriwasan, India. Both carbonatites are related to the waning stages of volcanic activity of the Deccan Igneous Province ca. 65 Ma ago. Major observations are a clear radiogenic 4He* and nucleogenic 21Ne* imprint related to in situ production from U and Th in mineral impurities, most likely minute apatite grains, or late incorporation of crustal fluids. However, in the first crushing steps of most calcites from Amba Dongar a well-resolvable mantle neon signal is observed, with the lowest air-corrected mantle 21Ne/22Ne-compositions equivalent to the Réunion hotspot mantle source. In the case of the aegirine separate from Siriwasan we found a neon composition similar to the Loihi hotspot mantle source. This transition from a mantle plume signal in the first crushing step to a more nucleogenic signature with progressive crushing indicates the presence of an external (crustal) or in situ nucleogenic component unrelated to, and superposed on, the initial mantle neon component, whose composition is best approximated by the results of the first crushing step(s). This contradicts previous models of a lithospheric mantle source of the carbonatitic magmas from Amba Dongar containing recycled crustal components, which are based on nucleogenic neon compositions. Instead, the mantle source of both investigated carbonatite complexes is related to a primitive mantle plume source that we tentatively ascribe to the postulated Deccan mantle plume. If, as is commonly suggested, the present location of the Deccan mantle plume source is below Réunion Island, the currently observed more nucleogenic neon isotopic composition of the Réunion hotspot might be obliterated by significant upper mantle contributions. In addition, compared with other carbonatite complexes worldwide, a rather significant contribution of atmospheric noble gases is observed. This is documented in cut-off 20Ne/22Ne-ratios of ca. 10.2 (Amba Dongar) and 10.45 (Siriwasan) and cut-off 40Ar/36Ar-ratios of about 1500. This atmospheric component had been added at shallow levels during the emplacement process or later during hydrothermal alteration. However, understanding the late-stage interaction between atmospheric gases and magmatic mantle fluids still requires further investigation.

  10. Sources of Individual Differences in L2 Narrative Production: The Contribution of Input, Processing, and Output Anxiety

    ERIC Educational Resources Information Center

    Trebits, Anna

    2016-01-01

    The aim of this study was to investigate the effects of cognitive task complexity and individual differences in input, processing, and output anxiety (IPOA) on L2 narrative production. The participants were enrolled in a bilingual secondary educational program. They performed two narrative tasks in speech and writing. The participants' level of…

  11. Formate as a CO surrogate for cascade processes: Rh-catalyzed cooperative decarbonylation and asymmetric Pauson-Khand-type cyclization reactions.

    PubMed

    Lee, Hang Wai; Chan, Albert S C; Kwong, Fuk Yee

    2007-07-07

    A rhodium-(S)-xyl-BINAP complex-catalyzed tandem formate decarbonylation and [2 + 2 + 1] carbonylative cyclization is described; this cooperative process utilizes formate as a condensed CO source, and the newly developed cascade protocol can be extended to its enantioselective version, providing up to 94% ee of the cyclopentenone adducts.

  12. Identifying the starting point of a spreading process in complex networks.

    PubMed

    Comin, Cesar Henrique; Costa, Luciano da Fontoura

    2011-11-01

    When dealing with the dissemination of epidemics, one important question that can be asked is the location where the contamination began. In this paper, we analyze three spreading schemes and propose and validate an effective methodology for the identification of the source nodes. The method is based on the calculation of the centrality of the nodes on the sampled network, expressed here by degree, betweenness, closeness, and eigenvector centrality. We show that the source node tends to have the highest measurement values. The potential of the methodology is illustrated with respect to three theoretical complex network models as well as a real-world network, the email network of the University Rovira i Virgili.
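
    A minimal illustration of the approach, simulate a spreading process, keep only the contaminated (sampled) subnetwork, and rank its nodes by centrality, can be written with networkx. The graph model, infection probability, number of steps, and the choice of closeness centrality are assumptions made for brevity.

```python
import random
import networkx as nx

random.seed(3)
G = nx.erdos_renyi_graph(200, 0.04, seed=3)      # theoretical network model (assumption)
source = 7

# Simple SI spreading for a few steps starting from `source`.
infected = {source}
for _ in range(4):
    new = set()
    for u in infected:
        for v in G.neighbors(u):
            if v not in infected and random.random() < 0.5:
                new.add(v)
    infected |= new

sampled = G.subgraph(infected)                    # the sampled (contaminated) subnetwork
closeness = nx.closeness_centrality(sampled)
ranking = sorted(closeness, key=closeness.get, reverse=True)
print("true source:", source, "| rank by closeness:", ranking.index(source) + 1)
```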

  13. NASCAP user's manual, 1978

    NASA Technical Reports Server (NTRS)

    Cassidy, J. J., III

    1978-01-01

    NASCAP simulates the charging process for a complex object in either tenuous plasma (geosynchronous orbit) or ground test (electron gun source) environment. Program control words, the structure of user input files, and various user options available are described in this computer programmer's user manual.

  14. Ambient Tropospheric Particles

    EPA Science Inventory

    Atmospheric particulate matter (PM) is a complex mixture of solid and liquid particles suspended in ambient air (also known as the atmospheric aerosol). Ambient PM arises from a wide-range of sources and/or processes, and consists of particles of different shapes, sizes, and com...

  15. Royal Society, Discussion on New Coal Chemistry, London, England, May 21, 22, 1980, Proceedings

    NASA Astrophysics Data System (ADS)

    1981-03-01

    A discussion of new coal chemistry is presented. The chemical and physical structure of coal is examined in the first section, including structural studies of coal extracts, metal and metal complexes in coal and coal microporosity. The second section presents new advances in applied coal technology. The development of liquid fuels and chemicals from coal is given especial emphasis, with papers on the Sasol Synthol process, the Shell-Koppers gasification process, liquefaction and gasification in Germany, the Solvent Refined Coal process, the Exxon Donor Solvent liquefaction process and the Mobil Methanol-to-Gasoline process. Finally, some developments that will be part of the future of coal chemistry in the year 2000 are examined in the third section, including coal-based chemical complexes and the use of coal as an alternative source to oil for chemical feedstocks.

  16. Balancing glycolysis and mitochondrial OXPHOS: lessons from the hematopoietic system and exercising muscles.

    PubMed

    Haran, Michal; Gross, Atan

    2014-11-01

    Living organisms require a constant supply of safe and efficient energy to maintain homeostasis and to allow locomotion of single cells, tissues and the entire organism. The source of energy can be glycolysis, a simple series of enzymatic reactions in the cytosol, or a much more complex process in the mitochondria, oxidative phosphorylation (OXPHOS). In this review we will examine how does the organism balance its source of energy in two seemingly distinct and unrelated processes: hematopoiesis and exercise. In both processes we will show the importance of the metabolic program and its regulation. We will also discuss the importance of oxygen availability not as a sole determinant, but in the context of the nutrient and cellular state, and address the emerging role of lactate as an energy source and signaling molecule in health and disease. Copyright © 2014 Elsevier B.V. and Mitochondria Research Society. All rights reserved.

  17. Chabazite and dolomite formation in a dolocrete profile: An example of a complex alkaline paragenesis in Lanzarote, Canary Islands

    NASA Astrophysics Data System (ADS)

    Alonso-Zarza, Ana M.; Bustamante, Leticia; Huerta, Pedro; Rodríguez-Berriguete, Álvaro; Huertas, María José

    2016-05-01

    This paper studies the weathering and soil formation processes operating on detrital sediments containing alkaline volcanic rock fragments of the Mirador del Río dolocrete profile. The profile consists of a lower horizon of remobilised weathered basalts, an intermediate red sandy mudstone horizon with irregular carbonate layers, and a topmost horizon of amalgamated carbonate layers with root traces. Formation occurred in arid to semiarid climates, giving rise to a complex mineralogical association, including Mg-carbonates and chabazite, rarely described in calcrete/dolocrete profiles. Initial vadose weathering processes occurred in the basalts and in directly overlying detrital sediments, producing (Stage 1) red smectites and dolomicrite. Dominant phreatic (Stage 2) conditions allowed precipitation of coarse-zoned dolomite and chabazite filling porosities. In Stages 3 and 4, mostly pedogenic, biogenic processes played an important role in dolomite and calcite accumulation in the profile. Overall evolution of the profile and its mineralogical association involved initial processes dominated by alteration of the host rock, providing silica- and Mg-rich alkaline waters suitable for chabazite and dolomite formation, without a previous carbonate phase. Dolomite formed both abiogenically and biogenically, but without a previous carbonate precursor and in the absence of evaporites. Dominance of calcite towards the profile top is the result of a decrease in Mg/Ca in the interstitial meteoric waters due to decreased supply of Mg from weathering and increased supply of Ca in aeolian dust. The meteoric origin of the water is confirmed by C and O isotope values, which also indicate a lack of deep-sourced CO2. The dolocrete studied and its complex mineral association reveal the complex interactions that occur at the surface during weathering and pedogenesis of basalt-sourced rocks.

  18. The evolution of the storm-time ring current in response to different characteristics of the plasma source

    NASA Astrophysics Data System (ADS)

    Lemon, C.; Chen, M.; O'Brien, T. P.; Toffoletto, F.; Sazykin, S.; Wolf, R.; Kumar, V.

    2006-12-01

    We present simulation results of the Rice Convection Model-Equilibrium (RCM-E) that test and compare the effect on the storm time ring current of varying the plasma sheet source population characteristics at 6.6 Re during magnetic storms. Previous work has shown that direct injection of ionospheric plasma into the ring current is not a significant source of ring current plasma, suggesting that the plasma sheet is the only source. However, storm time processes in the plasma sheet and inner magnetosphere are very complex, due in large part to the feedback interactions between the plasma distribution, magnetic field, and electric field. We are particularly interested in understanding the role of the plasma sheet entropy parameter (PV^{5/3}, where V = ∫ ds/B) in determining the strength and distribution of the ring current in both the main and recovery phases of a storm. Plasma temperature and density can be measured from satellites in geosynchronous orbit, and these are often used to provide boundary conditions for ring current simulations. However, magnetic field measurements in this region are less commonly available, and there is a relatively poor understanding of the interplay between the plasma and the magnetic field during magnetic storms. The entropy parameter is a quantity that incorporates both the plasma and the magnetic field, and understanding its role in ring current injection and recovery is essential to describing the processes that are occurring during magnetic storms. The RCM-E includes the physics of feedback between the plasma and both the electric and magnetic fields, and is therefore a valuable tool for understanding these complex storm-time processes. By contrasting the effects of different plasma boundary conditions at geosynchronous orbit, we shed light on the physical processes involved in ring current injection and recovery.

  19. PDZ-containing proteins: alternative splicing as a source of functional diversity.

    PubMed

    Sierralta, Jimena; Mendoza, Carolina

    2004-12-01

    Scaffold proteins allow specific protein complexes to be assembled in particular regions of the cell, at which they organize subcellular structures and signal transduction complexes. This characteristic is especially important for neurons, which are highly polarized cells. Among the domains contained by scaffold proteins, the PSD-95, Discs-large, ZO-1 (PDZ) domains are of particular relevance in signal transduction processes and maintenance of neuronal and epithelial polarity. These domains specialize in binding the carboxyl termini of proteins, allowing membrane proteins to be localized through anchoring to the cytoskeleton mediated by PDZ-containing scaffold proteins. In vivo studies carried out in Drosophila have shown that the role of many scaffold proteins is not limited to a single process; thus, in many cases the same genes are expressed in different tissues and participate in apparently very diverse processes. In addition to the differential expression of interactors of scaffold proteins, the expression of variants of these molecular scaffolds as the result of the alternative processing of the genes that encode them is proving to be a very important source of variability and complexity on a main theme. Alternative splicing is well documented in the nervous system, where specific isoforms play roles in neurotransmission, ion channel function and neuronal cell recognition, and are developmentally regulated, making it a major mechanism of functional diversity. Here we review the current state of knowledge about the diversity and the known function of PDZ-containing proteins in Drosophila, with emphasis on the role played by alternatively processed forms in the diversity of functions attributed to this family of proteins.

  20. Consideration of Treatment Performance Assessment Metrics for a TCE Source Area Bioremediation (SABRe project)

    NASA Astrophysics Data System (ADS)

    Cai, Z.; Wilson, R. D.

    2009-05-01

    Techniques for optimizing the removal of NAPL mass in source zones have advanced at a more rapid rate than strategies to assess treatment performance. Informed selection of remediation approaches would be easier if measurements of performance were more directly transferable. We developed a number of methods based on data generated from multilevel sampler (MLS) transects to assess the effectiveness of a bioaugmentation/biostimulation trial in a TCE source residing in a terrace gravel aquifer in the East Midlands, UK. In this spatially complex aquifer, treatment inferred from long screen monitoring well data was not as reliable as that from consideration of mass flux changes across transects installed in and downgradient of the source. Falling head tests were conducted in the MLS ports to generate the necessary hydraulic conductivity (K) data. Combining K with concentration provides a mass flux map that allows calculation of mass turnover and an assessment of where in the complex geology the greatest turnover occurred. Five snapshots over a 600-day period indicate a marked reduction in TCE flux, suggesting a significant reduction in DNAPL mass beyond that expected from natural processes. However, persistence of daughter products suggested that complete dechlorination did not occur. The MLS fence data also revealed that delivery of both carbon source and pH buffer was not uniform across the test zone. This may have led to the generation of niches of iron(III) and sulphate reduction as well as methanogenesis, which impacted dechlorination processes. In the absence of this spatial data, it is difficult to reconcile the apparent treatment indicated by monitoring well data with ongoing processes.
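
    A minimal sketch of the mass-flux calculation described above: under the common assumption that each multilevel-sampler port represents a small portion of the transect area, the Darcy flux (port K times the hydraulic gradient) is multiplied by concentration and the port contributions are summed to a mass discharge. Variable names and the uniform-gradient simplification are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def transect_mass_discharge(K, conc, areas, gradient):
        """Contaminant mass discharge across an MLS transect.
        K: hydraulic conductivity per port [m/day] (e.g. from falling-head tests)
        conc: TCE concentration per port [g/m^3]
        areas: transect area represented by each port [m^2]
        gradient: hydraulic gradient across the transect [-]"""
        q = np.asarray(K) * gradient                 # Darcy flux per port [m/day]
        J = q * np.asarray(conc)                     # mass flux per port [g/m^2/day]
        return float(np.sum(J * np.asarray(areas)))  # mass discharge [g/day]
    ```

    Comparing this quantity between snapshots and between transects in and downgradient of the source gives the mass-turnover estimates referred to above.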

  1. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430

  2. GRASP/Ada 95: Reverse Engineering Tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1996-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped an algorithmic level graphical representation for Ada software, the Control Structure Diagram (CSD), and a new visualization for a fine-grained complexity metric called the Complexity Profile Graph (CPG). By synchronizing the CSD and the CPG, the CSD view of control structure, nesting, and source code is directly linked to the corresponding visualization of statement level complexity in the CPG. GRASP has been integrated with GNAT, the GNU Ada 95 Translator, to provide a comprehensive graphical user interface and development environment for Ada 95. The user may view, edit, print, and compile source code as a CSD with no discernible addition to storage or computational overhead. The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis has been on the automatic generation of the CSD from Ada 95 source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. The current update has focused on the design and implementation of a new Motif compliant user interface, and a new CSD generator consisting of a tagger and renderer. The Complexity Profile Graph (CPG) is based on a set of functions that describes the context, content, and the scaling for complexity on a statement by statement basis. When combined graphically, the result is a composite profile of complexity for the program unit. Ongoing research includes the development and refinement of the associated functions, and the development of the CPG generator prototype. The current Version 5.0 prototype provides the capability for the user to generate CSDs and CPGs from Ada 95 source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application. This report provides an overview of the GRASP/Ada project with an emphasis on the current update.

  3. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources using mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has multiplicative complexity equal to zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
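
    To make the operation count concrete, the sketch below (illustrative names, not the authors' implementation) accumulates a hologram for a binary object by summing precomputed point-source kernels, assumed to be available from a symmetry-exploiting lookup: summing k kernels of length N requires (k - 1) × N scalar additions and no multiplications.

    ```python
    import numpy as np

    def binary_hologram(kernels, nonzero_pixels):
        """Hologram of a binary object as a sum of precomputed point-source kernels.
        kernels[p]: complex field (length N) radiated by a unit source at pixel p.
        Returns the accumulated field and the scalar-addition count (k - 1) * N."""
        k = len(nonzero_pixels)
        field = kernels[nonzero_pixels[0]].copy()
        for p in nonzero_pixels[1:]:
            field += kernels[p]          # one array addition = N scalar additions
        return field, (k - 1) * field.size
    ```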

  4. Acoustic reciprocity: An extension to spherical harmonics domain.

    PubMed

    Samarasinghe, Prasanga; Abhayapala, Thushara D; Kellermann, Walter

    2017-10-01

    Acoustic reciprocity is a fundamental property of acoustic wavefields that is commonly used to simplify the measurement process of many practical applications. Traditionally, the reciprocity theorem is defined between a monopole point source and a point receiver. Intuitively, it must apply to more complex transducers than monopoles. In this paper, the authors formulate the acoustic reciprocity theory in the spherical harmonics domain for directional sources and directional receivers with higher order directivity patterns.

  5. A West Virginia case study: does erosion differ between streambanks clustered by the bank assessment of nonpoint source consequences of sediment (BANCS) model parameters?

    Treesearch

    Abby L. McQueen; Nicolas P. Zegre; Danny L. Welsch

    2013-01-01

    The integration of factors and processes responsible for streambank erosion is complex. To explore the influence of physical variables on streambank erosion, parameters for the bank assessment of nonpoint source consequences of sediment (BANCS) model were collected on a 1-km reach of Horseshoe Run in Tucker County, West Virginia. Cluster analysis was used to establish...

  6. Scale-Independent Relational Query Processing

    DTIC Science & Technology

    2013-10-04

    source options are also available, including PostgreSQL, MySQL, and SQLite. These modern relational databases are generally very complex software systems...and Their Application to Data Stream Management. IGI Global, 2010. [68] George Reese. Database Programming with JDBC and Java, Second Edition. Ed. by

  7. Analysis of weather patterns associated with air quality degradation and potential health impacts

    EPA Science Inventory

    Emissions from anthropogenic and natural sources into the atmosphere are determined in large measure by prevailing weather conditions through complex physical, dynamical and chemical processes. Air pollution episodes are characterized by degradation in air quality as reflected by...

  8. Visualizing medium and biodistribution in complex cell culture bioreactors using in vivo imaging.

    PubMed

    Ratcliffe, E; Thomas, R J; Stacey, A J

    2014-01-01

    There is a dearth of technology and methods to aid process characterization, control and scale-up of complex culture platforms that provide niche micro-environments for some stem cell-based products. We have demonstrated a novel use of 3D in vivo imaging systems to visualize medium flow and cell distribution within a complex culture platform (hollow fiber bioreactor) to aid characterization of potential spatial heterogeneity and identify potential routes of bioreactor failure or sources of variability. This can then aid process characterization and control of such systems with a view to scale-up. Two potential sources of variation were observed with multiple bioreactors repeatedly imaged using two different imaging systems: shortcutting of medium between adjacent inlet and outlet ports with the potential to create medium gradients within the bioreactor, and localization of bioluminescent murine 4T1-luc2 cells upon inoculation with the potential to create variable seeding densities at different points within the cell growth chamber. The ability of the imaging technique to identify these key operational bioreactor characteristics demonstrates an emerging technique in troubleshooting and engineering optimization of bioreactor performance. © 2013 American Institute of Chemical Engineers.

  9. Research and Analysis on the Localization of a 3-D Single Source in Lossy Medium Using Uniform Circular Array

    PubMed Central

    Xue, Bing; Qu, Xiaodong; Fang, Guangyou; Ji, Yicai

    2017-01-01

    In this paper, the methods and analysis for estimating the location of a three-dimensional (3-D) single source buried in a lossy medium are presented using a uniform circular array (UCA). The mathematical model of the signal in the lossy medium is proposed. Using information in the covariance matrix obtained from the sensors’ outputs, equations for the source location (azimuth angle, elevation angle, and range) are derived. Then, the phase and amplitude of the covariance matrix are used to perform source localization in the lossy medium. By analyzing the characteristics of the proposed methods and the multiple signal classification (MUSIC) method, the computational complexity and the valid scope of these methods are given. From the results, whether the loss is known or not, the most suitable method can be chosen for the problem at hand (localization in a lossless or a lossy medium). PMID:28574467
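
    The covariance-matrix step common to the methods above can be sketched as follows. For brevity, this toy version builds the sample covariance from array snapshots and evaluates a conventional far-field, in-plane MUSIC pseudo-spectrum for a lossless medium; it is a simplification of the lossy, near-field formulation in the paper, and all names are illustrative.

    ```python
    import numpy as np

    def sample_covariance(X):
        """Sample covariance of UCA outputs. X: (n_sensors, n_snapshots), complex."""
        return X @ X.conj().T / X.shape[1]

    def music_azimuth_spectrum(R, radius, wavelength, n_sources, n_az=360):
        """Far-field, in-plane MUSIC pseudo-spectrum for a uniform circular array."""
        M = R.shape[0]
        gamma = 2 * np.pi * np.arange(M) / M       # sensor angular positions
        _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
        En = vecs[:, : M - n_sources]              # noise subspace
        k = 2 * np.pi / wavelength
        az = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
        spec = np.empty(n_az)
        for i, phi in enumerate(az):
            a = np.exp(1j * k * radius * np.cos(phi - gamma))   # steering vector
            spec[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
        return az, spec
    ```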

  10. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
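
    The post-processing idea can be illustrated with a small sketch: per-history tallies are recorded once, together with the source coordinates under which they were generated, and a new source distribution is then evaluated by reweighting each history with the ratio of the new to the original source probability density. The energy-only dependence and the function names are assumptions made for illustration.

    ```python
    import numpy as np

    def reweighted_tally(tallies, source_energies, old_pdf, new_pdf):
        """Estimate the mean tally for a new source spectrum from recorded histories.
        tallies: per-history tally contributions from a single Monte Carlo run
        source_energies: source energy sampled for each recorded history
        old_pdf / new_pdf: callables giving the source probability density at an energy."""
        E = np.asarray(source_energies)
        w = new_pdf(E) / old_pdf(E)                # importance ratio per history
        return float(np.mean(w * np.asarray(tallies)))
    ```

    Because only the weights change, sweeping many candidate source spectra costs seconds rather than repeating full transport simulations, which is the speed-up described above.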

  11. A Tiered Approach to Evaluating Salinity Sources in Water at Oil and Gas Production Sites.

    PubMed

    Paquette, Shawn M; Molofsky, Lisa J; Connor, John A; Walker, Kenneth L; Hopkins, Harley; Chakraborty, Ayan

    2017-09-01

    A suspected increase in the salinity of fresh water resources can trigger a site investigation to identify the source(s) of salinity and the extent of any impacts. These investigations can be complicated by the presence of naturally elevated total dissolved solids or chlorides concentrations, multiple potential sources of salinity, and incomplete data and information on both naturally occurring conditions and the characteristics of potential sources. As a result, data evaluation techniques that are effective at one site may not be effective at another. In order to match the complexity of the evaluation effort to the complexity of the specific site, this paper presents a strategic tiered approach that utilizes established techniques for evaluating and identifying the source(s) of salinity in an efficient step-by-step manner. The tiered approach includes: (1) a simple screening process to evaluate whether an impact has occurred and if the source is readily apparent; (2) basic geochemical characterization of the impacted water resource(s) and potential salinity sources coupled with simple visual and statistical data evaluation methods to determine the source(s); and (3) advanced laboratory analyses (e.g., isotopes) and data evaluation methods to identify the source(s) and the extent of salinity impacts where it was not otherwise conclusive. A case study from the U.S. Gulf Coast is presented to illustrate the application of this tiered approach. © 2017, National Ground Water Association.

  12. In silico Interrogation of Insect Central Complex Suggests Computational Roles for the Ellipsoid Body in Spatial Navigation.

    PubMed

    Fiore, Vincenzo G; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank

    2017-01-01

    The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation.

  13. In silico Interrogation of Insect Central Complex Suggests Computational Roles for the Ellipsoid Body in Spatial Navigation

    PubMed Central

    Fiore, Vincenzo G.; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank

    2017-01-01

    The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation. PMID:28824390

  14. The 'F-complex' and MMN tap different aspects of deviance.

    PubMed

    Laufer, Ilan; Pratt, Hillel

    2005-02-01

    To compare the 'F(fusion)-complex' with the Mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and the formant transitions were presented to the front, left, or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front-fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.

  15. WRF Improves Downscaled Precipitation During El Niño Events over Complex Terrain in Northern South America: Implications for Deforestation Studies

    NASA Astrophysics Data System (ADS)

    Rendón, A.; Posada, J. A.; Salazar, J. F.; Mejia, J.; Villegas, J.

    2016-12-01

    Precipitation in the complex terrain of the tropical Andes of South America can be strongly reduced during El Niño events, with impacts on numerous societally-relevant services, including hydropower generation, the main electricity source in Colombia. Simulating rainfall patterns and behavior in such areas of complex terrain has remained a challenge for regional climate models. Current data products such as ERA-Interim and other reanalysis and modelling products generally fail to correctly represent these processes at the relevant scales. Here we assess the added value to ERA-Interim by dynamical downscaling using the WRF regional climate model, including a comparison of different cumulus parameterization schemes. We found that WRF improves the representation of precipitation during the dry season of El Niño (DJF) events using a 1996-2014 observation period. Further, we use this improved capability to simulate an extreme deforestation scenario under El Niño conditions for an area in the central Andes of Colombia, where a large proportion of the country's hydropower is generated. Our results suggest that forests dampen the effects of El Niño on precipitation. In synthesis, our results illustrate the utility of regional modelling to improve data sources, as well as its potential for predicting the local-to-regional effects of global-change-type processes in regions with limited data availability.

  16. Using plant growth modeling to analyze C source–sink relations under drought: inter- and intraspecific comparison

    PubMed Central

    Pallas, Benoît; Clément-Vidal, Anne; Rebolledo, Maria-Camila; Soulié, Jean-Christophe; Luquet, Delphine

    2013-01-01

    The ability to assimilate C and allocate non-structural carbohydrates (NSCs) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion and appearance rates are affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSCs are stored rather than used for growth owing to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice and oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends on both drought type and species and that modeling is a powerful means of analyzing such complex processes. Further modeling needs and, more generally, the challenge of using models to support complex-trait breeding are discussed. PMID:24204372

  17. Investigation of chemical vapor deposition of garnet films for bubble domain memories

    NASA Technical Reports Server (NTRS)

    Besser, P. J.; Hamilton, T. N.

    1973-01-01

    The important process parameters and control required to grow reproducible device quality ferrimagnetic films by chemical vapor deposition (CVD) were studied. The investigation of the critical parameters in the CVD growth process led to the conclusion that the required reproducibility of film properties cannot be achieved with individually controlled separate metal halide sources. Therefore, the CVD growth effort was directed toward replacement of the halide sources with metallic sources with the ultimate goal being the reproducible growth of complex garnet compositions utilizing a single metal alloy source. The characterization of the YGdGaIG films showed that certain characteristics of this material, primarily the low domain wall energy and the large temperature sensitivity, severely limited its potential as a useful material for bubble domain devices. Consequently, at the time of the change from halide to metallic sources, the target film compositions were shifted to more useful materials such as YGdTmGaIG, YEuGaIG and YSmGaIG.

  18. Numerical study of plasma generation process and internal antenna heat loadings in J-PARC RF negative ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shibata, T., E-mail: shibat@post.j-parc.jp; Ueno, A.; Oguri, H.

    A numerical model of plasma transport and the electromagnetic field in the J-PARC (Japan Proton Accelerator Research Complex) radio frequency ion source has been developed to understand the relation between antenna coil heat loadings and plasma production/transport processes. From the calculation, a local increase in plasma density is observed in the region close to the antenna coil. Electrons are magnetized along magnetic field lines with absolute magnetic flux densities of 30–120 Gauss, which leads to a high local ionization rate. The results suggest that the magnetic configuration can be modified to reduce the plasma heat flux onto the antenna.

  19. Resource potential for commodities in addition to Uranium in sandstone-hosted deposits: Chapter 13

    USGS Publications Warehouse

    Breit, George N.

    2016-01-01

    Sandstone-hosted deposits mined primarily for their uranium content also have been a source of vanadium and modest amounts of copper. Processing of these ores has also recovered small amounts of molybdenum, rhenium, rare earth elements, scandium, and selenium. These deposits share a generally common origin, but variations in the source of metals, composition of ore-forming solutions, and geologic history result in complex variability in deposit composition. This heterogeneity is evident regionally within the same host rock, as well as within districts. Future recovery of elements associated with uranium in these deposits will be strongly dependent on mining and ore-processing methods.

  20. 3-D interactive visualisation tools for Hi spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  1. Semi-Targeted Analysis of Complex Matrices by ESI FT-ICR MS or How an Experimental Bias may be Used as an Analytical Tool.

    PubMed

    Hertzog, Jasmine; Carré, Vincent; Dufour, Anthony; Aubriet, Frédéric

    2018-03-01

    Ammonia is well suited to favor the deprotonation process in electrospray ionization mass spectrometry (ESI-MS) to increase the formation of [M - H]-. Nevertheless, NH3 may react with carbonyl compounds (aldehydes, ketones) and bias the compositional description of the investigated sample. This is of significant importance in the study of complex mixtures such as oil or bio-oil. To assess the ability of primary amines to form imines with carbonyl compounds during the ESI-MS process, two aldehydes (vanillin and cinnamaldehyde) and two ketones (butyrophenone and trihydroxyacetophenone) have been infused in an ESI source with ammonia and two different amines (aniline and 3-chloroaniline). The (+) ESI-MS analyses have demonstrated the formation of imine whatever the considered carbonyl compound and the used primary amine, the structure of which was extensively studied by tandem mass spectrometry. Thus, it has been established that the addition of ammonia, in the solution infused in an ESI source, may alter the compositional description of a complex mixture and lead to misinterpretations due to the formation of imines. Nevertheless, this experimental bias can be used to identify the carbonyl compounds in a pyrolysis bio-oil. As we demonstrated, infusion of the bio-oil with 3-chloroaniline in the ESI source leads to specifically derivatized carbonyl compounds. Thanks to their chlorine isotopic pattern and the high mass measurement accuracy, (+) ESI Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) unambiguously highlighted them among the numerous CxHyOz bio-oil components. These results offer a new perspective into the detailed molecular structure of complex mixtures such as bio-oils.
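
    One way to exploit the chlorine isotopic pattern mentioned above is to search the peak list for A/A+2 pairs separated by the 37Cl-35Cl mass difference (about 1.997 Da) with an intensity ratio near 0.32 for a single chlorine. The sketch below is a generic illustration with assumed mass and ratio tolerances, not the authors' processing pipeline.

    ```python
    import numpy as np

    CL_DELTA = 1.99705   # m/z spacing between 35Cl and 37Cl isotopologues
    CL_RATIO = 0.32      # expected (A+2)/A intensity ratio for one chlorine

    def find_monochloro_pairs(mz, intensity, mz_tol=0.003, ratio_tol=0.10):
        """Return index pairs (i, j) whose spacing and intensity ratio match one Cl."""
        mz = np.asarray(mz)
        intensity = np.asarray(intensity, dtype=float)
        pairs = []
        for i, m in enumerate(mz):
            j = int(np.argmin(np.abs(mz - (m + CL_DELTA))))
            if abs(mz[j] - (m + CL_DELTA)) <= mz_tol:
                ratio = intensity[j] / intensity[i]
                if abs(ratio - CL_RATIO) <= ratio_tol:
                    pairs.append((i, j))
        return pairs
    ```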

  2. Semi-Targeted Analysis of Complex Matrices by ESI FT-ICR MS or How an Experimental Bias may be Used as an Analytical Tool

    NASA Astrophysics Data System (ADS)

    Hertzog, Jasmine; Carré, Vincent; Dufour, Anthony; Aubriet, Frédéric

    2018-03-01

    Ammonia is well suited to favor the deprotonation process in electrospray ionization mass spectrometry (ESI-MS) to increase the formation of [M - H]-. Nevertheless, NH3 may react with carbonyl compounds (aldehydes, ketones) and bias the compositional description of the investigated sample. This is of significant importance in the study of complex mixtures such as oil or bio-oil. To assess the ability of primary amines to form imines with carbonyl compounds during the ESI-MS process, two aldehydes (vanillin and cinnamaldehyde) and two ketones (butyrophenone and trihydroxyacetophenone) have been infused in an ESI source with ammonia and two different amines (aniline and 3-chloroaniline). The (+) ESI-MS analyses have demonstrated the formation of imine whatever the considered carbonyl compound and the used primary amine, the structure of which was extensively studied by tandem mass spectrometry. Thus, it has been established that the addition of ammonia, in the solution infused in an ESI source, may alter the compositional description of a complex mixture and lead to misinterpretations due to the formation of imines. Nevertheless, this experimental bias can be used to identify the carbonyl compounds in a pyrolysis bio-oil. As we demonstrated, infusion of the bio-oil with 3-chloroaniline in the ESI source leads to specifically derivatized carbonyl compounds. Thanks to their chlorine isotopic pattern and the high mass measurement accuracy, (+) ESI Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) unambiguously highlighted them among the numerous CxHyOz bio-oil components. These results offer a new perspective into the detailed molecular structure of complex mixtures such as bio-oils.

  3. Powder Bed Layer Characteristics: The Overseen First-Order Process Input

    NASA Astrophysics Data System (ADS)

    Mindt, H. W.; Megahed, M.; Lavery, N. P.; Holmes, M. A.; Brown, S. G. R.

    2016-08-01

    Powder Bed Additive Manufacturing offers unique advantages in terms of manufacturing cost, lot size, and product complexity compared to traditional processes such as casting, where a minimum lot size is mandatory to achieve economic competitiveness. Many studies—both experimental and numerical—are dedicated to the analysis of how process parameters such as heat source power, scan speed, and scan strategy affect the final material properties. Apart from the general urge to increase the build rate using thicker powder layers, the coating process and how the powder is distributed on the processing table have received very little attention to date. This paper focuses on the first step of every powder bed build process: Coating the process table. A numerical study is performed to investigate how powder is transferred from the source to the processing table. A solid coating blade is modeled to spread commercial Ti-6Al-4V powder. The resulting powder layer is analyzed statistically to determine the packing density and its variation across the processing table. The results are compared with literature reports using the so-called "rain" models. A parameter study is performed to identify the influence of process table displacement and wiper velocity on the powder distribution. The achieved packing density and how that affects subsequent heat source interaction with the powder bed is also investigated numerically.
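
    The statistical analysis of the spread layer described above essentially amounts to binning particle volumes into strips across the table and dividing by the strip volume. A minimal sketch, assuming spherical particles, a known layer thickness, and unit depth, is given below; the array layout and names are assumptions for illustration.

    ```python
    import numpy as np

    def strip_packing_density(x, r, layer_thickness, x_min, x_max, n_strips=50):
        """Packing density of a spread powder layer in strips along the coating direction.
        x: particle centre positions along the table; r: particle radii (same units).
        Each sphere's volume is assigned to the strip containing its centre."""
        x, r = np.asarray(x), np.asarray(r)
        edges = np.linspace(x_min, x_max, n_strips + 1)
        volumes = 4.0 / 3.0 * np.pi * r**3
        solid, _ = np.histogram(x, bins=edges, weights=volumes)
        strip_volume = (edges[1] - edges[0]) * layer_thickness * 1.0   # unit depth
        return edges, solid / strip_volume
    ```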

  4. Rupture Dynamics and Ground Motion from Earthquakes on Rough Faults in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Bydlon, S. A.; Kozdon, J. E.; Duru, K.; Dunham, E. M.

    2013-12-01

    Heterogeneities in the material properties of Earth's crust scatter propagating seismic waves. The effects of scattered waves are reflected in the seismic coda and depend on the amplitude of the heterogeneities, spatial arrangement, and distance from source to receiver. In the vicinity of the fault, scattered waves influence the rupture process by introducing fluctuations in the stresses driving propagating ruptures. Further variability in the rupture process is introduced by naturally occurring geometric complexity of fault surfaces, and the stress changes that accompany slip on rough surfaces. Our goal is to better understand the origin of complexity in the earthquake source process, and to quantify the relative importance of source complexity and scattering along the propagation path in causing incoherence of high frequency ground motion. Using a 2D high order finite difference rupture dynamics code, we nucleate ruptures on either flat or rough faults that obey strongly rate-weakening friction laws. These faults are embedded in domains with spatially varying material properties characterized by Von Karman autocorrelation functions and their associated power spectral density functions, with variations in wave speed of approximately 5 to 10%. Flat fault simulations demonstrate that off-fault material heterogeneity, at least with this particular form and amplitude, has only a minor influence on the rupture process (i.e., fluctuations in slip and rupture velocity). In contrast, rupture histories on rough faults in both homogeneous and heterogeneous media include much larger short-wavelength fluctuations in slip and rupture velocity. We therefore conclude that source complexity is dominantly influenced by fault geometric complexity. To examine contributions of scattering versus fault geometry on ground motions, we compute spatially averaged root-mean-square (RMS) acceleration values as a function of fault perpendicular distance for a homogeneous medium and several heterogeneous media characterized by different statistical properties. We find that at distances less than ~6 km from the fault, RMS acceleration values from simulations with homogeneous and heterogeneous media are similar, but at greater distances the RMS values associated with heterogeneous media are larger than those associated with homogeneous media. The magnitude of this divergence increases with the amplitude of the heterogeneities. For instance, for a heterogeneous medium with a 10% standard deviation in material property values relative to mean values, RMS accelerations are ~50% larger than for a homogeneous medium at distances greater than 6 km. This finding is attributed to the scattering of coherent pulses into multiple pulses of decreased amplitude that subsequently arrive at later times. In order to understand the robustness of these results, an extension of our dynamic rupture and wave propagation code to 3D is underway.
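
    A common way to build heterogeneous media of the kind described above is to filter white noise in the wavenumber domain with a von Karman power spectrum and rescale the result to the desired standard deviation (5 to 10% here). The sketch below is a rough illustration with an approximate 2-D spectral exponent and illustrative parameter names, not the exact medium generator used in the simulations.

    ```python
    import numpy as np

    def von_karman_field(nx, nz, dx, corr_len, hurst, std, seed=0):
        """2-D random wave-speed perturbation with an approximate von Karman spectrum.
        Filters white noise with amplitude (1 + k^2 a^2)^-((H + 1)/2), i.e. the square
        root of a 2-D von Karman power spectrum, then rescales to the requested std."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal((nz, nx))
        kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
        kz = 2 * np.pi * np.fft.fftfreq(nz, dx)
        KX, KZ = np.meshgrid(kx, kz)
        amp = (1.0 + (KX**2 + KZ**2) * corr_len**2) ** (-(hurst + 1.0) / 2.0)
        field = np.fft.ifft2(np.fft.fft2(noise) * amp).real
        return std * field / field.std()
    ```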

  5. Self-sustained vibrations in volcanic areas extracted by Independent Component Analysis: a review and new results

    NASA Astrophysics Data System (ADS)

    de Lauro, E.; de Martino, S.; Falanga, M.; Palo, M.

    2011-12-01

    We investigate the physical processes associated with volcanic tremor and explosions. A volcano is a complex system in which a fluid source interacts with the solid edifice, generating seismic waves in a regime of low turbulence. Although the complex behavior escapes a simple universal description, the phases of activity generate stable (self-sustained) oscillations that can be described as a non-linear dynamical system of low dimensionality. The system therefore needs to be investigated with non-linear methods able to identify, decompose, and extract the main characteristics of the phenomenon. Independent Component Analysis (ICA), an entropy-based technique, is a good candidate for this purpose. Here, we review the results of ICA applied to seismic signals acquired in some volcanic areas. We emphasize analogies and differences among the self-oscillations identified in three cases: Stromboli (Italy), Erebus (Antarctica) and Volcán de Colima (Mexico). The waveforms of the extracted independent components are specific for each volcano, whereas the similarity can be ascribed to a very general common source mechanism involving the interaction between gas/magma flow and solid structures (the volcanic edifice). Indeed, choking phenomena or inhomogeneities in the volcanic cavity can play the same role in generating self-oscillations as the languid and the reed do in musical instruments. The understanding of these background oscillations is relevant not only for explaining the volcanic source process and forecasting future activity, but also sheds light on the physics of complex systems developing low turbulence.

  6. Methodological aspects of fuel performance system analysis at raw hydrocarbon processing plants

    NASA Astrophysics Data System (ADS)

    Kulbjakina, A. V.; Dolotovskij, I. V.

    2018-01-01

    The article discusses the methodological aspects of fuel performance system analysis at raw hydrocarbon (RH) processing plants. Modern RH processing facilities are major consumers of energy resources (ER) for their own needs. Reducing ER consumption, including fuel, and developing a rational fuel system structure are complex and relevant scientific tasks that can only be accomplished using system analysis and complex system synthesis. In accordance with the principles of system analysis, the hierarchical structure of the fuel system, the block scheme for the synthesis of the most efficient alternative of the fuel system using mathematical models, and the set of performance criteria have been developed for the main stages of the study. Results from the introduction of specific engineering solutions for developing on-site energy supply sources at RH processing facilities are provided.

  7. Towards an optimal adaptation of exposure to NOAA assessment methodology in Multi-Source Industrial Scenarios (MSIS): the challenges and the decision-making process

    NASA Astrophysics Data System (ADS)

    López de Ipiña, JM; Vaquero, C.; Gutierrez-Cañas, C.

    2017-06-01

    A progressive increase is expected in the number of industrial processes that manufacture intermediate products (iNEPs) and end products incorporating ENMs (eNEPs) to bring about improved properties. Therefore, the assessment of occupational exposure to airborne NOAA will migrate from the simple and well-controlled exposure scenarios in research laboratories and ENMs production plants using innovative production technologies, to much more complex exposure scenarios located around processes for the manufacture of eNEPs that, in many cases, will be modified conventional production processes. Here we discuss some of the typical challenging situations in the process of risk assessment of inhalation exposure to NOAA in Multi-Source Industrial Scenarios (MSIS), drawing on the lessons learned when confronting such scenarios in the frame of several European and Spanish research projects.

  8. Chromium complexes for luminescence, solar cells, photoredox catalysis, upconversion, and phototriggered NO release

    PubMed Central

    Büldt, Laura A.

    2017-01-01

    Some complexes of Cr(iii) and Cr(0) have long been known to exhibit interesting photophysical and photochemical properties, but in the past few years important conceptual progress was made. This Perspective focuses on the recent developments of Cr(iii) complexes as luminophores and dyes for solar cells, their application in photoredox catalysis, their use as sensitizers in upconversion processes, and their performance as photochemical nitric oxide sources. The example of a luminescent Cr(0) isocyanide complex illustrates the possibility of obtaining photoactive analogues of d6 metal complexes that are commonly made from precious metals such as Ru(ii) or Ir(iii). The studies highlighted herein illustrate the favorable excited-state properties of robust first-row transition metal complexes with broad application potential. PMID:29163886

  9. Chromium complexes for luminescence, solar cells, photoredox catalysis, upconversion, and phototriggered NO release.

    PubMed

    Büldt, Laura A; Wenger, Oliver S

    2017-11-01

    Some complexes of Cr(iii) and Cr(0) have long been known to exhibit interesting photophysical and photochemical properties, but in the past few years important conceptual progress was made. This Perspective focuses on the recent developments of Cr(iii) complexes as luminophores and dyes for solar cells, their application in photoredox catalysis, their use as sensitizers in upconversion processes, and their performance as photochemical nitric oxide sources. The example of a luminescent Cr(0) isocyanide complex illustrates the possibility of obtaining photoactive analogues of d6 metal complexes that are commonly made from precious metals such as Ru(ii) or Ir(iii). The studies highlighted herein illustrate the favorable excited-state properties of robust first-row transition metal complexes with broad application potential.

  10. Analysis of Summer-Time Ozone and Precursor Species in the Southeast United States

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew

    2016-01-01

    Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality and atmospheric chemistry. The understanding and ability to model the horizontal and vertical structure of O3 mixing ratios is difficult due to the complex formation/destruction processes and transport pathways that cause large variability of O3. The Environmental Protection Agency has National Ambient Air Quality Standards for O3 set at 75 ppb with future standards proposed to be as low as 65 ppb. These lower values emphasize the need to better understand/simulate the transport processes, emission sources, and chemical processes controlling precursor species (e.g., NOx, VOCs, and CO) which influence O3 mixing ratios. The uncertainty of these controlling variables is particularly large in the southeast United States (US) which is a region impacted by multiple different emission sources of precursor species (anthropogenic and biogenic) and transport processes resulting in complex spatio-temporal O3 patterns. During this work we will evaluate O3 and precursor species in the southeast US applying models, ground-based and airborne in situ data, and lidar observations. In the summer of 2013, the UAH O3 Differential Absorption Lidar (DIAL) (part of the Tropospheric Ozone Lidar Network (TOLNet)) measured vertical O3 profiles from the surface up to approximately 12 km. During this period, the lidar observed numerous periods of dynamic temporal and vertical O3 structures. In order to determine the sources/processes impacting these O3 mixing ratios we will apply the CTM GEOS-Chem (v9-02) at a 0.25 deg x 0.3125 deg resolution. Using in situ ground-based (e.g., SEARCH Network, CASTNET), airborne (e.g., NOAA WP-3D - SENEX 2013, DC-8 - SEAC4RS), and TOLNet lidar data we will first evaluate the model to determine the capability of GEOS-Chem to simulate the spatio-temporal variability of O3 in the southeast US. Secondly, we will perform model sensitivity studies in order to quantify which emission sources (e.g., anthropogenic, biogenic, lightning, wildfire) and transport processes (e.g., stratospheric, long-range, local scale) are contributing to these TOLNet-observed dynamic O3 patterns. Results from the evaluation of the model and the study of sources/processes impacting observed O3 mixing ratios will be presented.

  11. Analysis of Summer-time Ozone and Precursor Species in the Southeast United States

    NASA Astrophysics Data System (ADS)

    Johnson, M. S.; Kuang, S.; Newchurch, M.; Hair, J. W.

    2015-12-01

    Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality and atmospheric chemistry. The understanding and ability to model the horizontal and vertical structure of O3 mixing ratios is difficult due to the complex formation/destruction processes and transport pathways that cause large variability of O3. The Environmental Protection Agency has National Ambient Air Quality Standards for O3 set at 75 ppb with future standards proposed to be as low as 65 ppb. These lower values emphasize the need to better understand/simulate the transport processes, emission sources, and chemical processes controlling precursor species (e.g., NOx, VOCs, and CO) which influence O3 mixing ratios. The uncertainty of these controlling variables is particularly large in the southeast United States (US) which is a region impacted by multiple different emission sources of precursor species (anthropogenic and biogenic) and transport processes resulting in complex spatio-temporal O3 patterns. During this work we will evaluate O3 and precursor species in the southeast US applying models, ground-based and airborne in situ data, and lidar observations. In the summer of 2013, the UAH O3 Differential Absorption Lidar (DIAL) (part of the Tropospheric Ozone Lidar Network (TOLNet)) measured vertical O3 profiles from the surface up to ~12 km. During this period, the lidar observed numerous periods of dynamic temporal and vertical O3 structures. In order to determine the sources/processes impacting these O3 mixing ratios we will apply the CTM GEOS-Chem (v9-02) at a 0.25° × 0.3125° resolution. Using in situ ground-based (e.g., SEARCH Network, CASTNET), airborne (e.g., NOAA WP-3D - SENEX 2013, DC-8 - SEAC4RS), and TOLNet lidar data we will first evaluate the model to determine the capability of GEOS-Chem to simulate the spatio-temporal variability of O3 in the southeast US. Secondly, we will perform model sensitivity studies in order to quantify which emission sources (e.g., anthropogenic, biogenic, lightning, wildfire) and transport processes (e.g., stratospheric, long-range, local scale) are contributing to these TOLNet-observed dynamic O3 patterns. Results from the evaluation of the model and the study of sources/processes impacting observed O3 mixing ratios will be presented.

  12. Wood smoke particle sequesters cell iron to impact a biological effect.

    EPA Science Inventory

    The biological effect of an inorganic particle (i.e., silica) can be associated with a disruption in cell iron homeostasis. Organic compounds included in particles originating from combustion processes can also complex sources of host cell iron to disrupt metal homeostasis. We te...

  13. Mud, Macrofauna and Microbes: An ode to benthic organism-abiotic interactions at varying scales

    EPA Science Inventory

    Benthic environments are dynamic habitats, subject to variable sources and rates of sediment delivery, reworking from the abiotic and biotic processes, and complex biogeochemistry. These activities do not occur in a vacuum, and interact synergistically to influence food webs, bi...

  14. Sources, Ages, and Alteration of Organic Matter in Estuaries.

    PubMed

    Canuel, Elizabeth A; Hardison, Amber K

    2016-01-01

    Understanding the processes influencing the sources and fate of organic matter (OM) in estuaries is important for quantifying the contributions of carbon from land and rivers to the global carbon budget of the coastal ocean. Estuaries are sites of high OM production and processing, and understanding biogeochemical processes within these regions is key to quantifying organic carbon (Corg) budgets at the land-ocean margin. These regions provide vital ecological services, including nutrient filtration and protection from floods and storm surge, and provide habitat and nursery areas for numerous commercially important species. Human activities have modified estuarine systems over time, resulting in changes in the production, respiration, burial, and export of Corg. Corg in estuaries is derived from aquatic, terrigenous, and anthropogenic sources, with each source exhibiting a spectrum of ages and lability. The complex source and age characteristics of Corg in estuaries complicate our ability to trace OM along the river-estuary-coastal ocean continuum. This review focuses on the application of organic biomarkers and compound-specific isotope analyses to estuarine environments and on how these tools have enhanced our ability to discern natural sources of OM, trace their incorporation into food webs, and enhance understanding of the fate of Corg within estuaries and their adjacent waters.

  15. SOLVENT EXTRACTION PROCESS FOR URANIUM FROM CHLORIDE SOLUTIONS

    DOEpatents

    Blake, C.A. Jr.; Brown, K.B.; Horner, D.E.

    1960-05-24

    An improvement was made in a uranium extraction process wherein the organic extractant is a phosphine oxide. An aqueous solution containing phosphate ions or sulfate ions together with uranium is provided with a source of chloride ions during the extraction step. The presence of the chloride ions enables a phosphine oxide to extract uranium in the presence of strong uranium-complexing ions such as phosphate or sulfate ions.

  16. Electrochemical process and production of novel complex hydrides

    DOEpatents

    Zidan, Ragaiy

    2013-06-25

    A process of using an electrochemical cell to generate aluminum hydride (AlH3) is provided. The electrolytic cell uses a polar solvent to solubilize NaAlH4. The resulting electrochemical process results in the formation of AlH3. The AlH3 can be recovered and used as a source of hydrogen for the automotive industry. The resulting spent aluminum can be regenerated into NaAlH4 as part of a closed loop process of AlH3 generation.

  17. Femtosecond crystallography with ultrabright electrons and x-rays: capturing chemistry in action.

    PubMed

    Miller, R J Dwayne

    2014-03-07

    With the recent advances in ultrabright electron and x-ray sources, it is now possible to extend crystallography to the femtosecond time domain to literally light up atomic motions involved in the primary processes governing structural transitions. This review chronicles the development of brighter and brighter electron and x-ray sources that have enabled atomic resolution to structural dynamics for increasingly complex systems. The primary focus is on achieving sufficient brightness using pump-probe protocols to resolve the far-from-equilibrium motions directing chemical processes that in general lead to irreversible changes in samples. Given the central importance of structural transitions to conceptualizing chemistry, this emerging field has the potential to significantly improve our understanding of chemistry and its connection to driving biological processes.

  18. Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field.

    PubMed

    Holtzman, Benjamin K; Paté, Arthur; Paisley, John; Waldhauser, Felix; Repetto, Douglas

    2018-05-01

    The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity.
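
    As a generic illustration of clustering time-dependent spectral properties (not the authors' specific machine-learning pipeline), one can reduce log amplitude spectra of aligned event waveforms with PCA and cluster them, then examine how cluster membership varies in time against injection rates. The sketch assumes scikit-learn is available and that waveforms are pre-aligned; all parameter choices are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def cluster_event_spectra(waveforms, n_components=10, n_clusters=8, seed=0):
        """Cluster small-earthquake amplitude spectra; returns one label per event.
        waveforms: (n_events, n_samples) array of aligned seismograms."""
        spectra = np.log10(np.abs(np.fft.rfft(waveforms, axis=1)) + 1e-12)
        spectra -= spectra.mean(axis=1, keepdims=True)      # remove per-event offset
        feats = PCA(n_components=n_components).fit_transform(spectra)
        model = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
        return model.fit_predict(feats)
    ```

    Plotting the occurrence of each cluster label against time and against injection rate is then a direct way to look for the annual cycles reported above.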

  19. Technical Report on Modeling for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, K.

    2016-01-11

    The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.
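
    The first stage of the generative process, selecting a source genome for each read pair with probability proportional to its abundance, can be sketched as follows. The names and the fragment-length parameters are illustrative assumptions, not values taken from MetaQuant.

    ```python
    import numpy as np

    def sample_read_sources(abundances, n_read_pairs, seed=0):
        """Draw a source-genome index for each read pair, proportional to abundance."""
        rng = np.random.default_rng(seed)
        p = np.asarray(abundances, dtype=float)
        p /= p.sum()
        return rng.choice(len(p), size=n_read_pairs, p=p)

    def sample_fragment(genome_len, rng, mean_insert=350, sd_insert=50, read_len=100):
        """Pick a fragment start and insert size; the paired reads come from each end."""
        insert = max(read_len, int(rng.normal(mean_insert, sd_insert)))
        start = int(rng.integers(0, max(1, genome_len - insert)))
        return start, insert
    ```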

  20. Developments and advances concerning the hyperpolarisation technique SABRE.

    PubMed

    Mewis, Ryan E

    2015-10-01

    To overcome the inherent sensitivity issue in NMR and MRI, hyperpolarisation techniques are used. Signal Amplification By Reversible Exchange (SABRE) is a hyperpolarisation technique that utilises parahydrogen, a molecule that possesses a nuclear singlet state, as the source of polarisation. A metal complex is required to break the singlet order of parahydrogen and, by doing so, facilitates polarisation transfer to analyte molecules ligated to the same complex through the J-coupled network that exists. The increased signal intensities that the analyte molecules possess as a result of this process have led to investigations whereby their potential as MRI contrast agents has been probed and to understand the fundamental processes underpinning the polarisation transfer mechanism. As well as discussing literature relevant to both of these areas, the chemical structure of the complex, the physical constraints of the polarisation transfer process and the successes of implementing SABRE at low and high magnetic fields are discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  1. Simulating Complex, Cold-region Process Interactions Using a Multi-scale, Variable-complexity Hydrological Model

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2017-12-01

    Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated by frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally-increased soil moisture, thus increasing plant growth that in turn impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully yield important insights and aid in the development of improved process representations.

  2. Are Changing Emission Patterns Across the Northern Hemisphere Influencing Long-range Transport Contributions to Background Air Pollution?

    EPA Science Inventory

    Air pollution reduction strategies for a region are complicated not only by the interplay of local emissions sources and several complex physical, chemical, dynamical processes in the atmosphere, but also hemispheric background levels of pollutants. Contrasting changes in emissio...

  3. Natural regeneration of eastern hemlock: a review

    Treesearch

    Daniel L. Goerlich; Ralph D. Nyland

    2000-01-01

    Successful regeneration of eastern hemlock involves a complex biophysical process that commonly spans many years. Critical factors include a reliable source of seed, a suitable seedbed, a partially shaded environment, and several years of favorable moisture. Surface scarification appears critical as a means of site preparation. Even then, young hemlocks grow slowly,...

  4. DASEES: A decision analysis tool with Bayesian networks from the Environmental Protection Agency’s Sustainable and Healthy Communities Research Program

    EPA Science Inventory

    Tackling environmental, economic, and social sustainability issues with community stakeholders will often lead to choices that are costly, complex and uncertain. A formal process with proper guidance is needed to understand the issues, identify sources of disagreement, consider t...

  5. Multiscale Metabolic Modeling: Dynamic Flux Balance Analysis on a Whole-Plant Scale

    PubMed Central

    Grafahrend-Belau, Eva; Junker, Astrid; Eschenröder, André; Müller, Johannes; Schreiber, Falk; Junker, Björn H.

    2013-01-01

    Plant metabolism is characterized by a unique complexity on the cellular, tissue, and organ levels. On a whole-plant scale, changing source and sink relations accompanying plant development add another level of complexity to metabolism. With the aim of achieving a spatiotemporal resolution of source-sink interactions in crop plant metabolism, a multiscale metabolic modeling (MMM) approach was applied that integrates static organ-specific models with a whole-plant dynamic model. Allowing for a dynamic flux balance analysis on a whole-plant scale, the MMM approach was used to decipher the metabolic behavior of source and sink organs during the generative phase of the barley (Hordeum vulgare) plant. It reveals a sink-to-source shift of the barley stem caused by the senescence-related decrease in leaf source capacity, which is not sufficient to meet the nutrient requirements of sink organs such as the growing seed. The MMM platform represents a novel approach for the in silico analysis of metabolism on a whole-plant level, allowing for a systemic, spatiotemporally resolved understanding of metabolic processes involved in carbon partitioning, thus providing a novel tool for studying yield stability and crop improvement. PMID:23926077
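
    To make the flux-balance step concrete, here is a toy flux balance analysis on a made-up three-reaction, two-metabolite network (an illustration only, not the organ-specific barley models of the paper): the growth flux is maximized subject to steady-state mass balance S·v = 0 and flux bounds. A dynamic FBA in the spirit of the MMM approach would repeat such an LP over time steps while updating the external source and sink pools.

      # Toy flux balance analysis (not the MMM model): maximize the growth flux
      # subject to steady-state mass balance S @ v = 0 and flux bounds.
      import numpy as np
      from scipy.optimize import linprog

      # Columns: uptake (-> A), conversion (A -> B), growth (B -> biomass), export (B ->)
      S = np.array([[1, -1,  0,  0],    # metabolite A
                    [0,  1, -1, -1]])   # metabolite B
      c = np.array([0, 0, -1, 0])                          # linprog minimizes, so -growth
      bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10 units
      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print("optimal fluxes:", res.x, "growth rate:", -res.fun)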

  6. Ultrasonic Time Reversal Mirrors

    NASA Astrophysics Data System (ADS)

    Fink, Mathias; Montaldo, Gabriel; Tanter, Mickael

    2004-11-01

    For more than ten years, time reversal techniques have been developed in many different fields of application, including detection of defects in solids, underwater acoustics, room acoustics and also ultrasound medical imaging and therapy. The essential property that makes time reversed acoustics possible is that the underlying physical process of wave propagation would be unchanged if time were reversed. In a non-dissipative medium, the equations governing the waves guarantee that for every burst of sound that diverges from a source there exists in theory a set of waves that would precisely retrace the path of the sound back to the source. If the source is pointlike, this allows focusing back on the source regardless of the complexity of the medium. For this reason, time reversal represents a very powerful adaptive focusing technique for complex media. The generation of this reconverging wave can be achieved by using Time Reversal Mirrors (TRMs). A TRM is made of arrays of ultrasonic reversible piezoelectric transducers that can record the wavefield coming from the sources and send back its time-reversed version into the medium. This relies on the use of fully programmable multi-channel electronics. In this paper we present some applications of iterative time reversal mirrors to target detection in medical applications.
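
    A one-dimensional toy calculation (not an ultrasound simulation) of why the time-reversed field refocuses: in a reciprocal medium the signal recorded at the mirror is s(t) convolved with the medium response h(t), and re-emitting its time reverse through the same medium returns s(-t) convolved with the autocorrelation of h, which peaks sharply no matter how complicated the multipath response is. All values below are synthetic.

      # Minimal 1-D illustration (not a TRM simulation): the field refocused at the
      # source is the time-reversed recording convolved again with h(t), i.e. the
      # autocorrelation of the medium response convolved with the pulse.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 2048
      t = np.arange(n)

      # Source pulse and a "complex medium": many random multipath arrivals.
      pulse = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)
      h = np.zeros(n)
      h[rng.integers(100, 1500, size=60)] = rng.normal(size=60)

      recorded = np.convolve(pulse, h)[:n]          # forward propagation to the mirror
      back = np.convolve(recorded[::-1], h)[:n]     # time-reverse and re-emit through h
      print("refocused peak / background:",
            np.max(np.abs(back)) / np.median(np.abs(back)))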

  7. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
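
    A generic greedy pursuit in the spirit described above (plain orthogonal matching pursuit, not the SPIGH algorithm and without its subspace or hierarchical steps): columns of a stand-in lead-field matrix are selected one at a time by correlation with the residual, and the coefficients on the selected support are re-fit by least squares. The matrix `A` and the sparsity level are illustrative placeholders.

      # Generic orthogonal matching pursuit sketch (not SPIGH): recover a sparse
      # source vector x from measurements y = A @ x.
      import numpy as np

      def omp(A, y, k):
          support, residual = [], y.copy()
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(3)
      A = rng.normal(size=(64, 256))                 # stand-in for an MEG lead field
      x_true = np.zeros(256); x_true[[10, 100, 200]] = [1.0, -2.0, 1.5]
      x_hat = omp(A, A @ x_true, k=3)
      print(np.nonzero(x_hat)[0])                    # recovered support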

  8. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  9. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  10. Single liquid source plasma-enhanced metalorganic chemical vapor deposition of high-quality YBa2Cu3O(7-x) thin films

    NASA Technical Reports Server (NTRS)

    Zhang, Jiming; Gardiner, Robin A.; Kirlin, Peter S.; Boerstler, Robert W.; Steinbeck, John

    1992-01-01

    High quality YBa2Cu3O(7-x) films were grown in-situ on LaAlO3 (100) by a novel single liquid source plasma-enhanced metalorganic chemical vapor deposition process. The metalorganic complexes M(thd)n (thd = 2,2,6,6-tetramethyl-3,5-heptanedionate; M = Y, Ba, Cu) were dissolved in an organic solution and injected into a vaporizer immediately upstream of the reactor inlet. The single liquid source technique dramatically simplifies current CVD processing and can significantly improve the process reproducibility. X-ray diffraction measurements indicated that single phase, highly c-axis oriented YBa2Cu3O(7-x) was formed in-situ at a substrate temperature of 680 °C. The as-deposited films exhibited a mirror-like surface, had a transition temperature Tc0 ≈ 89 K, ΔTc < 1 K, and Jc(77 K) = 10^6 A/cm^2.

  11. Mass media in health promotion: an analysis using an extended information-processing model.

    PubMed

    Flay, B R; DiTecco, D; Schlegel, R P

    1980-01-01

    The information-processing model of the attitude and behavior change process was critically examined and extended from six to 12 levels for a better analysis of change due to mass media campaigns. Findings from social psychology and communications research, and from evaluations of mass media health promotion programs, were reviewed to determine how source, message, channel, receiver, and destination variables affect each of the levels of change of major interest (knowledge, beliefs, attitudes, intentions and behavior). Factors found to most likely induce permanent attitude and behavior change (most important in health promotion) were: presentation and repetition over long time periods, via multiple sources, at different times (including "prime" or high-exposure times), in novel and involving ways, with appeals to multiple motives, development of social support, and provision of appropriate behavioral skills, alternatives, and reinforcement (preferably in ways that get the active participation of the audience). Suggestions for evaluation of mass media programs that take account of this complexity were advanced.

  12. The Limited Role of Number of Nested Syntactic Dependencies in Accounting for Processing Cost: Evidence from German Simplex and Complex Verbal Clusters

    PubMed Central

    Bader, Markus

    2018-01-01

    This paper presents three acceptability experiments investigating German verb-final clauses in order to explore possible sources of sentence complexity during human parsing. The point of departure was De Vries et al.'s (2011) generalization that sentences with three or more crossed or nested dependencies are too complex for being processed by the human parsing mechanism without difficulties. This generalization is partially based on findings from Bach et al. (1986) concerning the acceptability of complex verb clusters in German and Dutch. The first experiment tests this generalization by comparing two sentence types: (i) sentences with three nested dependencies within a single clause that contains three verbs in a complex verb cluster; (ii) sentences with four nested dependencies distributed across two embedded clauses, one center-embedded within the other, each containing a two-verb cluster. The results show that sentences with four nested dependencies are judged as acceptable as control sentences with only two nested dependencies, whereas sentences with three nested dependencies are judged as only marginally acceptable. This argues against De Vries et al.'s (2011) claim that the human parser can process no more than two nested dependencies. The results are used to refine the Verb-Cluster Complexity Hypothesis of Bader and Schmid (2009a). The second and the third experiment investigate sentences with four nested dependencies in more detail in order to explore alternative sources of sentence complexity: the number of predicted heads to be held in working memory (storage cost in terms of the Dependency Locality Theory [DLT], Gibson, 2000) and the length of the involved dependencies (integration cost in terms of the DLT). Experiment 2 investigates sentences for which storage cost and integration cost make conflicting predictions. The results show that storage cost outweighs integration cost. Experiment 3 shows that increasing integration cost in sentences with two degrees of center embedding leads to decreased acceptability. Taken together, the results argue in favor of a multifactorial account of the limitations on center embedding in natural languages. PMID:29410633

  13. The Limited Role of Number of Nested Syntactic Dependencies in Accounting for Processing Cost: Evidence from German Simplex and Complex Verbal Clusters.

    PubMed

    Bader, Markus

    2017-01-01

    This paper presents three acceptability experiments investigating German verb-final clauses in order to explore possible sources of sentence complexity during human parsing. The point of departure was De Vries et al.'s (2011) generalization that sentences with three or more crossed or nested dependencies are too complex for being processed by the human parsing mechanism without difficulties. This generalization is partially based on findings from Bach et al. (1986) concerning the acceptability of complex verb clusters in German and Dutch. The first experiment tests this generalization by comparing two sentence types: (i) sentences with three nested dependencies within a single clause that contains three verbs in a complex verb cluster; (ii) sentences with four nested dependencies distributed across two embedded clauses, one center-embedded within the other, each containing a two-verb cluster. The results show that sentences with four nested dependencies are judged as acceptable as control sentences with only two nested dependencies, whereas sentences with three nested dependencies are judged as only marginally acceptable. This argues against De Vries et al.'s (2011) claim that the human parser can process no more than two nested dependencies. The results are used to refine the Verb-Cluster Complexity Hypothesis of Bader and Schmid (2009a). The second and the third experiment investigate sentences with four nested dependencies in more detail in order to explore alternative sources of sentence complexity: the number of predicted heads to be held in working memory (storage cost in terms of the Dependency Locality Theory [DLT], Gibson, 2000) and the length of the involved dependencies (integration cost in terms of the DLT). Experiment 2 investigates sentences for which storage cost and integration cost make conflicting predictions. The results show that storage cost outweighs integration cost. Experiment 3 shows that increasing integration cost in sentences with two degrees of center embedding leads to decreased acceptability. Taken together, the results argue in favor of a multifactorial account of the limitations on center embedding in natural languages.

  14. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
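
    The re-weighted minimum-norm recursion at the heart of FOCUSS can be sketched as below. This is a generic illustration only: it omits the sLORETA standardization, the shrinking of the source space and the temporal processing that define SSLOFO. `L` stands in for a lead-field matrix and `lam` is an ad hoc regularization constant.

      # Generic FOCUSS-style recursion (not full SSLOFO). Each pass solves a
      # weighted minimum-norm problem with weights taken from the previous
      # estimate, which progressively sharpens the solution toward sparsity.
      import numpy as np

      def focuss(L, y, n_iter=20, lam=1e-6):
          x = np.ones(L.shape[1])                     # smooth initial estimate
          for _ in range(n_iter):
              W = np.diag(np.abs(x))                  # re-weighting from previous step
              LW = L @ W
              x = W @ LW.T @ np.linalg.solve(LW @ LW.T + lam * np.eye(L.shape[0]), y)
          return x

      rng = np.random.default_rng(4)
      L = rng.normal(size=(32, 300))                  # stand-in for an EEG lead field
      x_true = np.zeros(300); x_true[[40, 150]] = [2.0, -1.0]
      x_hat = focuss(L, L @ x_true)
      print(np.argsort(np.abs(x_hat))[-2:])           # indices of the strongest sources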

  15. Thermochemolysis: A New Sample Preparation Approach for the Detection of Organic Components of Complex Macromolecules in Mars Rocks via Gas Chromatography Mass Spectrometry in SAM on MSL

    NASA Technical Reports Server (NTRS)

    Eigenbrode, J.; Glavin, D.; Dworkin, J.; Conrad, P.; Mahaffy, P.

    2011-01-01

    Organic chemicals, when present in extraterrestrial samples, afford precious insight into past and modern conditions elsewhere in the Solar System. No single technology identifies all molecular components because naturally occurring molecules have different chemistries (e.g., polar vs. non-polar, low to high molecular weight) and interface with the ambient sample chemistry in a variety of modes (i.e., organics may be bonded, absorbed or trapped by minerals, liquids, gases, or other organics). More than 90% of organic matter in most natural samples on Earth and in meteorites is composed of complex macromolecules (e.g. biopolymers, complex biomolecules, humic substances, kerogen) because the processes that tend to break down organic molecules also tend towards complexation of the more recalcitrant components. Thus, methodologies that tap the molecular information contained within macromolecules may be critical to detecting extraterrestrial organic matter and assessing the sources and processes influencing its nature.

  16. Bovine Milk as a Source of Functional Oligosaccharides for Improving Human Health

    PubMed Central

    Zivkovic, Angela M.; Barile, Daniela

    2011-01-01

    Human milk oligosaccharides are complex sugars that function as selective growth substrates for specific beneficial bacteria in the gastrointestinal system. Bovine milk is a potentially excellent source of commercially viable analogs of these unique molecules. However, bovine milk has a much lower concentration of these oligosaccharides than human milk, and the majority of the molecules are simpler in structure than those found in human milk. Specific structural characteristics of milk-derived oligosaccharides are crucial to their ability to selectively enrich beneficial bacteria while inhibiting or being less than ideal substrates for undesirable and pathogenic bacteria. Thus, if bovine milk products are to provide human milk–like benefits, it is important to identify specific dairy streams that can be processed commercially and cost-effectively and that can yield specific oligosaccharide compositions that will be beneficial as new food ingredients or supplements to improve human health. Whey streams have the potential to be commercially viable sources of complex oligosaccharides that have the structural resemblance and diversity of the bioactive oligosaccharides in human milk. With further refinements to dairy stream processing techniques and functional testing to identify streams that are particularly suitable for enriching beneficial intestinal bacteria, the future of oligosaccharides isolated from dairy streams as a food category with substantiated health claims is promising. PMID:22332060

  17. Bovine milk as a source of functional oligosaccharides for improving human health.

    PubMed

    Zivkovic, Angela M; Barile, Daniela

    2011-05-01

    Human milk oligosaccharides are complex sugars that function as selective growth substrates for specific beneficial bacteria in the gastrointestinal system. Bovine milk is a potentially excellent source of commercially viable analogs of these unique molecules. However, bovine milk has a much lower concentration of these oligosaccharides than human milk, and the majority of the molecules are simpler in structure than those found in human milk. Specific structural characteristics of milk-derived oligosaccharides are crucial to their ability to selectively enrich beneficial bacteria while inhibiting or being less than ideal substrates for undesirable and pathogenic bacteria. Thus, if bovine milk products are to provide human milk-like benefits, it is important to identify specific dairy streams that can be processed commercially and cost-effectively and that can yield specific oligosaccharide compositions that will be beneficial as new food ingredients or supplements to improve human health. Whey streams have the potential to be commercially viable sources of complex oligosaccharides that have the structural resemblance and diversity of the bioactive oligosaccharides in human milk. With further refinements to dairy stream processing techniques and functional testing to identify streams that are particularly suitable for enriching beneficial intestinal bacteria, the future of oligosaccharides isolated from dairy streams as a food category with substantiated health claims is promising.

  18. Multi-isotope tracers to investigate processes in the Elbe, Weser and Ems river catchment using B, Mo, Sr, and Pb isotope ratios assessed by MC ICP-MS

    NASA Astrophysics Data System (ADS)

    Irrgeher, Johanna; Reese, Anna; Zimmermann, Tristan; Prohaska, Thomas; Retzmann, Anika; Wieser, Michael E.; Zitek, Andreas; Proefrock, Daniel

    2017-04-01

    Environmental monitoring of complex ecosystems requires reliable, sensitive techniques based on sound analytical strategies to identify the source, fate and sink of elements and matter. Isotopic signatures can serve to trace pathways by making use of specific isotopic fingermarks or to distinguish between natural and anthropogenic sources. The presented work shows the potential of using the isotopic variation of Sr, Pb (as well-established isotopic systems), Mo and B (as novel isotopic systems) assessed by MC ICP-MS in water and sediment samples to study aquatic ecosystem transport processes. The isotopic variation of Sr, Pb, Mo and B was determined in different marine and estuarine compartments covering the catchment of the German Wadden Sea and its main tributaries, the Elbe, Weser and Ems Rivers. The varying elemental concentrations, the complex matrix and the expected small variations in the isotopic composition required the development and application of reliable analytical measurement approaches as well as suitable metrological data evaluation strategies. Aquatic isoscapes were created using ArcGIS® by relating spatial isotopic data with geographical and geological maps. The elemental and isotopic distribution maps show large variation for different parameters and also reflect the numerous impact factors (e.g. geology, anthropogenic sources) influencing the catchment area.

  19. Three-dimensional seismic structure and moment tensors of non-double-couple earthquakes at the Hengill-Grensdalur volcanic complex, Iceland

    USGS Publications Warehouse

    Miller, A.D.; Julian, B.R.; Foulger, G.R.

    1998-01-01

    The volcanic and geothermal areas of Iceland are rich sources of non-double-couple (non-DC) earthquakes. A state-of-the-art digital seismometer network deployed at the Hengill-Grensdalur volcanic complex in 1991 recorded 4000 small earthquakes. We used the best recorded of these to determine 3-D VP and VP/VS structure tomographically and accurate earthquake moment tensors. The VP field is dominated by high seismic wave speed bodies interpreted as solidified intrusions. A widespread negative (-4 per cent) VP/VS anomaly in the upper 4 km correlates with the geothermal field, but is too strong to be caused solely by the effect of temperature upon liquid water or the presence of vapour, and requires in addition mineralogical or lithological differences between the geothermal reservoir and its surroundings. These may be caused by geothermal alteration. Well-constrained moment tensors were obtained for 70 of the best-recorded events by applying linear programming methods to P- and S-wave polarities and amplitude ratios. About 25 per cent of the mechanisms are, within observational error, consistent with the DC mechanisms expected for shear faulting. The other 75 per cent have significantly non-DC mechanisms. Many have substantial explosive components, one has a substantial implosive component, and the deviatoric component of many is strongly non-DC. Many of the non-DC mechanisms are consistent, within observational error, with simultaneous tensile and shear faulting. However, the mechanisms occupy a continuum in source-type parameter space and probably at least one additional source process is occurring. This may be fluid flow into newly formed cracks, causing partial compensation of the volumetric component. Studying non-shear earthquakes such as these has great potential for improving our understanding of geothermal processes and earthquake source processes in general.
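
    A common way to quantify how non-DC a mechanism is works directly on the moment tensor: remove the isotropic (volumetric) part, then compare the deviatoric eigenvalues. The sketch below follows one widely used convention (ε = 0 for a pure double couple, ε = 0.5 for a pure CLVD); conventions for the percentages differ between authors, and the example tensor is illustrative, not from this study.

      # Sketch of a standard moment-tensor decomposition (conventions vary).
      # Under this convention, %CLVD ~ 200*eps and %DC ~ 100 - 200*eps.
      import numpy as np

      def decompose(M):
          iso = np.trace(M) / 3.0                     # volumetric (explosive/implosive) part
          dev = M - iso * np.eye(3)                   # deviatoric remainder
          ev = np.linalg.eigvalsh(dev)
          ev = ev[np.argsort(np.abs(ev))]             # order eigenvalues by absolute size
          eps = abs(ev[0] / ev[2]) if ev[2] != 0 else 0.0
          return iso, eps

      # Pure double couple (strike-slip-like): eigenvalues (+1, 0, -1), no volume change.
      M_dc = np.array([[0, 1, 0],
                       [1, 0, 0],
                       [0, 0, 0]], dtype=float)
      print(decompose(M_dc))    # -> (0.0, 0.0)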

  20. Computationally efficient thermal-mechanical modelling of selective laser melting

    NASA Astrophysics Data System (ADS)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution of a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometres across. In turn, the semi-analytical thermal model allows a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
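
    The classical continuous line-source temperature rise in an infinite medium (Carslaw and Jaeger) is dT(r,t) = q'/(4·pi·k)·E1(r²/(4·alpha·t)), with q' the heat input per unit length; adding an image source mirrored across an adiabatic free surface gives a crude semi-infinite version. The sketch below only illustrates this line-source building block; the paper's model uses finite scan-vector segments plus a numerically computed complementary field, and all parameter values here are made up.

      # Line-source temperature rise sketch (illustrative parameters only).
      import numpy as np
      from scipy.special import exp1

      def line_source_dT(y, z, t, q_line=200.0, k=20.0, alpha=5e-6, depth=50e-6):
          """Temperature rise at (y, z) from a continuous line source switched on at
          t = 0, lying at depth `depth` below the free surface z = 0 (SI units)."""
          def one(zs):
              r2 = y**2 + (z - zs)**2
              return q_line / (4.0 * np.pi * k) * exp1(r2 / (4.0 * alpha * t))
          return one(-depth) + one(+depth)    # real source + image across the surface

      # Temperature rise 100 microns to the side of the track, 1 ms after switch-on.
      print(line_source_dT(y=100e-6, z=0.0, t=1e-3))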

  1. Geodynamics branch data base for main magnetic field analysis

    NASA Technical Reports Server (NTRS)

    Langel, Robert A.; Baldwin, R. T.

    1991-01-01

    The data sets used in geomagnetic field modeling at GSFC are described. Data are measured and obtained from a variety of sources. For clarity, data sets from different sources are categorized and processed separately. The data base is composed of magnetic observatory data, surface data, high-quality aeromagnetic data, high-quality total-intensity marine data, satellite data, and repeat data. These individual data categories are described in detail in a series of notebooks in the Geodynamics Branch, GSFC. This catalog reviews the original data sets, the processing history, and the final data sets available for each individual category of the data base and is to be used as a reference manual for the notebooks. Each data type used in geomagnetic field modeling has varying levels of complexity, requiring specialized processing routines for satellite and observatory data and two general routines for processing aeromagnetic, marine, land survey, and repeat data.

  2. Conducting requirements analyses for research using routinely collected health data: a model driven approach.

    PubMed

    de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul

    2012-01-01

    Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis process, i-ScheDULEs: the first components of the modeling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.

  3. Embedded Acoustic Sensor Array for Engine Fan Noise Source Diagnostic Test: Feasibility of Noise Telemetry via Wireless Smart Sensors

    NASA Technical Reports Server (NTRS)

    Zaman, Afroz; Bauch, Matthew; Raible, Daniel

    2011-01-01

    Aircraft engines have evolved into highly complex systems to meet ever-increasing demands. The evolution of engine technologies has primarily been driven by fuel efficiency and reliability, as well as engine noise concerns. One of the sources of engine noise is pressure fluctuations that are induced on the stator vanes. These local pressure fluctuations, once produced, propagate and coalesce with the pressure waves originating elsewhere on the stator to form a spinning pressure pattern. Depending on the duct geometry, air flow, and frequency of fluctuations, these spinning pressure patterns are self-sustaining and result in noise which eventually radiates to the far field from the engine. To investigate the nature of vane pressure fluctuations and the resulting engine noise, unsteady pressure signatures from an array of embedded acoustic sensors are recorded as a part of vane noise source diagnostics. Output time signatures from these sensors are routed to a control and data processing station, adding complexity to the system and introducing cable loss in the measured signal. "Smart" wireless sensors have data processing capability at the sensor locations, which further increases their potential. Smart sensors can process measured data locally and transmit only the important information through wireless communication. The aim of this wireless noise telemetry task was to demonstrate a single acoustic sensor wireless link for unsteady pressure measurement, and thus establish the feasibility of a distributed smart sensor scheme for aircraft engine vane surface unsteady pressure data transmission and characterization.

  4. Green Adsorbents for Wastewaters: A Critical Review

    PubMed Central

    Kyzas, George Z.; Kostoglou, Margaritis

    2014-01-01

    One of the most serious environmental problems is the existence of hazardous and toxic pollutants in industrial wastewaters. The major hindrance is the simultaneous existence of many different types of pollutants, such as (i) dyes; (ii) heavy metals; (iii) phenols; (iv) pesticides and (v) pharmaceuticals. Adsorption is considered to be one of the most promising techniques for wastewater treatment over the last decades. The economic crisis of the 2000s led researchers to turn their interest to lower-cost adsorbent materials. In this review article, a new term will be introduced, which is called “green adsorption”. This term denotes low-cost materials originating from: (i) agricultural sources and by-products (fruits, vegetables, foods); (ii) agricultural residues and wastes; (iii) low-cost sources from which most complex adsorbents will be produced (i.e., activated carbons after pyrolysis of agricultural sources). These “green adsorbents” are expected to be inferior (regarding their adsorption capacity) to the super-adsorbents of previous literature (complex materials such as modified chitosans, activated carbons, structurally-complex inorganic composite materials etc.), but their cost-potential makes them competitive. This review is a critical approach to green adsorption, discussing many different (in some cases perhaps debatable) topics such as: (i) adsorption capacity; (ii) kinetic modeling (given the ultimate target to scale up the batch experimental data to fixed-bed column calculations for designing/optimizing commercial processes) and (iii) critical techno-economical data of green adsorption processes in order to scale up experiments (from lab to industry) with economic analysis and perspectives of the use of green adsorbents. PMID:28788460

  5. Time-reversed waves and super-resolution

    NASA Astrophysics Data System (ADS)

    Fink, Mathias; de Rosny, Julien; Lerosey, Geoffroy; Tourin, Arnaud

    2009-06-01

    Time-reversal mirrors (TRMs) refocus an incident wavefield to the position of the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by observations at all scales that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. Moreover, when the complex environment is located in the near field of the source, time-reversal focusing opens completely new approaches to super-resolution. We will show that, for a broadband source located inside a random metamaterial, a TRM located in the far field radiates a time-reversed wave that interacts with the random medium to regenerate not only the propagating but also the evanescent waves required to refocus below the diffraction limit. This focusing process is very different from that developed with superlenses made of negative-index material, which is only valid for narrowband signals. We will emphasize the role of frequency diversity in time-reversal focusing. To cite this article: M. Fink et al., C. R. Physique 10 (2009).

  6. Source-based neurofeedback methods using EEG recordings: training altered brain activity in a functional brain source derived from blind source separation

    PubMed Central

    White, David J.; Congedo, Marco; Ciorciari, Joseph

    2014-01-01

    A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback. PMID:25374520
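
    A generic sketch of the source-based signal chain (not the study's exact BSS method or training protocol): unmix multichannel EEG with FastICA, pick one component as the functional brain source, and compute its theta-band (4-8 Hz) power, which would be the quantity fed back to the participant. The data, channel count, and component choice below are placeholders.

      # Generic source-based feedback signal sketch (not the study's pipeline).
      import numpy as np
      from scipy.signal import welch
      from sklearn.decomposition import FastICA

      fs = 250.0
      rng = np.random.default_rng(5)
      eeg = rng.normal(size=(int(60 * fs), 32))      # stand-in for 60 s of 32-channel EEG

      ica = FastICA(n_components=10, random_state=0)
      sources = ica.fit_transform(eeg)               # columns are estimated brain sources

      comp = sources[:, 0]                           # component chosen for training
      f, pxx = welch(comp, fs=fs, nperseg=int(4 * fs))
      theta_mask = (f >= 4) & (f <= 8)
      theta_power = pxx[theta_mask].mean()           # mean theta-band power as feedback value
      print("theta-band power of the trained source:", theta_power)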

  7. Source Complexity of an Injection Induced Event: The 2016 Mw 5.1 Fairview, Oklahoma Earthquake

    NASA Astrophysics Data System (ADS)

    López-Comino, J. A.; Cesca, S.

    2018-05-01

    Complex rupture processes are occasionally resolved for weak earthquakes and can reveal a dominant direction of the rupture propagation and the presence and geometry of main slip patches. Finding and characterizing such properties could be important for understanding the nucleation and growth of induced earthquakes. One of the largest earthquakes linked to wastewater injection, the 2016 Mw 5.1 Fairview, Oklahoma earthquake, is analyzed using empirical Green's function techniques to reveal its source complexity. Two subevents are clearly identified and located using a new approach based on relative hypocenter-centroid location. The first subevent has a magnitude of Mw 5.0 and shows the main rupture propagated toward the NE, in the direction of higher pore pressure perturbations due to wastewater injection. The second subevent appears as an early aftershock with lower magnitude, Mw 4.7. It is located SW of the mainshock in a region of increased Coulomb stress, where most aftershocks relocated.
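
    The empirical Green's function idea can be sketched as a frequency-domain deconvolution with a water level (a generic illustration, not the authors' implementation): dividing the mainshock record by a small collocated event cancels the shared path and site response and leaves an apparent source time function, whose shape and azimuthal variation are what reveal directivity and subevents. The records below are synthetic.

      # Water-level EGF deconvolution sketch with synthetic records.
      import numpy as np

      def egf_deconvolve(mainshock, egf, water=0.01):
          n = len(mainshock)
          M, G = np.fft.rfft(mainshock, n), np.fft.rfft(egf, n)
          denom = np.abs(G) ** 2
          denom = np.maximum(denom, water * denom.max())   # water level stabilizes the division
          return np.fft.irfft(M * np.conj(G) / denom, n)   # apparent source time function

      rng = np.random.default_rng(6)
      path = rng.normal(size=512)                           # shared path/site response
      small = np.convolve([1.0], path)[:512]                # EGF: impulse-like source
      big = np.convolve([1.0, 2.0, 1.0, 0.5], path)[:512]   # mainshock: longer rupture
      print(np.round(egf_deconvolve(big, small)[:6], 2))    # approx. [1, 2, 1, 0.5, 0, 0]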

  8. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate that, for the case of the linear polarization P, this approach fails to properly account for its complex vector nature, resulting in a process which is dependent on the axes under which the deconvolution is performed. We present here an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and, owing to the reduced number of iterations, lower computation cost than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys and in particular that the complex version of an SDI CLEAN should be used.
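
    The core idea can be sketched with a minimal Högbom-style loop that treats P = Q + iU as one complex image (an illustration of the principle only, not the paper's Generalized Complex CLEAN implementation): the peak is searched on |P| but the subtracted component keeps its complex value, so the result does not change if the Q/U axes are rotated. The beam, image size, gain and thresholds are arbitrary, and the periodic shift of the beam is a shortcut acceptable only for a sketch.

      # Minimal complex Hogbom-style CLEAN loop on P = Q + iU.
      import numpy as np

      def complex_hogbom(dirty, psf, gain=0.1, niter=200, threshold=1e-3):
          res = dirty.astype(complex).copy()
          model = np.zeros_like(res)
          cy, cx = np.array(psf.shape) // 2           # psf assumed centered, same shape as image
          for _ in range(niter):
              py, px = np.unravel_index(np.argmax(np.abs(res)), res.shape)
              if np.abs(res[py, px]) < threshold:
                  break
              comp = gain * res[py, px]               # complex component: Q and U together
              model[py, px] += comp
              shifted = np.roll(np.roll(psf, py - cy, axis=0), px - cx, axis=1)
              res -= comp * shifted                   # periodic shift: fine for a sketch only
          return model, res

      # Tiny example: one polarized point source blurred by a Gaussian beam.
      y, x = np.mgrid[:64, :64]
      psf = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / (2 * 3.0 ** 2))
      dirty = np.roll(np.roll(psf, 10 - 32, axis=0), 20 - 32, axis=1) * (1.0 + 0.5j)
      model, residual = complex_hogbom(dirty, psf)
      print(model[10, 20], np.abs(residual).max())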

  9. “Gestaltomics”: Systems Biology Schemes for the Study of Neuropsychiatric Diseases

    PubMed Central

    Gutierrez Najera, Nora A.; Resendis-Antonio, Osbaldo; Nicolini, Humberto

    2017-01-01

    The integration of different sources of biological information about what defines a behavioral phenotype is difficult to unify in an entity that reflects the arithmetic sum of its individual parts. In this sense, the challenge of Systems Biology for understanding the “psychiatric phenotype” is to provide an improved vision of the shape of the phenotype as it is visualized by “Gestalt” psychology, whose fundamental axiom is that the observed phenotype (behavior or mental disorder) will be the result of the integrative composition of every part. Therefore, we propose the term “Gestaltomics” as a term from Systems Biology to integrate data coming from different sources of information (such as the genome, transcriptome, proteome, epigenome, metabolome, phenome, and microbiome). In addition to this biological complexity, the mind is integrated through multiple brain functions that receive and process complex information through channels and perception networks (i.e., sight, ear, smell, memory, and attention) that in turn are programmed by genes and influenced by environmental processes (epigenetic). Today, the approach of medical research in human diseases is to isolate one disease for study; however, the presence of an additional disease (co-morbidity) or more than one disease (multimorbidity) adds complexity to the study of these conditions. This review will present the challenge of integrating psychiatric disorders at different levels of information (Gestaltomics). The implications of increasing the level of complexity, for example, studying the co-morbidity with another disease such as cancer, will also be discussed. PMID:28536537

  10. Heparin and related polysaccharides: Synthesis using recombinant enzymes and metabolic engineering

    PubMed Central

    Suflita, Matthew; Fu, Li; He, Wenqin; Koffas, Mattheos; Linhardt, Robert J.

    2015-01-01

    Glycosaminoglycans are linear anionic polysaccharides that exhibit a number of important biological and pharmacological activities. The two most prominent members of this class of polysaccharides are heparin/heparan sulfate and the chondroitin sulfates (including dermatan sulfate). These polysaccharides, having complex structures and polydispersity, are biosynthesized in the Golgi of most animal cells. The chemical synthesis of these glycosaminoglycans is precluded by their structural complexity. Today, we depend on food animal tissues for their isolation and commercial production. Ton quantities of these glycosaminoglycans are used annually as pharmaceuticals and nutraceuticals. The variability of animal-sourced glycosaminoglycans, their inherent impurities, the limited availability of source tissues, the poor control of these source materials, and their manufacturing processes, suggest a need for new approaches for their production. Over the past decade there have been major efforts in the biotechnological production of these glycosaminoglycans. This mini-review focuses on the use of recombinant enzymes and metabolic engineering for the production of heparin and chondroitin sulfates. PMID:26219501

  11. Shallow seismicity in volcanic system: what role does the edifice play?

    NASA Astrophysics Data System (ADS)

    Bean, Chris; Lokmer, Ivan

    2017-04-01

    Seismicity in the upper two kilometres in volcanic systems is complex and very diverse in nature. The origins lie in the multi-physics nature of source processes and in the often extreme heterogeneity in near surface structure, which introduces strong seismic wave propagation path effects that often 'hide' the source itself. Other complicating factors are that we are often in the seismic near-field so waveforms can be intrinsically more complex than in far-field earthquake seismology. The traditional focus for an explanation of the diverse nature of shallow seismic signals is to call on the direct action of fluids in the system. Fits to model data are then used to elucidate properties of the plumbing system. Here we show that solutions based on these conceptual models are not unique and that models based on a diverse range of quasi-brittle failure of low stiffness near surface structures are equally valid from a data fit perspective. These earthquake-like sources also explain aspects of edifice deformation that are as yet poorly quantified.

  12. Deviant Earthquakes: Data-driven Constraints on the Variability in Earthquake Source Properties and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel Taylor

    The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the dynamical source properties of small and large earthquakes obey self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
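
    For context on the self-similarity question, the standard Brune (1970) relations often used with spectral-decomposition outputs convert a fitted corner frequency and seismic moment into a stress drop. The sketch below uses illustrative values, not results from the thesis.

      # Brune stress drop from corner frequency and seismic moment (illustrative values).
      def brune_stress_drop(M0, fc, beta=3500.0, k=0.37):
          r = k * beta / fc                 # source radius [m], k ~ 0.37 for S waves
          return 7.0 * M0 / (16.0 * r**3)   # stress drop [Pa]

      M0 = 10 ** (1.5 * 4.0 + 9.1)          # seismic moment of an Mw 4.0 event [N m]
      print(brune_stress_drop(M0, fc=2.0) / 1e6, "MPa")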

  13. Golf ball-assisted electrospray ionization of mass spectrometry for the determination of trace amino acids in complex samples.

    PubMed

    Li, Yen-Hsien; Chen, Chung-Yu; Kuo, Cheng-Hsiung; Lee, Maw-Rong

    2016-09-28

    During the electrospray ionization (ESI) process, ions move through a heated capillary aperture to be detected on arrival at a mass analyzer. However, the ESI process creates an ion plume, which expands into an ion cloud with an area larger than that of the heated capillary aperture, significantly contributing to an ion loss of 50% due to coulombic repulsion. The use of DC and RF fields to focus ions from the ion source into the vacuum chamber has been proposed in the literature, but the improvement of ion transmission efficiency is limited. To improve ion transmission, in this study we propose a novel method using a home-made golf ball positioned between the ion source and the inlet of the mass analyzer to hydrodynamically focus the ions passing through the golf ball. As the ion plume produced by the ESI process passes through the golf ball, the size of the ion cloud is reduced and the ions are focused, so that most of them flow into the mass analyzer and the sensitivity is improved. The aim of this investigation is to study the signal enhancement achieved using golf ball-assisted electrospray ionization liquid chromatography tandem mass spectrometry (LC-MS/MS) to determine 20 trace amino acids in complex samples, including tea, urine and serum. The results showed that the analytical performance of the determination of the 20 amino acids in tea, urine and serum samples using the home-made golf ball-assisted ESI source is better than that of a commercial ESI source. The signal intensities of the 20 amino acids were enhanced by factors of 2-2700, 11-2525, and 31-342680 in oolong tea, urine and serum analyses, respectively. The precision of the proposed method ranged from 1-9%, 0.4-9% and 0.4-8% at low, medium and high concentration levels of amino acids, respectively. The home-made golf ball-assisted ESI source effectively increased the signal intensity and enhanced the ion transmission efficiency and is also an easy, convenient and economical device. This technique can be applied to the analysis of trace compounds in complex matrices. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. The big data-big model (BDBM) challenges in ecological research

    NASA Astrophysics Data System (ADS)

    Luo, Y.

    2015-12-01

    The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies in the ecological community. Many sensor networks have been established to collect data. For example, satellites, such as Terra and OCO-2 among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine feedbacks of the terrestrial carbon cycle to global change. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from sensors from those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe major processes that underlie complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex in order to address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big data sources with complex models has to tackle Big Data-Big Model (BDBM) challenges. Those challenges include interoperability of multiple, heterogeneous data sets; intractability of structural complexity of big models; equifinality of model structure selection and parameter estimation; and computational demand of global optimization with Big Models.

  15. Identifying Sources of Clinical Conflict: A Tool for Practice and Training in Bioethics Mediation.

    PubMed

    Bergman, Edward J

    2015-01-01

    Bioethics mediators manage a wide range of clinical conflict emanating from diverse sources. Parties to clinical conflict are often not fully aware of, nor willing to express, the true nature and scope of their conflict. As such, a significant task of the bioethics mediator is to help define that conflict. The ability to assess and apply the tools necessary for an effective mediation process can be facilitated by each mediator's creation of a personal compendium of sources that generate clinical conflict, to provide an orientation for the successful management of complex dilemmatic cases. Copyright 2015 The Journal of Clinical Ethics. All rights reserved.

  16. BLENDING LOW ENRICHED URANIUM WITH DEPLETED URANIUM TO CREATE A SOURCE MATERIAL ORE THAT CAN BE PROCESSED FOR THE RECOVERY OF YELLOWCAKE AT A CONVENTIONAL URANIUM MILL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schutt, Stephen M.; Hochstein, Ron F.; Frydenlund, David C.

    2003-02-27

    Throughout the United States Department of Energy (DOE) complex, there are a number of streams of low enriched uranium (LEU) that contain various trace contaminants. These surplus nuclear materials require processing in order to meet commercial fuel cycle specifications. To date, they have not been designated as waste for disposal at the DOE's Nevada Test Site (NTS). Currently, with no commercial outlet available, the DOE is evaluating treatment and disposal as the ultimate disposition path for these materials. This paper will describe an innovative program that will provide a solution to DOE that will allow disposition of these materials at a cost that will be competitive with treatment and disposal at the NTS, while at the same time recycling the material to recover a valuable energy resource (yellowcake) for reintroduction into the commercial nuclear fuel cycle. International Uranium (USA) Corporation (IUSA) and Nuclear Fuel Services, Inc. (NFS) have entered into a commercial relationship to pursue the development of this program. The program involves the design of a process and construction of a plant at NFS' site in Erwin, Tennessee, for the blending of contaminated LEU with depleted uranium (DU) to produce a uranium source material ore (USM Ore™). The USM Ore™ will then be further processed at IUC's White Mesa Mill, located near Blanding, Utah, to produce conventional yellowcake, which can be delivered to conversion facilities, in the same manner as yellowcake that is produced from natural ores or other alternate feed materials. The primary source of feed for the business will be the significant sources of trace contaminated materials within the DOE complex. NFS has developed a dry blending process (DRYSM Process) to blend the surplus LEU material with DU at its Part 70 licensed facility, to produce USM Ore™ with a U235 content within the range of U235 concentrations for source material. By reducing the U235 content to source material levels in this manner, the material will be suitable for processing at a conventional uranium mill under its existing Part 40 license to remove contaminants and enable the product to re-enter the commercial fuel cycle. The tailings from processing the USM Ore™ at the mill will be permanently disposed of in the mill's tailings impoundment as 11e.(2) byproduct material. Blending LEU with DU to make a uranium source material ore that can be returned to the nuclear fuel cycle for processing to produce yellowcake has never been accomplished before. This program will allow DOE to disposition its surplus LEU and DU in a cost effective manner, and at the same time provide for the recovery of valuable energy resources that would be lost through processing and disposal of the materials. This paper will discuss the nature of the surplus LEU and DU materials, the manner in which the LEU will be blended with DU to form a uranium source material ore, and the legal means by which this blending can be accomplished at a facility licensed under 10 CFR Part 70 to produce ore that can be processed at a conventional uranium mill licensed under 10 CFR Part 40.

  17. A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon

    PubMed Central

    Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.

    2015-01-01

    A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with a context-determined number of dependent variables and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials on outflow of the model does not describe well the composition of material at other model locations, and inferences using outflow data vary according to the model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, the distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208
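
    To make the class of equations concrete, the sketch below shows a method-of-lines discretization of a single first-order transport equation with a stiff, non-linear uptake term, solved with an implicit integrator. It is only an illustration of the numerical setting described above; the equation, parameters and variable names are invented for the example and are not taken from the paper's model.

    # Method-of-lines sketch for u_t + v u_x = -k u/(u + K): first-order transport
    # with a stiff, saturable (non-linear) source term. All values are placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    nx, L, v, k, K = 200, 1.0, 1.0, 5.0, 0.05
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    u_in = 1.0                                   # inflow concentration

    def rhs(t, u):
        dudx = np.empty_like(u)
        dudx[0] = (u[0] - u_in) / dx             # upwind difference at the inflow
        dudx[1:] = (u[1:] - u[:-1]) / dx
        return -v * dudx - k * u / (u + K)       # advection + stiff uptake

    sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(nx), method="BDF")   # implicit, stiff-safe
    print("outflow concentration at t = 2:", sol.y[-1, -1])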

  18. A role for the membrane Golgi protein Ema in autophagy.

    PubMed

    Kim, Sungsu; DiAntonio, Aaron

    2012-08-01

    Autophagy is a cellular homeostatic response that involves degradation of self-components by the double-membraned autophagosome. The biogenesis of autophagosomes has been well described, but the ensuing processes after autophagosome formation are not clear. In our recent study, we proposed a model in which the Golgi complex contributes to the growth of autophagic structures, and that the Drosophila melanogaster membrane protein Ema promotes this process. In fat body cells of the D. melanogaster ema mutant, the recruitment of the Golgi complex protein Lava lamp (Lva) to autophagic structures is impaired and autophagic structures are very small. In addition, in the ema mutant autophagic turnover of SQSTM1/p62 and mitophagy are impaired. Our study not only identifies a role for Ema in autophagy, but also supports the hypothesis that the Golgi complex may be a potential membrane source for the biogenesis and development of autophagic structures.

  19. Factors governing dissolution process of lignocellulosic biomass in ionic liquid: current status, overview and challenges.

    PubMed

    Badgujar, Kirtikumar C; Bhanage, Bhalchandra M

    2015-02-01

    The utilisation of non-feed lignocellulosic biomass as a source of renewable bio-energy and for the synthesis of fine chemical products is necessary for sustainable development. The methods for the dissolution of lignocellulosic biomass in conventional solvents are complex and tedious due to the complex chemical ultra-structure of biomass. In view of this, the recent development of ionic liquid (IL) solvents has received great attention, as ILs can solubilise such complex biomass and thus provide industrial scale-up potential. In this review, we discuss the state of the art for the dissolution of lignocellulosic material in representative ILs. Furthermore, various process parameters and their influence on biomass dissolution are reviewed. In addition, an overview of challenges and opportunities related to this interesting area is presented. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ˜175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
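
    The heart of the source association step is a positional cross-match of newly extracted sources against the accumulating catalog. The sketch below illustrates that idea in plain Python with a bounding-box prefilter and a small matching radius; it is not the paper's MonetDB/SQL RANGE-JOIN implementation, and the coordinates, radius and array names are illustrative only.

    # Minimal positional cross-match: assign each new source to the nearest
    # catalog source within radius_deg, or -1 if none matches (illustrative).
    import numpy as np

    def associate(new_ra, new_dec, cat_ra, cat_dec, radius_deg=0.001):
        matches = np.full(len(new_ra), -1, dtype=int)
        for i, (ra, dec) in enumerate(zip(new_ra, new_dec)):
            # cheap bounding-box prefilter before the exact distance test
            box = (np.abs(cat_dec - dec) < radius_deg) & \
                  (np.abs(cat_ra - ra) * np.cos(np.radians(dec)) < radius_deg)
            idx = np.flatnonzero(box)
            if idx.size:
                d2 = ((cat_ra[idx] - ra) * np.cos(np.radians(dec)))**2 \
                     + (cat_dec[idx] - dec)**2
                if d2.min() < radius_deg**2:
                    matches[i] = idx[np.argmin(d2)]
        return matches

    cat_ra, cat_dec = np.array([10.0000, 10.0500]), np.array([20.0000, 20.0500])
    new_ra, new_dec = np.array([10.0002, 11.0]), np.array([20.0001, 21.0])
    print(associate(new_ra, new_dec, cat_ra, cat_dec))   # -> [0 -1]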

  1. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    PubMed

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data-large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources-all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
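
    As a concrete illustration of the kind of analysis described above, the short sketch below handles cohort imbalance via class weighting and estimates n-fold cross-validated performance of a model-free classifier. It is a hypothetical, simplified example, not the PPMI protocol: the feature matrix is synthetic and class weighting stands in for the paper's rebalancing procedure.

    # Minimal sketch: imbalanced two-class cohort, class-weighted SVM,
    # n-fold cross-validated balanced accuracy (illustrative data only).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Hypothetical features (e.g., clinical scores, age, imaging indices):
    # 400 controls vs. 100 cases.
    X = np.vstack([rng.normal(0.0, 1.0, (400, 12)),
                   rng.normal(0.7, 1.0, (100, 12))])
    y = np.r_[np.zeros(400), np.ones(100)]

    # class_weight="balanced" reweights samples inversely to class frequency,
    # one simple alternative to explicit cohort rebalancing.
    clf = SVC(kernel="rbf", class_weight="balanced")
    scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
    print("5-fold balanced accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))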

  2. DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.

    2010-01-01

    A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex with stationary and convecting noise sources, convecting flows that are the sources themselves, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS, for the array positioned at 90° (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.
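
    For orientation, DAMAS-style deconvolution amounts to solving a linear system relating the beamformed ("dirty") map to the underlying source distribution through the array point-spread function, with non-negativity enforced by Gauss-Seidel-like sweeps. The sketch below shows that general idea on a tiny synthetic problem; the matrix, data and function names are illustrative and do not reproduce the paper's implementation.

    # Simplified DAMAS-style iteration: solve A x = y for a non-negative source
    # map x, where A is a (synthetic) point-spread-function matrix.
    import numpy as np

    def damas_like(A, y, n_iter=500):
        n = len(y)
        x = np.zeros(n)
        for _ in range(n_iter):
            for i in range(n):
                # residual at grid point i, excluding its own contribution
                r = y[i] - A[i, :] @ x + A[i, i] * x[i]
                x[i] = max(r / A[i, i], 0.0)   # non-negativity constraint
        return x

    A = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.3],
                  [0.1, 0.3, 1.0]])            # toy PSF with side-lobe leakage
    x_true = np.array([2.0, 0.0, 1.0])
    y = A @ x_true                             # beamformed ("dirty") map
    print(damas_like(A, y))                    # recovers approximately [2, 0, 1]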

  3. Detailed Chemical Characterization of Unresolved Complex Mixtures (UCM) in Atmospheric Organics: Insights into Emission Sources, Atmospheric Processing and Secondary Organic Aerosol Formation

    EPA Science Inventory

    Recent studies suggest that semivolatile organic compounds (SVOCs) are important precursors to secondary organic aerosol (SOA) in urban atmospheres. However, knowledge of the chemical composition of SVOCs is limited by current analytical techniques, which are typically unable to...

  4. Near-road enhancement and solubility of fine and coarse particulate matter trace elements near a major interstate in Detroit, Michigan

    EPA Science Inventory

    Communities near major roadways are disproportionately affected by traffic-related air pollution which can contribute to adverse health outcomes. The specific role of particulate matter (PM) from traffic sources is not fully understood due to complex emissions processes and physi...

  5. EVALUATING EFFECTS OF LOW QUALITY HABITATS ON REGIONAL GROWTH IN PEROMYSCUS LEUCOPUS: INSIGHTS FROM FIELD-PARAMETERIZED SPATIAL MATRIX MODELS.

    EPA Science Inventory

    Due to complex population dynamics and source-sink metapopulation processes, animal fitness sometimes varies across landscapes in ways that cannot be deduced from simple density patterns. In this study, we examine spatial patterns in fitness using a combination of intensive fiel...

  6. Making ResourceFULL™ Decisions: A Process Model for Civic Engagement

    ERIC Educational Resources Information Center

    Radke, Barbara; Chazdon, Scott

    2015-01-01

    Many public issues are becoming more complex and interconnected, and cannot be resolved by any one individual or entity. Research shows that an informed decision is not enough. Addressing these issues requires authentic civic engagement (deliberative dialogue) with the public to reach resourceFULL™ decisions--decisions based on diverse sources of information…

  7. English as a Lingua Franca: A Source of Identity for Young Europeans?

    ERIC Educational Resources Information Center

    Gnutzmann, Claus; Jakisch, Jenny; Rabe, Frank

    2014-01-01

    As a result of globalisation and the European integration process, identity concepts of young Europeans are becoming more and more diverse and possibly heterogeneous. The factors that influence the development of identity formation and impact on identity constructions are complex--but language seems to be of central importance. It is generally…

  8. Facing climate change in forests and fields

    Treesearch

    Amy Daniels; Nancy Shaw; Dave Peterson; Keith Nislow; Monica Tomosy; Mary Rowland

    2014-01-01

    As a growing body of science shows, climate change impacts on wildlife are already profound - from shifting species' ranges and altering the synchronicity of food sources to changing the availability of water. Such impacts are only expected to increase in the coming decades. As climate change shapes complex, interwoven ecological processes, novel conditions and...

  9. Measuring Spontaneous and Instructed Evaluation Processes during Web Search: Integrating Concurrent Thinking-Aloud Protocols and Eye-Tracking Data

    ERIC Educational Resources Information Center

    Gerjets, Peter; Kammerer, Yvonne; Werner, Benita

    2011-01-01

    Web searching for complex information requires appropriately evaluating diverse sources of information. Information science studies have identified different criteria applied by searchers to evaluate Web information. However, the explicit evaluation instructions used in these studies might have resulted in a distortion of spontaneous evaluation…

  10. Real-time complex event processing for cloud resources

    NASA Astrophysics Data System (ADS)

    Adam, M.; Cordeiro, C.; Field, L.; Giordano, D.; Magnoni, L.

    2017-10-01

    The ongoing integration of clouds into the WLCG raises the need for detailed health and performance monitoring of the virtual resources in order to prevent problems of degraded service and interruptions due to undetected failures. When working at scale, the existing monitoring diversity can lead to a metric overflow whereby the operators need to manually collect and correlate data from several monitoring tools and frameworks, resulting in tens of different metrics to be constantly interpreted and analyzed per virtual machine. In this paper we present an ESPER-based standalone application which is able to process complex monitoring events coming from various sources and automatically interpret data in order to issue alarms upon the resources’ statuses, without interfering with the actual resources and data sources. We will describe how this application has been used with both commercial and non-commercial cloud activities, allowing the operators to quickly be alarmed and react to misbehaving VMs and LHC experiments’ workflows. We will present the pattern analysis mechanisms being used, as well as the surrounding Elastic and REST API interfaces where the alarms are collected and served to users.
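
    To illustrate the complex-event-processing idea in the abstract above, the sketch below raises an alarm when a virtual machine reports a sustained high CPU metric over a sliding window. It is written in plain Python purely for illustration; it is not the ESPER API, and the event schema, metric name and thresholds are hypothetical.

    # Toy sliding-window CEP rule: alarm when a VM reports cpu_load > 90%
    # in 3 consecutive events (all names and thresholds are illustrative).
    from collections import defaultdict, deque

    WINDOW, THRESHOLD = 3, 90.0
    history = defaultdict(lambda: deque(maxlen=WINDOW))

    def process_event(event):
        """event: dict with 'vm', 'metric', 'value' keys (hypothetical schema)."""
        if event["metric"] != "cpu_load":
            return None
        window = history[event["vm"]]
        window.append(event["value"])
        if len(window) == WINDOW and all(v > THRESHOLD for v in window):
            return {"alarm": "sustained_high_cpu", "vm": event["vm"]}
        return None

    stream = [{"vm": "vm-42", "metric": "cpu_load", "value": v}
              for v in (95.0, 97.0, 99.0)]
    for ev in stream:
        alarm = process_event(ev)
        if alarm:
            print(alarm)   # -> {'alarm': 'sustained_high_cpu', 'vm': 'vm-42'}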

  11. Physical Conditions of Eta Car Complex Environment Revealed From Photoionization Modeling

    NASA Technical Reports Server (NTRS)

    Verner, E. M.; Bruhweiler, F.; Nielsen, K. E.; Gull, T.; Kober, G. Vieira; Corcoran, M.

    2006-01-01

    The very massive star, Eta Carinae, is enshrouded in an unusual complex environment of nebulosities and structures. The circumstellar gas gives rise to distinct absorption and emission components at different velocities and distances from the central source(s). Through photoionization modeling, we find that the radiation field from the more massive B-star companion supports the low ionization structure throughout the 5.54 year period. The radiation field of an evolved O-star is required to produce the higher ionization emission seen across the broad maximum. Our studies utilize the HST/STIS data and model calculations of various regimes from doubly ionized species (T = 10,000 K) to the low temperature (T = 760 K) conditions conducive to molecule formation (CH and OH). The overall analysis suggests high depletion in C and O and enrichment in He and N. The sharp molecular and ionic absorptions in this extensively CNO-processed material offer a unique environment for studying the chemistry, dust formation processes, and nucleosynthesis in the ejected layers of a highly evolved massive star.

  12. Integration science and distributed networks

    NASA Astrophysics Data System (ADS)

    Landauer, Christopher; Bellman, Kirstie L.

    2002-07-01

    Our work on integration of data and knowledge sources is based in a common theoretical treatment of 'Integration Science', which leads to systematic processes for combining formal logical and mathematical systems, computational and physical systems, and human systems and organizations. The theory is based on the processing of explicit meta-knowledge about the roles played by the different knowledge sources and the methods of analysis and semantic implications of the different data values, together with information about the context in which and the purpose for which they are being combined. The research treatment is primarily mathematical, and though this kind of integration mathematics is still under development, there are some applicable common threads that have emerged already. Instead of describing the current state of the mathematical investigations, since they are not yet crystallized enough for formalisms, we describe our applications of the approach in several different areas, including our focus area of 'Constructed Complex Systems', which are complex heterogeneous systems managed or mediated by computing systems. In this context, it is important to remember that all systems are embedded, all systems are autonomous, and that all systems are distributed networks.

  13. Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field

    PubMed Central

    Paisley, John

    2018-01-01

    The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity. PMID:29806015

  14. The Complex Sol-Gel Process for producing small ThO2 microspheres

    NASA Astrophysics Data System (ADS)

    Brykala, Marcin; Rogowski, Marcin

    2016-05-01

    Thorium-based fuels offer several benefits compared to uranium-based fuels; thus, they might be an attractive alternative to conventional fuel types. This study is devoted to the synthesis and the characterization of small thorium dioxide microspheres (Ø <50 μm). Their preparation uses a powder-free route, called the Complex Sol-Gel Process. The source sols used for the processes were prepared by dissolving solid thorium nitrate in a starting ascorbic acid solution and partially neutralizing it with aqueous ammonia under pH control. The microspheres of thorium-ascorbate gel were obtained using the ICHTJ Process (INCT in English). The studies allowed us to determine an optimal heat treatment, with a calcination temperature of 700 °C and a heating rate not higher than 2 °C/min, which enabled us to obtain crack-free microsphere surfaces. The main parameters that have a strong influence on the synthesis method and on the features of the spherical particles of thorium dioxide are described in this article.

  15. Nuclear Criticality Safety Data Book

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollenbach, D. F.

    The objective of this document is to support the revision of criticality safety process studies (CSPSs) for the Uranium Processing Facility (UPF) at the Y-12 National Security Complex (Y-12). This design analysis and calculation (DAC) document contains development and justification for generic inputs typically used in Nuclear Criticality Safety (NCS) DACs to model both normal and abnormal conditions of processes at UPF to support CSPSs. This will provide consistency between NCS DACs and efficiency in preparation and review of DACs, as frequently used data are provided in one reference source.

  16. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  17. Towards the understanding of network information processing in biology

    NASA Astrophysics Data System (ADS)

    Singh, Vijay

    Living organisms perform incredibly well in detecting a signal present in the environment. This information processing is achieved near optimally and quite reliably, even though the sources of signals are highly variable and complex. The work in the last few decades has given us a fair understanding of how individual signal processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, cellular-signaling circuits, etc., involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from the first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can be identified using coarse graining approaches. This could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation and information theory. Such coarse graining approaches identify features that are essential for certain processes performed by underlying biological networks. We find that long-range connections in the brain allow for global scale feature detection in a signal. These also suppress the noise and remove any gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with temporal scales. Our observations indicate that the rules in multivariate signal processing are quite different from traditional single unit signal processing.

  18. Principal process analysis of biological models.

    PubMed

    Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc

    2018-06-14

    Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.
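
    The activity-thresholding idea behind Principal Process Analysis can be sketched as follows: along a simulated trajectory, each term ("process") of an ODE right-hand side is marked active or inactive according to its magnitude relative to the dominant term. The toy model, threshold and names below are illustrative only and are not the authors' implementation or their circadian model.

    # Toy example: decompose dx/dt into named processes and measure how often
    # each one is "active" (>= 10% of the dominant term) along the trajectory.
    import numpy as np
    from scipy.integrate import solve_ivp

    processes = {
        "production":  lambda t, x: 1.0,
        "degradation": lambda t, x: -0.5 * x,
        "removal":     lambda t, x: -2.0 * x / (x + 10.0),
    }

    def rhs(t, x):
        return [sum(p(t, x[0]) for p in processes.values())]

    sol = solve_ivp(rhs, (0.0, 20.0), [0.0], dense_output=True)
    t_grid = np.linspace(0.0, 20.0, 201)
    x_grid = sol.sol(t_grid)[0]

    threshold = 0.1
    dom = np.max(np.abs([[q(t, x) for q in processes.values()]
                         for t, x in zip(t_grid, x_grid)]), axis=1)
    for name, p in processes.items():
        mag = np.abs([p(t, x) for t, x in zip(t_grid, x_grid)])
        print(f"{name}: active over {np.mean(mag >= threshold * dom):.0%} of the trajectory")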

  19. [Patient safety: Glossary].

    PubMed

    Sabio Paz, Verónica; Panattieri, Néstor D; Cristina Godio, Farmacéutica; Ratto, María E; Arpí, Lucrecia; Dackiewicz, Nora

    2015-10-01

    Patient safety and quality of care has become a challenge for health systems. Health care is an increasingly complex and risky activity, as it represents a combination of human, technological and organizational processes. It is necessary, therefore, to take effective actions to reduce the adverse events and mitigate its impact. This glossary is a local adaptation of key terms and concepts from the international bibliographic sources. The aim is providing a common language for assessing patient safety processes and compare them.

  20. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

    DOEpatents

    Hively, Lee M [Philadelphia, TN]

    2011-07-12

    The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

  1. Scientific and technical complex for modeling, researching and testing of rocket-space vehicles’ electric power installations

    NASA Astrophysics Data System (ADS)

    Bezruchko, Konstantin; Davidov, Albert

    2009-01-01

    This article describes a scientific and technical complex for modeling, researching and testing of rocket-space vehicles' electric power installations, created in the Power Source Laboratory of the National Aerospace University "KhAI". This scientific and technical complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time expenditures of modeling, researching and testing rocket-space vehicles' power installations. Using this complex, the problems of designing and researching rocket-space vehicles' power installations can be solved efficiently, and experimental studies of physical processes and tests of solar and chemical batteries of rocket-space complexes and space vehicles can be carried out. The complex also allows accelerated tests, diagnostics, life-time control and restoration of chemical accumulators for rocket-space vehicles' power supply systems.

  2. BROADBAND RADIO POLARIMETRY AND FARADAY ROTATION OF 563 EXTRAGALACTIC RADIO SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, C. S.; Gaensler, B. M.; Feain, I. J.

    2015-12-10

    We present a broadband spectropolarimetric survey of 563 discrete, mostly unresolved radio sources between 1.3 and 2.0 GHz using data taken with the Australia Telescope Compact Array. We have used rotation-measure synthesis to identify Faraday-complex polarized sources, those objects whose frequency-dependent polarization behavior indicates the presence of material possessing complicated magnetoionic structure along the line of sight (LOS). For sources classified as Faraday-complex, we have analyzed a number of their radio and multiwavelength properties to determine whether they differ from Faraday-simple polarized sources (sources for which LOS magnetoionic structures are comparatively simple) in these properties. We use this information to constrain the physical nature of the magnetoionic structures responsible for generating the observed complexity. We detect Faraday complexity in 12% of polarized sources at ∼1′ resolution, but we demonstrate that underlying signal-to-noise limitations mean the true percentage is likely to be significantly higher in the polarized radio source population. We find that the properties of Faraday-complex objects are diverse, but that complexity is most often associated with depolarization of extended radio sources possessing a relatively steep total intensity spectrum. We find an association between Faraday complexity and LOS structure in the Galactic interstellar medium (ISM) and claim that a significant proportion of the Faraday complexity we observe may be generated at interfaces of the ISM associated with ionization fronts near neutral hydrogen structures. Galaxy cluster environments and internally generated Faraday complexity provide possible alternative explanations in some cases.
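
    For reference, rotation-measure synthesis is commonly written in the broader literature (this is the standard form, not a quotation from this paper) as an inversion of the complex polarization P(λ²) = Q + iU into a Faraday dispersion function F(φ) of Faraday depth φ:

        F(\phi) \;\propto\; \int_{-\infty}^{\infty} P(\lambda^{2})\, e^{-2 i \phi (\lambda^{2} - \lambda_{0}^{2})}\, d\lambda^{2},

    where λ₀² is a reference wavelength squared. Broadly speaking, Faraday-simple sources are consistent with a single point-like component in F(φ), whereas Faraday-complex sources are not.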

  3. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    NASA Astrophysics Data System (ADS)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene"---that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information---is one of the most important and complex skills utilized by normal hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and thus these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation to the auditory nerve. In particular, bilateral cochlear implants (BICIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may either be absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occurs within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) Investigate source segregation abilities in children with NH and with BiCIs; (2) Examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) Investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) Examine source segregation abilities in NH listeners, from school-age to adults.

  4. fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization

    NASA Astrophysics Data System (ADS)

    Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda

    2010-03-01

    Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.
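
    As a small illustration of the independent-component idea underlying this kind of analysis, the sketch below unmixes a synthetic fMRI-like data matrix (time points × voxels) with FastICA. It demonstrates the general ICA decomposition only, not the specific probabilistic ICA (PICA) estimator or the GLM/PICA hybrid used in the study; the data and component count are invented.

    # Unmix a synthetic time-by-voxel matrix into two component time courses.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_time, n_voxels = 120, 500

    s1 = np.sin(np.linspace(0, 8 * np.pi, n_time))      # "task-like" component
    s2 = rng.standard_normal(n_time).cumsum()           # slow drift component
    S = np.c_[s1, s2]
    mixing = rng.standard_normal((2, n_voxels))
    X = S @ mixing + 0.2 * rng.standard_normal((n_time, n_voxels))

    components = FastICA(n_components=2, random_state=0).fit_transform(X)
    print(components.shape)   # (120, 2): estimated component time courses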

  5. Lee-Wick black holes

    NASA Astrophysics Data System (ADS)

    Bambi, Cosimo; Modesto, Leonardo; Wang, Yixu

    2017-01-01

    We derive and study an approximate static vacuum solution generated by a point-like source in a higher derivative gravitational theory with a pair of complex conjugate ghosts. The gravitational theory is local and characterized by a high derivative operator compatible with Lee-Wick unitarity. In particular, the tree-level two-point function only shows a pair of complex conjugate poles besides the massless spin two graviton. We show that singularity-free black holes exist when the mass of the source M exceeds a critical value Mcrit. For M >Mcrit the spacetime structure is characterized by an outer event horizon and an inner Cauchy horizon, while for M =Mcrit we have an extremal black hole with vanishing Hawking temperature. The evaporation process leads to a remnant that approaches the zero-temperature extremal black hole state in an infinite amount of time.

  6. Turning a remotely controllable observatory into a fully autonomous system

    NASA Astrophysics Data System (ADS)

    Swindell, Scott; Johnson, Chris; Gabor, Paul; Zareba, Grzegorz; Kubánek, Petr; Prouza, Michael

    2014-08-01

    We describe a complex process needed to turn an existing, old, operational observatory - The Steward Observatory's 61" Kuiper Telescope - into a fully autonomous system, which observers without an observer. For this purpose, we employed RTS2,1 an open sourced, Linux based observatory control system, together with other open sourced programs and tools (GNU compilers, Python language for scripting, JQuery UI for Web user interface). This presentation provides a guide with time estimates needed for a newcomers to the field to handle such challenging tasks, as fully autonomous observatory operations.

  7. Modelling fully-coupled Thermo-Hydro-Mechanical (THM) processes in fractured reservoirs using GOLEM: a massively parallel open-source simulator

    NASA Astrophysics Data System (ADS)

    Jacquey, Antoine; Cacace, Mauro

    2017-04-01

    Utilization of the underground for energy-related purposes have received increasing attention in the last decades as a source for carbon-free energy and for safe storage solutions. Understanding the key processes controlling fluid and heat flow around geological discontinuities such as faults and fractures as well as their mechanical behaviours is therefore of interest in order to design safe and sustainable reservoir operations. These processes occur in a naturally complex geological setting, comprising natural or engineered discrete heterogeneities as faults and fractures, span a relatively large spectrum of temporal and spatial scales and they interact in a highly non-linear fashion. In this regard, numerical simulators have become necessary in geological studies to model coupled processes and complex geological geometries. In this study, we present a new simulator GOLEM, using multiphysics coupling to characterize geological reservoirs. In particular, special attention is given to discrete geological features such as faults and fractures. GOLEM is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE). The MOOSE framework provides a powerful and flexible platform to solve multiphysics problems implicitly and in a tightly coupled manner on unstructured meshes which is of interest for the considered non-linear context. Governing equations in 3D for fluid flow, heat transfer (conductive and advective), saline transport as well as deformation (elastic and plastic) have been implemented into the GOLEM application. Coupling between rock deformation and fluid and heat flow is considered using theories of poroelasticity and thermoelasticity. Furthermore, considering material properties such as density and viscosity and transport properties such as porosity as dependent on the state variables (based on the International Association for the Properties of Water and Steam models) increase the coupling complexity of the problem. The GOLEM application aims therefore at integrating more physical processes observed in the field or in the laboratory to simulate more realistic scenarios. The use of high-level nonlinear solver technology allow us to tackle these complex multiphysics problems in three dimensions. Basic concepts behing the GOLEM simulator will be presented in this study as well as a few application examples to illustrate its main features.

  8. A Comparison of Mathematical Models of Fish Mercury Concentration as a Function of Atmospheric Mercury Deposition Rate and Watershed Characteristics

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Moore, R. B.; Shanley, J. B.; Miller, E. K.; Kamman, N. C.; Nacci, D.

    2009-12-01

    Mercury (Hg) concentrations in fish and aquatic wildlife are complex functions of atmospheric Hg deposition rate, terrestrial and aquatic watershed characteristics that influence Hg methylation and export, and food chain characteristics determining Hg bioaccumulation. Because of the complexity and incomplete understanding of these processes, regional-scale models of fish tissue Hg concentration are necessarily empirical in nature, typically constructed through regression analysis of fish tissue Hg concentration data from many sampling locations on a set of potential explanatory variables. Unless the data sets are unusually long and show clear time trends, the empirical basis for model building must be based solely on spatial correlation. Predictive regional scale models are highly useful for improving understanding of the relevant biogeochemical processes, as well as for practical fish and wildlife management and human health protection. Mechanistically, the logical arrangement of explanatory variables is to multiply each of the individual Hg source terms (e.g. dry, wet, and gaseous deposition rates, and residual watershed Hg) for a given fish sampling location by source-specific terms pertaining to methylation, watershed transport, and biological uptake for that location (e.g. SO4 availability, hill slope, lake size). This mathematical form has the desirable property that predicted tissue concentration will approach zero as all individual source terms approach zero. One complication with this form, however, is that it is inconsistent with the standard linear multiple regression equation in which all terms (including those for sources and physical conditions) are additive. An important practical disadvantage of a model in which the Hg source terms are additive (rather than multiplicative) with their modifying factors is that predicted concentration is not zero when all sources are zero, making it unreliable for predicting the effects of large future reductions in Hg deposition. In this paper we compare the results of using several different linear and non-linear models in an analysis of watershed and fish Hg data for 450 New England lakes. The differences in model results pertain to both their utility in interpreting methylation and export processes as well as in fisheries management.
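
    Schematically, the contrast drawn above between the two regression arrangements can be written as follows (the symbols are illustrative, not the paper's notation), with deposition source terms D_k, watershed/food-chain modifier functions f_k, and additional covariates x_j:

        \text{multiplicative form:}\qquad C_{\text{fish}} = \sum_{k} D_{k}\, f_{k}(\text{watershed},\ \text{food chain}), \qquad C_{\text{fish}} \to 0 \ \text{as all } D_{k} \to 0,

        \text{additive linear form:}\qquad C_{\text{fish}} = \beta_{0} + \sum_{k}\beta_{k} D_{k} + \sum_{j}\gamma_{j} x_{j}, \qquad C_{\text{fish}} \not\to 0 \ \text{in general},

    which is why the multiplicative arrangement is better suited to predicting the effect of large future reductions in deposition.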

  9. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial set overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
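
    For context, the baseline computation whose repeated O(n³) solve the paper's incremental and recursive strategies avoid redoing from scratch is the ordinary Kriging system itself. The sketch below estimates a field at one location from a handful of observations; the spherical variogram model, its parameters and the data are assumed for illustration, not taken from the paper.

    # Single ordinary Kriging estimate: build the bordered variogram system
    # [[Gamma, 1], [1^T, 0]] [w, mu]^T = [gamma0, 1]^T and solve it (O(n^3)).
    import numpy as np

    def ordinary_kriging(xy, z, x0, sill=1.0, rng_param=10.0, nugget=0.0):
        def gamma(h):  # spherical variogram model (an assumed choice)
            h = np.minimum(h, rng_param)
            return nugget + (sill - nugget) * (1.5*h/rng_param - 0.5*(h/rng_param)**3)

        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
        w = np.linalg.solve(A, b)        # the O(n^3) step
        return w[:n] @ z

    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    z = np.array([1.0, 2.0, 1.5, 2.5])
    print(ordinary_kriging(xy, z, np.array([0.5, 0.5])))   # -> 1.75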

  10. Free Energy and Virtual Reality in Neuroscience and Psychoanalysis: A Complexity Theory of Dreaming and Mental Disorder.

    PubMed

    Hopkins, Jim

    2016-01-01

    The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment-a complexity theory-of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements-including interoceptive impingements that report compliance with biological imperatives-and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference-by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity-reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on "active systems" accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection.
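
    The "complexity minus accuracy" decomposition referred to above is usually written in the variational free-energy literature (a standard form, not a quotation from this paper) with q the recognition density over hidden causes ϑ and y the sensory data:

        F \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(\vartheta)\,\|\,p(\vartheta)\,\big]}_{\text{complexity}} \;-\; \underbrace{\mathbb{E}_{q(\vartheta)}\big[\ln p(y \mid \vartheta)\big]}_{\text{accuracy}},

    so that free energy falls either by making predictions more accurate (as in waking perception) or by making the model less complex (as in the pruning and consolidation processes discussed above).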

  11. A role for relaxed selection in the evolution of the language capacity

    PubMed Central

    Deacon, Terrence W.

    2010-01-01

    Explaining the extravagant complexity of the human language and our competence to acquire it has long posed challenges for natural selection theory. To answer his critics, Darwin turned to sexual selection to account for the extreme development of language. Many contemporary evolutionary theorists have invoked incredibly lucky mutation or some variant of the assimilation of acquired behaviors to innate predispositions in an effort to explain it. Recent evodevo approaches have identified developmental processes that help to explain how complex functional synergies can evolve by Darwinian means. Interestingly, many of these developmental mechanisms bear a resemblance to aspects of Darwin's mechanism of natural selection, often differing only in one respect (e.g., form of duplication, kind of variation, competition/cooperation). A common feature is an interplay between processes of stabilizing selection and processes of relaxed selection at different levels of organism function. These may play important roles in the many levels of evolutionary process contributing to language. Surprisingly, the relaxation of selection at the organism level may have been a source of many complex synergistic features of the human language capacity, and may help explain why so much language information is “inherited” socially. PMID:20445088

  12. Improved assemblies using a source-agnostic pipeline for MetaGenomic Assembly by Merging (MeGAMerge) of contigs

    DOE PAGES

    Scholz, Matthew; Lo, Chien -Chi; Chain, Patrick S. G.

    2014-10-01

    Assembly of metagenomic samples is a very complex process, with algorithms designed to address sequencing platform-specific issues (read length, data volume, and/or community complexity), while also faced with genomes that differ greatly in nucleotide compositional biases and in abundance. To address these issues, we have developed a post-assembly process: MetaGenomic Assembly by Merging (MeGAMerge). We compare this process to the performance of several assemblers, using both real and in-silico generated samples of different community composition and complexity. MeGAMerge consistently outperforms individual assembly methods, producing larger contigs with an increased number of predicted genes, without replication of data. MeGAMerge contigs are supported by read mapping and contig alignment data, when using synthetically-derived and real metagenomic data, as well as by gene prediction analyses and similarity searches. Ultimately, MeGAMerge is a flexible method that generates improved metagenome assemblies, with the ability to accommodate upcoming sequencing platforms, as well as present and future assembly algorithms.

  13. Towards a robust framework for Probabilistic Tsunami Hazard Assessment (PTHA) for local and regional tsunami in New Zealand

    NASA Astrophysics Data System (ADS)

    Mueller, Christof; Power, William; Fraser, Stuart; Wang, Xiaoming

    2013-04-01

    Probabilistic Tsunami Hazard Assessment (PTHA) is conceptually closely related to Probabilistic Seismic Hazard Assessment (PSHA). The main difference is that PTHA needs to simulate propagation of tsunami waves through the ocean and cannot rely on attenuation relationships, which makes PTHA computationally more expensive. The wave propagation process can be assumed to be linear as long as water depth is much larger than the wave amplitude of the tsunami. Beyond this limit a non-linear scheme has to be employed with significantly higher algorithmic run times. PTHA considering far-field tsunami sources typically uses unit source simulations, and relies on the linearity of the process by later scaling and combining the wave fields of individual simulations to represent the intended earthquake magnitude and rupture area. Probabilistic assessments are typically made for locations offshore but close to the coast. Inundation is calculated only for significantly contributing events (de-aggregation). For local and regional tsunami it has been demonstrated that earthquake rupture complexity has a significant effect on the tsunami amplitude distribution offshore and also on inundation. In this case PTHA has to take variable slip distributions and non-linearity into account. A unit source approach cannot easily be applied. Rupture complexity is seen as an aleatory uncertainty and can be incorporated directly into the rate calculation. We have developed a framework that manages the large number of simulations required for local PTHA. As an initial case study the effect of rupture complexity on tsunami inundation and the statistics of the distribution of wave heights have been investigated for plate-interface earthquakes in the Hawke's Bay region in New Zealand. Assessing the probability that water levels will be in excess of a certain threshold requires the calculation of empirical cumulative distribution functions (ECDF). We compare our results with traditional estimates for tsunami inundation simulations that do not consider rupture complexity. De-aggregation based on moment magnitude alone might not be appropriate, because the hazard posed by any individual event can be underestimated locally if rupture complexity is ignored.
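
    The exceedance-probability step mentioned above can be illustrated with a few lines of code: build an empirical distribution from simulated maximum wave heights at a site and read off the probability of exceeding a threshold. The heights below are synthetic placeholders, not output of the New Zealand simulations.

    # Empirical exceedance probability P(H > threshold) from simulated heights.
    import numpy as np

    heights = np.array([0.4, 0.7, 1.1, 1.3, 1.8, 2.2, 2.9, 3.5, 4.1, 5.0])  # metres

    def exceedance_probability(samples, threshold):
        # 1 - ECDF(threshold): fraction of simulated events exceeding the level
        return np.mean(samples > threshold)

    for h in (1.0, 2.0, 3.0):
        print(f"P(H > {h} m) = {exceedance_probability(heights, h):.2f}")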

  14. Complex Signal Kurtosis and Independent Component Analysis for Wideband Radio Frequency Interference Detection

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam; Mohammed, Priscilla; Bradley, Damon; Piepmeier, Jeffrey; Wong, Englin; Gholian, Armen

    2016-01-01

    Radio-frequency interference (RFI) has negatively impacted scientific measurements across a wide variety of passive remote sensing satellites. This has been observed in the L-band radiometers SMOS and Aquarius and, more recently, SMAP [1, 2]. RFI has also been observed at higher frequencies such as K band [3]. Improvements in technology have allowed wider bandwidth digital back ends for passive microwave radiometry. A complex signal kurtosis radio frequency interference detector was developed to help identify corrupted measurements [4]. This work explores the use of ICA (Independent Component Analysis) as a blind source separation technique to pre-process radiometric signals for use with the previously developed real and complex signal kurtosis detectors.
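
    As a rough illustration of the kurtosis-based detection idea (not the detector of [4] itself), the normalised fourth moment of a zero-mean complex voltage signal is close to 2 for thermal, circularly Gaussian noise, so departures from that value flag likely RFI. A minimal Python sketch, with all names and numbers hypothetical:

      import numpy as np

      def complex_signal_kurtosis(x):
          """Normalised fourth moment E[|x|^4] / E[|x|^2]^2 of a zero-mean
          complex signal; ~2 for circular Gaussian noise, so large departures
          suggest RFI contamination."""
          x = np.asarray(x) - np.mean(x)
          return np.mean(np.abs(x) ** 4) / np.mean(np.abs(x) ** 2) ** 2

      rng = np.random.default_rng(1)
      noise = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
      rfi = noise + 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(4096))  # CW interferer
      print(complex_signal_kurtosis(noise), complex_signal_kurtosis(rfi))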

  15. Characterization and Remediation of Contaminated Sites:Modeling, Measurement and Assessment

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.

    2008-05-01

    The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter at field sites contaminated with dense non-aqueous phase liquids or DNAPLs in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation reduced proportionally with the mean (i.e., coefficient of variation of flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
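
    The approximately linear relation between source mass depletion and mean contaminant flux reported above is often summarised with a power-law source-strength function, J/J0 = (M/M0)^Gamma, where Gamma near 1 corresponds to the linear case. A small illustrative Python sketch, with hypothetical parameter names and values:

      def source_flux(mass_fraction_remaining, j0, gamma=1.0):
          """Power-law source-strength model J = j0 * (M/M0)**gamma.
          gamma = 1 reproduces the approximately linear mass-depletion /
          flux-reduction behaviour described above."""
          return j0 * mass_fraction_remaining ** gamma

      # Flux after 60% of the DNAPL mass has been removed, for a hypothetical
      # initial source strength of 10 g/day
      print(source_flux(0.4, j0=10.0))               # linear depletion (gamma = 1)
      print(source_flux(0.4, j0=10.0, gamma=0.5))    # more gradual flux reduction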

  16. Reasons for Demotivation across Years of Study: Voices from Iranian English Major Students

    ERIC Educational Resources Information Center

    Hassaskhah, Jaleh; Mahdavi Zafarghandi, Amir; Fazeli, Maryam

    2015-01-01

    Language learning failure is often directly related to demotivation. The purpose of this study is to examine the process of demotivation and identify its sources within four years of an undergraduate degree programme. To this end, based on the complex dynamic systems perspective of the dynamic systems theories (DSTs), the demotivation test battery…

  17. Expression analysis of genes associated with sucrose accumulation in sugarcane (Saccharum spp. hybrids) varieties differing in content and time of peak sucrose storage

    USDA-ARS?s Scientific Manuscript database

    Sucrose synthesis/accumulation in sugarcane is a complex process involving many genes and regulatory sequences that control biochemical events in source-sink tissues. Among these, sucrose synthase (SuSy), sucrose-phosphate synthase (SPS), soluble acid (SAI) and cell-wall invertase (CWI) are importan...

  18. The Endurance of Children's Working Memory: A Recall Time Analysis

    ERIC Educational Resources Information Center

    Towse, John N.; Hitch, Graham J.; Hamilton, Z.; Pirrie, Sarah

    2008-01-01

    We analyze the timing of recall as a source of information about children's performance in complex working memory tasks. A group of 8-year-olds performed a traditional operation span task in which sequence length increased across trials and an operation period task in which processing requirements were extended across trials of constant sequence…

  19. A Primer for Accounting Certification: Complete Analysis of the Process with Listing of Sources

    ERIC Educational Resources Information Center

    Boyd, David T.; Boyd, Sanithia C.; Berry, Priscilla

    2009-01-01

    As a result of globalization and the growth and complexity of both domestic and international bodies requiring accountants, the need for highly sophisticated training and specific certification is mandatory. Students seeking career positions in the field of accounting are amazingly left without the easy access to certification that one might think…

  20. Volcanic deformation of Atosanupuri volcanic complex in the Kussharo caldera, Japan, from 1993 to 2016 revealed by JERS-1, ALOS, and ALOS-2 radar interferometry

    NASA Astrophysics Data System (ADS)

    Fujiwara, Satoshi; Murakami, Makoto; Nishimura, Takuya; Tobita, Mikio; Yarai, Hiroshi; Kobayashi, Tomokazu

    2017-06-01

    A series of uplifts and subsidences of a volcanic complex in the Kussharo caldera in eastern Hokkaido (Japan) has been revealed by interferometric analysis using archived satellite synthetic aperture radar data. A time series of interferograms from 1993 to 1998 showed the temporal evolution of a ground deformation process. The horizontal dimension of the deformation field was about 10 km in diameter, and the maximum amplitude of the deformation was >20 cm. Uplift started in 1994, and concurrent earthquake swarm activity was observed around the uplift area; however, no other phenomena were observed during this period. A subsidence process then followed, with the shape of the deformation forming a mirror image of the uplift. Model simulations suggest deformation was caused by a source at the depth of about 6 km and that the position of the source remained static throughout the episode. Subsidence of the volcanic complex was also observed by another satellite from 2007 to 2010, and likely continued for more than 10 years. In addition to the main uplift-subsidence sequence, small deformation patterns with short spatial wavelengths were observed at the center of the deforming area. Data from three satellites recorded small-scale subsidence of the Atosanupuri and Rishiri lava domes at a constant rate of approx. 1 cm/year from 1993 to 2016.

  1. Unifying cancer and normal RNA sequencing data from different sources

    PubMed Central

    Wang, Qingguo; Armenia, Joshua; Zhang, Chao; Penson, Alexander V.; Reznik, Ed; Zhang, Liguo; Minet, Thais; Ochoa, Angelica; Gross, Benjamin E.; Iacobuzio-Donahue, Christine A.; Betel, Doron; Taylor, Barry S.; Gao, Jianjiong; Schultz, Nikolaus

    2018-01-01

    Driven by the recent advances of next generation sequencing (NGS) technologies and an urgent need to decode complex human diseases, a multitude of large-scale studies were conducted recently that have resulted in an unprecedented volume of whole transcriptome sequencing (RNA-seq) data, such as the Genotype Tissue Expression project (GTEx) and The Cancer Genome Atlas (TCGA). While these data offer new opportunities to identify the mechanisms underlying disease, the comparison of data from different sources remains challenging, due to differences in sample and data processing. Here, we developed a pipeline that processes and unifies RNA-seq data from different studies, which includes uniform realignment, gene expression quantification, and batch effect removal. We find that uniform alignment and quantification is not sufficient when combining RNA-seq data from different sources and that the removal of other batch effects is essential to facilitate data comparison. We have processed data from GTEx and TCGA and successfully corrected for study-specific biases, enabling comparative analysis between TCGA and GTEx. The normalized datasets are available for download on figshare. PMID:29664468
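
    As a simplified illustration of the batch-effect-removal step (the published pipeline's actual method is more elaborate; the names and data below are hypothetical), one can at minimum re-centre log-scale expression per study before cross-study comparison:

      import numpy as np
      import pandas as pd

      def center_by_study(log_expr, study_labels):
          """Illustrative batch correction: subtract each study's per-gene mean
          in log space so genes are on a comparable scale across sources.
          Real pipelines typically use empirical-Bayes methods (e.g. ComBat)."""
          corrected = log_expr.copy()
          for study in study_labels.unique():
              cols = study_labels[study_labels == study].index
              corrected[cols] = log_expr[cols].sub(log_expr[cols].mean(axis=1), axis=0)
          return corrected

      # Toy genes-by-samples matrix with two hypothetical studies
      expr = pd.DataFrame(np.random.rand(5, 6) * 10, columns=[f"s{i}" for i in range(6)])
      labels = pd.Series(["GTEx"] * 3 + ["TCGA"] * 3, index=expr.columns)
      print(center_by_study(np.log2(expr + 1), labels))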

  2. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  3. Wave field synthesis of moving virtual sound sources with complex radiation properties.

    PubMed

    Ahrens, Jens; Spors, Sascha

    2011-11-01

    An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.

  4. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

    In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations, this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1). Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
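
    A common way to realise such a stochastic, self-affine slip distribution is spectral synthesis: impose a power-law wavenumber spectrum on random phases and inverse-transform. The sketch below is illustrative only (the published model's exact spectral falloff and tapering may differ), with hypothetical parameter values:

      import numpy as np

      def self_affine_slip(nx, nz, decay=2.0, mean_slip=1.0, seed=0):
          """Random 2-D slip with a power-law (self-affine) wavenumber spectrum,
          built by inverse-FFT of random phases with amplitude ~ k**(-decay/2)."""
          rng = np.random.default_rng(seed)
          kx = np.fft.fftfreq(nx)
          kz = np.fft.fftfreq(nz)
          k = np.sqrt(kx[None, :] ** 2 + kz[:, None] ** 2)
          k[0, 0] = k[k > 0].min()                     # avoid dividing by zero at DC
          amplitude = k ** (-decay / 2.0)
          phase = np.exp(2j * np.pi * rng.random((nz, nx)))
          slip = np.real(np.fft.ifft2(amplitude * phase))
          slip -= slip.min()                           # keep slip non-negative
          return slip * (mean_slip / slip.mean())      # scale to the target mean slip

      slip = self_affine_slip(128, 64)
      print(slip.shape, round(slip.mean(), 3))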

  5. Sparsity-promoting inversion for modeling of irregular volcanic deformation source

    NASA Astrophysics Data System (ADS)

    Zhai, G.; Shirzaei, M.

    2016-12-01

    Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. Time-dependence of the model also allows identifying periods of correlated or anti-correlated behaviors between reservoirs. Hence, we suggest that the reservoirs are likely connected and form a complex magmatic reservoir [Zhai and Shirzaei, 2016]. To obtain a physically-meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in background crust). The major steps include inverting surface deformation data using a hybrid L-1 and L-2 norm regularization approach to solve for sparse volume change distribution and then implementing a BEM-based method to solve for opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniformly pressurized boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including torus, prolate ellipsoid, and sphere as well as horizontal and vertical L-shape bodies. The results show that source dimension, depth and shape are well recovered. Afterward, we apply this modeling scheme to deformation observed at the Kīlauea summit to constrain the magmatic source geometry, and revise the kinematics of Kīlauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir and the method can readily be applied to other volcanic settings.
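
    A minimal sketch of the hybrid L1/L2 (sparsity-promoting) inversion idea, written as a proximal-gradient (ISTA-style) solver; this is an illustration under simple assumptions, not the authors' implementation, and all names and values are hypothetical:

      import numpy as np

      def hybrid_l1_l2_inversion(G, d, lam1=0.1, lam2=0.01, n_iter=500):
          """Solve  min 0.5*||G m - d||^2 + lam1*||m||_1 + 0.5*lam2*||m||^2
          with proximal-gradient steps; the L1 term promotes a sparse set of
          volume-change sources, the L2 term stabilises the solution."""
          m = np.zeros(G.shape[1])
          step = 1.0 / (np.linalg.norm(G, 2) ** 2 + lam2)   # 1 / Lipschitz constant
          for _ in range(n_iter):
              grad = G.T @ (G @ m - d) + lam2 * m
              z = m - step * grad
              m = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft threshold
          return m

      # Toy problem: 50 observations, 200 candidate point sources, 5 of them active
      rng = np.random.default_rng(2)
      G = rng.standard_normal((50, 200))
      m_true = np.zeros(200)
      m_true[rng.choice(200, 5, replace=False)] = 1.0
      d = G @ m_true + 0.01 * rng.standard_normal(50)
      print(np.count_nonzero(np.abs(hybrid_l1_l2_inversion(G, d)) > 0.1))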

  6. TRACC: An open source software for processing sap flux data from thermal dissipation probes

    DOE PAGES

    Ward, Eric J.; Domec, Jean-Christophe; King, John; ...

    2017-05-02

    Here, thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs requires calibration to a theoretical zero-flow value (ΔT0), usually based upon the assumption that at least some nighttime measurements represent zero-flow conditions. Fully automating the processing of data from TDPs is made exceedingly difficult due to errors arising from many sources. However, it is desirable to minimize variation arising from different researchers’ processing data, and thus, a common platform for processing data, including editing raw data and determination of ΔT0, is useful and increases the transparency and replicability of TDP-based research. Here, we present the TDP data processing software TRACC (Thermal dissipation Review Assessment Cleaning and Conversion) to serve this purpose. TRACC is an open-source software written in the language R, using graphical presentation of data and on screen prompts with yes/no or simple numerical responses. It allows the user to select several important options, such as calibration coefficients and the exclusion of nights when vapor pressure deficit does not approach zero. Although it is designed for users with no coding experience, the outputs of TRACC could be easily incorporated into more complex models or software.
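
    For orientation, the ΔT0 calibration mentioned above feeds a Granier-style conversion from the probe temperature difference to sap flux density. A minimal sketch follows; the coefficients shown are the commonly cited defaults and are the kind of user-selectable calibration coefficients TRACC exposes, so treat the names and values as illustrative:

      def sap_flux_density(delta_t, delta_t0, a=119.0, b=1.231):
          """Granier-style calibration: K = (delta_t0 - delta_t) / delta_t and
          sap flux density u = a * K**b.  With a = 119 the result is in units of
          1e-6 m3 m-2 s-1; a and b are calibration coefficients chosen by the user."""
          k = max((delta_t0 - delta_t) / delta_t, 0.0)
          return a * k ** b

      # Hypothetical midday reading against a nighttime zero-flow baseline of 10 degC
      print(sap_flux_density(delta_t=7.5, delta_t0=10.0))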

  7. TRACC: An open source software for processing sap flux data from thermal dissipation probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Eric J.; Domec, Jean-Christophe; King, John

    Here, thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs requires calibration to a theoretical zero-flow value (ΔT0), usually based upon the assumption that at least some nighttime measurements represent zero-flow conditions. Fully automating the processing of data from TDPs is made exceedingly difficult due to errors arising from many sources. However, it is desirable to minimize variation arising from different researchers’ processing data, and thus, a common platform for processing data, including editing raw data and determination of ΔT0, is useful and increases the transparency and replicability of TDP-based research. Here, we present the TDP data processing software TRACC (Thermal dissipation Review Assessment Cleaning and Conversion) to serve this purpose. TRACC is an open-source software written in the language R, using graphical presentation of data and on screen prompts with yes/no or simple numerical responses. It allows the user to select several important options, such as calibration coefficients and the exclusion of nights when vapor pressure deficit does not approach zero. Although it is designed for users with no coding experience, the outputs of TRACC could be easily incorporated into more complex models or software.

  8. A Protocol for Using Förster Resonance Energy Transfer (FRET)-force Biosensors to Measure Mechanical Forces across the Nuclear LINC Complex.

    PubMed

    Arsenovic, Paul T; Bathula, Kranthidhar; Conway, Daniel E

    2017-04-11

    The LINC complex has been hypothesized to be the critical structure that mediates the transfer of mechanical forces from the cytoskeleton to the nucleus. Nesprin-2G is a key component of the LINC complex that connects the actin cytoskeleton to membrane proteins (SUN domain proteins) in the perinuclear space. These membrane proteins connect to lamins inside the nucleus. Recently, a Förster Resonance Energy Transfer (FRET)-force probe was cloned into mini-Nesprin-2G (Nesprin-TS (tension sensor)) and used to measure tension across Nesprin-2G in live NIH3T3 fibroblasts. This paper describes the process of using Nesprin-TS to measure LINC complex forces in NIH3T3 fibroblasts. To extract FRET information from Nesprin-TS, an outline of how to spectrally unmix raw spectral images into acceptor and donor fluorescent channels is also presented. Using open-source software (ImageJ), images are pre-processed and transformed into ratiometric images. Finally, FRET data of Nesprin-TS is presented, along with strategies for how to compare data across different experimental groups.
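
    As a rough illustration of the ratiometric-image step described above (after spectral unmixing into acceptor and donor channels), the per-pixel ratio can be computed with background subtraction and masking of weak-donor pixels; the names and thresholds below are hypothetical:

      import numpy as np

      def fret_ratio_image(acceptor, donor, background=0.0, min_donor=10.0):
          """Pixel-wise acceptor/donor ratio from spectrally unmixed channels.
          Pixels with weak donor signal are masked (NaN) to avoid unstable ratios."""
          acc = np.asarray(acceptor, dtype=float) - background
          don = np.asarray(donor, dtype=float) - background
          ratio = np.full(acc.shape, np.nan)
          valid = don > min_donor
          ratio[valid] = acc[valid] / don[valid]
          return ratio

      # Toy 4x4 images standing in for unmixed acceptor and donor channels
      rng = np.random.default_rng(3)
      acceptor = rng.uniform(50, 100, (4, 4))
      donor = rng.uniform(0, 100, (4, 4))
      print(fret_ratio_image(acceptor, donor, background=5.0))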

  9. Biological reduction of chlorinated solvents: Batch-scale geochemical modeling

    NASA Astrophysics Data System (ADS)

    Kouznetsova, Irina; Mao, Xiaomin; Robinson, Clare; Barry, D. A.; Gerhard, Jason I.; McCarty, Perry L.

    2010-09-01

    Simulation of biodegradation of chlorinated solvents in dense non-aqueous phase liquid (DNAPL) source zones requires a model that accounts for the complexity of processes involved and that is consistent with available laboratory studies. This paper describes such a comprehensive modeling framework that includes microbially mediated degradation processes, microbial population growth and decay, geochemical reactions, as well as interphase mass transfer processes such as DNAPL dissolution, gas formation and mineral precipitation/dissolution. All these processes can be in equilibrium or kinetically controlled. A batch modeling example was presented where the degradation of trichloroethene (TCE) and its byproducts and concomitant reactions (e.g., electron donor fermentation, sulfate reduction, pH buffering by calcite dissolution) were simulated. Local and global sensitivity analysis techniques were applied to delineate the dominant model parameters and processes. Sensitivity analysis indicated that accurate values for parameters related to dichloroethene (DCE) and vinyl chloride (VC) degradation (i.e., DCE and VC maximum utilization rates, yield due to DCE utilization, decay rate for DCE/VC dechlorinators) are important for prediction of the overall dechlorination time. These parameters influence the maximum growth rate of the DCE and VC dechlorinating microorganisms and, thus, the time required for a small initial population to reach a sufficient concentration to significantly affect the overall rate of dechlorination. Self-inhibition of chlorinated ethenes at high concentrations and natural buffering provided by the sediment were also shown to significantly influence the dechlorination time. Furthermore, the analysis indicated that the rates of the competing, nonchlorinated electron-accepting processes relative to the dechlorination kinetics also affect the overall dechlorination time. Results demonstrated that the model developed is a flexible research tool that is able to provide valuable insight into the fundamental processes and their complex interactions during bioremediation of chlorinated ethenes in DNAPL source zones.
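
    The kinetic ingredients highlighted by the sensitivity analysis (maximum utilization rates, yields, decay rates, and self-inhibition at high concentrations) can be illustrated with a Monod/Haldane-type rate law. The sketch below is a generic form with hypothetical parameter names and values, not the published model's exact equations:

      def dechlorination_rate(s, x, k_max, K_s, K_i=None):
          """Monod-type utilisation rate for a chlorinated ethene at concentration s
          with dechlorinator biomass x.  If K_i is given, a Haldane term represents
          self-inhibition at high substrate concentrations."""
          inhibition = (s ** 2 / K_i) if K_i else 0.0
          return k_max * x * s / (K_s + s + inhibition)

      def biomass_growth(rate, yield_coeff, decay, x):
          """Net dechlorinator growth: yield on substrate utilisation minus
          first-order decay."""
          return yield_coeff * rate - decay * x

      # Toy numbers: TCE at 0.5 mM, biomass 0.01 mg/L
      r = dechlorination_rate(s=0.5, x=0.01, k_max=2.0, K_s=0.1, K_i=5.0)
      print(r, biomass_growth(r, yield_coeff=0.05, decay=0.02, x=0.01))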

  10. Discrete Cu(i) complexes for azide-alkyne annulations of small molecules inside mammalian cells.

    PubMed

    Miguel-Ávila, Joan; Tomás-Gamasa, María; Olmos, Andrea; Pérez, Pedro J; Mascareñas, José L

    2018-02-21

    The archetype reaction of "click" chemistry, namely, the copper-promoted azide-alkyne cycloaddition (CuAAC), has found an impressive number of applications in biological chemistry. However, methods for promoting intermolecular annulations of exogenous, small azides and alkynes in the complex interior of mammalian cells are essentially unknown. Herein we demonstrate that isolated, well-defined copper(i)-tris(triazolyl) complexes featuring designed ligands can readily enter mammalian cells and promote intracellular CuAAC annulations of small, freely diffusible molecules. In addition to simplifying protocols and avoiding the addition of "non-innocent" reductants, the use of these premade copper complexes leads to more efficient processes than with the alternative, in situ made copper species prepared from Cu(ii) sources, tris(triazole) ligands and sodium ascorbate. Under the reaction conditions, the well-defined copper complexes exhibit very good cell penetration properties, and do not present significant toxicities.

  11. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence.

    PubMed

    McLaughlin, Katie A; Garrad, Megan C; Somerville, Leah H

    2015-12-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds.

  12. [Parricide, abuse and emotional processes: a review starting from some paradigmatic cases].

    PubMed

    Grattagliano, I; Greco, R; Di Vella, G; Campobasso, C P; Corbi, G; Romanelli, M C; Petruzzelli, N; Ostuni, A; Brunetti, V; Cassibba, R

    2015-01-01

    The authors of this study tackle the complex subject of parricide, which is a rare and often brutal form of homicide. Parricide has a high emotional impact on public opinion and on our collective imagination, especially in light of the fact that the perpetrators are often minors. Three striking cases of parricide, taken from various documented sources and judicial files from the "N. Fornelli" Juvenile Penal Institute (Bari, Italy), are presented here. A review of the literature on the topic has revealed differences between parricides committed by adults and those committed by minors. Ultimately, the complex issues underlying such an unusual crime are connected to the abuse and maltreatment that minor perpetrators of parricide have suffered, and especially to the emotional processes that are activated.

  13. The Context, Process, and Outcome Evaluation Model for Organisational Health Interventions

    PubMed Central

    Fridrich, Annemarie; Jenny, Gregor J.; Bauer, Georg F.

    2015-01-01

    To facilitate evaluation of complex, organisational health interventions (OHIs), this paper aims at developing a context, process, and outcome (CPO) evaluation model. It builds on previous model developments in the field and advances them by clearly defining and relating generic evaluation categories for OHIs. Context is defined as the underlying frame that influences and is influenced by an OHI. It is further differentiated into the omnibus and discrete contexts. Process is differentiated into the implementation process, as the time-limited enactment of the original intervention plan, and the change process of individual and collective dynamics triggered by the implementation process. These processes lead to proximate, intermediate, and distal outcomes, as all results of the change process that are meaningful for various stakeholders. Research questions that might guide the evaluation of an OHI according to the CPO categories and a list of concrete themes/indicators and methods/sources applied within the evaluation of an OHI project at a hospital in Switzerland illustrate the model's applicability in structuring evaluations of complex OHIs. In conclusion, the model supplies a common language and a shared mental model for improving communication between researchers and company members and will improve the comparability and aggregation of evaluation study results. PMID:26557665

  14. The Context, Process, and Outcome Evaluation Model for Organisational Health Interventions.

    PubMed

    Fridrich, Annemarie; Jenny, Gregor J; Bauer, Georg F

    2015-01-01

    To facilitate evaluation of complex, organisational health interventions (OHIs), this paper aims at developing a context, process, and outcome (CPO) evaluation model. It builds on previous model developments in the field and advances them by clearly defining and relating generic evaluation categories for OHIs. Context is defined as the underlying frame that influences and is influenced by an OHI. It is further differentiated into the omnibus and discrete contexts. Process is differentiated into the implementation process, as the time-limited enactment of the original intervention plan, and the change process of individual and collective dynamics triggered by the implementation process. These processes lead to proximate, intermediate, and distal outcomes, as all results of the change process that are meaningful for various stakeholders. Research questions that might guide the evaluation of an OHI according to the CPO categories and a list of concrete themes/indicators and methods/sources applied within the evaluation of an OHI project at a hospital in Switzerland illustrate the model's applicability in structuring evaluations of complex OHIs. In conclusion, the model supplies a common language and a shared mental model for improving communication between researchers and company members and will improve the comparability and aggregation of evaluation study results.

  15. Use of a new Trichoderma harzianum strain isolated from the Amazon rainforest with pretreated sugar cane bagasse for on-site cellulase production.

    PubMed

    Delabona, Priscila da Silva; Farinas, Cristiane Sanchez; da Silva, Mateus Ribeiro; Azzoni, Sindelia Freitas; Pradella, José Geraldo da Cruz

    2012-03-01

    The on-site production of cellulases is an important strategy for the development of sustainable second-generation ethanol production processes. This study concerns the use of a specific cellulolytic enzyme complex for hydrolysis of pretreated sugar cane bagasse. Glycosyl hydrolases (FPase, xylanase, and β-glucosidase) were produced using a new strain of Trichoderma harzianum, isolated from the Amazon rainforest and cultivated under different conditions. The influence of the carbon source was first investigated using shake-flask cultures. Selected carbon sources were then further studied under different pH conditions using a stirred tank bioreactor. Enzymatic activities up to 121 FPU/g, 8000 IU/g, and 1730 IU/g of delignified steam-exploded bagasse+sucrose were achieved for cellulase, xylanase and β-glucosidase, respectively. This enzymatic complex was used to hydrolyze pretreated sugar cane bagasse. A comparative evaluation, using an enzymatic extract from Trichoderma reesei RUTC30, indicated similar performance of the T. harzianum enzyme complex, being a potential candidate for on-site production of enzymes. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Shutterless ion mobility spectrometer with fast pulsed electron source

    NASA Astrophysics Data System (ADS)

    Bunert, E.; Heptner, A.; Reinecke, T.; Kirk, A. T.; Zimmermann, S.

    2017-02-01

    Ion mobility spectrometers (IMS) are devices for fast and very sensitive trace gas analysis. The measuring principle is based on an initial ionization process of the target analyte. Most IMS employ radioactive electron sources, such as 63Ni or 3H. These radioactive materials have the disadvantage of legal restrictions, and the electron emission has a predetermined intensity and cannot be controlled or disabled. In this work, we replaced the 3H source of our IMS, which has a 100 mm drift tube, with our nonradioactive electron source, which generates spectra comparable to those of the 3H source. An advantage of our emission current controlled nonradioactive electron source is that it can operate in a fast pulsed mode with high electron intensities. By optimizing the geometric parameters and developing fast control electronics, we can achieve very short electron emission pulses for ionization with high intensities and an adjustable pulse width of down to a few nanoseconds. This results in small ion packets at simultaneously high ion densities, which are subsequently separated in the drift tube. Normally, the required small ion packet is generated by a complex ion shutter mechanism. By omitting the additional reaction chamber, the ion packet can be generated directly at the beginning of the drift tube by our pulsed nonradioactive electron source with only a slight reduction in resolving power. Thus, the complex and costly shutter mechanism and its electronics can also be omitted, which leads to a simple low-cost IMS-system with a pulsed nonradioactive electron source and a resolving power of 90.

  17. The Evolution of Biological Complexity in Digital Organisms

    NASA Astrophysics Data System (ADS)

    Ofria, Charles

    2013-03-01

    When Darwin first proposed his theory of evolution by natural selection, he realized that it had a problem explaining the origins of traits of "extreme perfection and complication" such as the vertebrate eye. Critics of Darwin's theory have latched onto this perceived flaw as a proof that Darwinian evolution is impossible. In anticipation of this issue, Darwin described the perfect data needed to understand this process, but lamented that such data are "scarcely ever possible" to obtain. In this talk, I will discuss research where we use populations of digital organisms (self-replicating and evolving computer programs) to elucidate the genetic and evolutionary processes by which new, highly-complex traits arise, drawing inspiration directly from Darwin's wistful thinking and hypotheses. During the process of evolution in these fully-transparent computational environments we can measure the incorporation of new information into the genome, a process akin to a natural Maxwell's Demon, and identify the original source of any such information. We show that, as Darwin predicted, much of the information used to encode a complex trait was already in the genome as part of simpler evolved traits, and that many routes must be possible for a new complex trait to have a high probability of successfully evolving. In even more extreme examples of the evolution of complexity, we are now using these same principles to examine the evolutionary dynamics that drive major transitions in evolution; that is, transitions to higher levels of organization, which are some of the most complex evolutionary events to occur in nature. Finally, I will explore some of the implications of this research for other aspects of evolutionary biology, as well as ways that these evolutionary principles can be applied toward solving computational and engineering problems.

  18. The Prevailing Catalytic Role of Meteorites in Formamide Prebiotic Processes.

    PubMed

    Saladino, Raffaele; Botta, Lorenzo; Di Mauro, Ernesto

    2018-02-22

    Meteorites are consensually considered to be involved in the origin of life on this Planet for several functions and at different levels: (i) as providers of impact energy during their passage through the atmosphere; (ii) as agents of geodynamics, intended both as starters of the Earth's tectonics and as activators of local hydrothermal systems upon their fall; (iii) as sources of organic materials, at varying levels of limited complexity; and (iv) as catalysts. The consensus about the relevance of these functions differs. We focus on the catalytic activities of the various types of meteorites in reactions relevant for prebiotic chemistry. Formamide was selected as the chemical precursor and various sources of energy were analyzed. The results show that all the meteorites and all the different energy sources tested actively afford complex mixtures of biologically-relevant compounds, indicating the robustness of the formamide-based prebiotic chemistry involved. Although in some cases the yields of products are quite small, the diversity of the detected compounds of biochemical significance underlines the prebiotic importance of meteorite-catalyzed condensation of formamide.

  19. NeuroPG: open source software for optical pattern generation and data acquisition

    PubMed Central

    Avants, Benjamin W.; Murphy, Daniel B.; Dapello, Joel A.; Robinson, Jacob T.

    2015-01-01

    Patterned illumination using a digital micromirror device (DMD) is a powerful tool for optogenetics. Compared to a scanning laser, DMDs are inexpensive and can easily create complex illumination patterns. Combining these complex spatiotemporal illumination patterns with optogenetics allows DMD-equipped microscopes to probe neural circuits by selectively manipulating the activity of many individual cells or many subcellular regions at the same time. To use DMDs to study neural activity, scientists must develop specialized software to coordinate optical stimulation patterns with the acquisition of electrophysiological and fluorescence data. To meet this growing need, we have developed open source optical pattern generation software for neuroscience—NeuroPG—that combines DMD control, sample visualization, and data acquisition in one application. Built on a MATLAB platform, NeuroPG can also process, analyze, and visualize data. The software is designed specifically for the Mightex Polygon400; however, as an open source package, NeuroPG can be modified to incorporate any data acquisition, imaging, or illumination equipment that is compatible with MATLAB’s Data Acquisition and Image Acquisition toolboxes. PMID:25784873

  20. Complex organic molecules in the Galactic Centre: the N-bearing family

    NASA Astrophysics Data System (ADS)

    Zeng, S.; Jiménez-Serra, I.; Rivilla, V. M.; Martín, S.; Martín-Pintado, J.; Requena-Torres, M. A.; Armijos-Abendaño, J.; Riquelme, D.; Aladro, R.

    2018-05-01

    We present an unbiased spectral line survey toward the Galactic Centre (GC) quiescent giant molecular cloud (QGMC), G+0.693 using the GBT and IRAM 30 telescopes. Our study highlights an extremely rich organic inventory of abundant amounts of nitrogen (N)-bearing species in a source without signatures of star formation. We report the detection of 17 N-bearing species in this source, of which 8 are complex organic molecules (COMs). A comparison of the derived abundances relative to H2 is made across various galactic and extragalactic environments. We conclude that the unique chemistry in this source is likely to be dominated by low-velocity shocks with X-rays/cosmic rays also playing an important role in the chemistry. Like previous findings obtained for O-bearing molecules, our results for N-bearing species suggest a more efficient hydrogenation of these species on dust grains in G+0.693 than in hot cores in the Galactic disk, as a consequence of the low dust temperatures coupled with energetic processing by X-ray/cosmic ray radiation in the GC.

  1. Swept-frequency feedback interferometry using terahertz frequency QCLs: a method for imaging and materials analysis.

    PubMed

    Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles

    2013-09-23

    The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.

  2. Organic Matter Remineralization Predominates Phosphorus Cycling in the Mid-Bay Sediments in the Chesapeake Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunendra, Joshi R.; Kukkadapu, Ravi K.; Burdige, David J.

    2015-05-19

    The Chesapeake Bay, the largest and most productive estuary in the US, suffers from varying degrees of water quality issues fueled by both point and non-point nutrient sources. Restoration of the bay is complicated by the multitude of nutrient sources, their variable inputs and hydrological conditions, and complex interacting factors including climate forcing. These complexities not only restrict formulation of effective restoration plans but also open up debates on accountability issues with nutrient loading. A detailed understanding of sediment phosphorus (P) dynamics enables one to identify the exchange of dissolved constituents across the sediment-water interface and helps to better constrain the mechanisms and processes controlling the coupling between the sediments and the overlying waters. Here we used phosphate oxygen isotope ratios (δ18Op) in concert with sediment chemistry, XRD, and Mössbauer spectroscopy on sediment retrieved from an organic-rich, sulfidic site in the meso-haline portion of the mid-bay to identify sources and pathways of sedimentary P cycling and to infer potential feedback effects on bottom water hypoxia and surface water eutrophication. Isotope data indicate that the regeneration of inorganic P from organic matter degradation (remineralization) is the predominant, if not sole, pathway for authigenic P precipitation in the mid-bay sediments. We interpret that the excess inorganic P generated by remineralization should have overwhelmed any bottom-water and/or pore-water P derived from other sources or biogeochemical processes and exceeded saturation with respect to authigenic P precipitation. It is the first research that identifies the predominance of the remineralization pathway over the remobilization (coupled Fe-P cycling) pathway in the Chesapeake Bay. Therefore, these results are expected to have significant implications for the current understanding of P cycling and benthic-pelagic coupling in the bay, particularly on the source and pathway of P that sustains hypoxia and supports phytoplankton growth in the surface water.

  3. The source of infrasound associated with long-period events at mount St. Helens

    USGS Publications Warehouse

    Matoza, R.S.; Garces, M.A.; Chouet, B.A.; D'Auria, L.; Hedlin, M.A.H.; De Groot-Hedlin, C.; Waite, G.P.

    2009-01-01

    During the early stages of the 2004-2008 Mount St. Helens eruption, the source process that produced a sustained sequence of repetitive long-period (LP) seismic events also produced impulsive broadband infrasonic signals in the atmosphere. To assess whether the signals could be generated simply by seismic-acoustic coupling from the shallow LP events, we perform finite difference simulation of the seismo-acoustic wavefield using a single numerical scheme for the elastic ground and atmosphere. The effects of topography, velocity structure, wind, and source configuration are considered. The simulations show that a shallow source buried in a homogeneous elastic solid produces a complex wave train in the atmosphere consisting of P/SV and Rayleigh wave energy converted locally along the propagation path, and acoustic energy originating from the source epicenter. Although the horizontal acoustic velocity of the latter is consistent with our data, the modeled amplitude ratios of pressure to vertical seismic velocity are too low in comparison with observations, and the characteristic differences in seismic and acoustic waveforms and spectra cannot be reproduced from a common point source. The observations therefore require a more complex source process in which the infrasonic signals are a record of only the broadband pressure excitation mechanism of the seismic LP events. The observations and numerical results can be explained by a model involving the repeated rapid pressure loss from a hydrothermal crack by venting into a shallow layer of loosely consolidated, highly permeable material. Heating by magmatic activity causes pressure to rise, periodically reaching the pressure threshold for rupture of the "valve" sealing the crack. Sudden opening of the valve generates the broadband infrasonic signal and simultaneously triggers the collapse of the crack, initiating resonance of the remaining fluid. Subtle waveform and amplitude variability of the infrasonic signals as recorded at an array 13.4 km to the NW of the volcano are attributed primarily to atmospheric boundary layer propagation effects, superimposed upon amplitude changes at the source. Copyright 2009 by the American Geophysical Union.

  4. Getting Astrophysical Information from LISA Data

    NASA Technical Reports Server (NTRS)

    Stebbins, R. T.; Bender, P. L.; Folkner, W. M.

    1997-01-01

    Gravitational wave signals from a large number of astrophysical sources will be present in the LISA data. Information about as many sources as possible must be estimated from time series of strain measurements. Several types of signals are expected to be present: simple periodic signals from relatively stable binary systems, chirped signals from coalescing binary systems, complex waveforms from highly relativistic binary systems, stochastic backgrounds from galactic and extragalactic binary systems and possibly stochastic backgrounds from the early Universe. The orbital motion of the LISA antenna will modulate the phase and amplitude of all these signals, except the isotropic backgrounds and thereby give information on the directions of sources. Here we describe a candidate process for disentangling the gravitational wave signals and estimating the relevant astrophysical parameters from one year of LISA data. Nearly all of the sources will be identified by searching with templates based on source parameters and directions.

  5. Source-independent full waveform inversion of seismic data

    DOEpatents

    Lee, Ki Ha

    2006-02-14

    A set of seismic trace data is collected in an input data set that is first Fourier transformed in its entirety into the frequency domain. A normalized wavefield is obtained for each trace of the input data set in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the set of seismic trace data. The normalized wavefield is source independent, complex, and dimensionless. The normalized wavefield is shown to be uniquely defined as the normalized impulse response, provided that a certain condition is met for the source. This property allows construction of the inversion algorithm disclosed herein, without any source or source coupling information. The algorithm minimizes the error between data normalized wavefield and the model normalized wavefield. The methodology is applicable to any 3-D seismic problem, and damping may be easily included in the process.
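
    A minimal sketch of the normalization step described above: Fourier-transform every trace and divide, frequency by frequency, by the spectrum of a chosen reference trace, yielding a complex, dimensionless, source-independent wavefield. The code and its parameters are illustrative, not the patented implementation:

      import numpy as np

      def normalized_wavefield(traces, reference_index=0, eps=1e-12):
          """Divide each trace spectrum by the reference-trace spectrum so that
          the common (unknown) source signature cancels out of the result."""
          spectra = np.fft.rfft(np.asarray(traces, dtype=float), axis=-1)
          reference = spectra[reference_index]
          return spectra / (reference + eps)

      # Toy gather: 8 traces of 256 samples sharing an unknown source wavelet
      rng = np.random.default_rng(4)
      wavelet = rng.standard_normal(32)
      traces = np.array([np.convolve(rng.standard_normal(225), wavelet) for _ in range(8)])
      print(normalized_wavefield(traces).shape)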

  6. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation, I will summarize the SIV strategy, the existing benchmark exercises, and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  7. Regulation of Aspergillus nidulans CreA-Mediated Catabolite Repression by the F-Box Proteins Fbx23 and Fbx47.

    PubMed

    de Assis, Leandro José; Ulas, Mevlut; Ries, Laure Nicolas Annick; El Ramli, Nadia Ali Mohamed; Sarikaya-Bayram, Ozlem; Braus, Gerhard H; Bayram, Ozgur; Goldman, Gustavo Henrique

    2018-06-19

    The attachment of one or more ubiquitin molecules by SCF (Skp-Cullin-F-box) complexes to protein substrates targets them for subsequent degradation by the 26S proteasome, allowing the control of numerous cellular processes. Glucose-mediated signaling and subsequent carbon catabolite repression (CCR) are processes relying on the functional regulation of target proteins, ultimately controlling the utilization of this carbon source. In the filamentous fungus Aspergillus nidulans, CCR is mediated by the transcription factor CreA, which modulates the expression of genes encoding biotechnologically relevant enzymes. Although CreA-mediated repression of target genes has been extensively studied, less is known about the regulatory pathways governing CCR and this work aimed at further unravelling these events. The Fbx23 F-box protein was identified as being involved in CCR and the Δfbx23 mutant presented impaired xylanase production under repressing (glucose) and derepressing (xylan) conditions. Mass spectrometry showed that Fbx23 is part of an SCF ubiquitin ligase complex that is bridged via the GskA protein kinase to the CreA-SsnF-RcoA repressor complex, resulting in the degradation of the latter under derepressing conditions. Upon the addition of glucose, CreA dissociates from the ubiquitin ligase complex and is transported into the nucleus. Furthermore, casein kinase is important for CreA function during glucose signaling, although the exact role of phosphorylation in CCR remains to be determined. In summary, this study unraveled novel mechanistic details underlying CreA-mediated CCR and provided a solid basis for studying additional factors involved in carbon source utilization which could prove useful for biotechnological applications. IMPORTANCE The production of biofuels from plant biomass has gained interest in recent years as an environmentally friendly alternative to production from petroleum-based energy sources. Filamentous fungi, which naturally thrive on decaying plant matter, are of particular interest for this process due to their ability to secrete enzymes required for the deconstruction of lignocellulosic material. A major drawback in fungal hydrolytic enzyme production is the repression of the corresponding genes in the presence of glucose, a process known as carbon catabolite repression (CCR). This report provides previously unknown mechanistic insights into CCR through elucidating part of the protein-protein interaction regulatory system that governs the CreA transcriptional regulator in the reference organism Aspergillus nidulans in the presence of glucose and the biotechnologically relevant plant polysaccharide xylan. Copyright © 2018 de Assis et al.

  8. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  9. Dance expertise modulates visual sensitivity to complex biological movements.

    PubMed

    Orlandi, Andrea; Zani, Alberto; Proverbio, Alice Mado

    2017-09-01

    Motor resonance processes that occur when observing an individual perform an action may be modulated by acquired visuomotor expertise. We used the event-related potential (EEG/ERP) technique to investigate the ability to automatically recognize a subtle difference between very similar novel contemporary dance movements. Twelve professional dancers and twelve non-dancers were shown 212 pairs of videos of complex whole-body movements that lasted 3s. The second of each pair was the repetition of the previous movement or a slight variation of it (deviance). The participants were engaged in a secondary attentional task. Modulation of a larger centro-parietal N400 effect and a reduction of the Late Positivity amplitude (repetition suppression effect) were identified in response to deviant stimuli only in the dancers. Source reconstruction (swLORETA) showed activations in biological motion, body and face processing related areas, and fronto-parietal and limbic systems. The current findings provide evidence that acquired dance expertise modifies the ability to visually code whole-body complex movements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Three-Dimensional Printing of X-Ray Computed Tomography Datasets with Multiple Materials Using Open-Source Data Processing

    ERIC Educational Resources Information Center

    Sander, Ian M.; McGoldrick, Matthew T.; Helms, My N.; Betts, Aislinn; van Avermaete, Anthony; Owers, Elizabeth; Doney, Evan; Liepert, Taimi; Niebur, Glen; Liepert, Douglas; Leevy, W. Matthew

    2017-01-01

    Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing…

  11. Teacher Evaluation in Chile: Highlights and Complexities in 13 Years of Experience

    ERIC Educational Resources Information Center

    Avalos-Bevan, Beatrice

    2018-01-01

    The paper examines the process of establishing a teacher evaluation system in Chile and its acceptance by teachers over time. The conceptual base upon which the system was established is described. Evidence is also examined from a variety of data sources and research related to the evaluation system as well as teachers' use of its results. This…

  12. Transgression, Transformation and Enlightenment: The Trickster as Poet and Teacher

    ERIC Educational Resources Information Center

    Conroy, James C.; Davis, Robert A.

    2002-01-01

    In this essay, the authors suggest that there is another, different and more ancient way of looking at the moral and social role of the teacher and the processes of education in which she is involved. This alternative perspective draws on older, more imaginative and complex sources of meaning than the latest Gallup poll or the latest adjusted…

  13. Lignocellulosic Biomass: A Sustainable Bioenergy Source for the Future.

    PubMed

    Fatma, Shabih; Hameed, Amir; Noman, Muhammad; Ahmed, Temoor; Shahid, Muhammad; Tariq, Mohsin; Sohail, Imran; Tabassum, Romana

    2018-01-01

    Increasing population and industrialization are continuously straining existing energy resources and depleting global fuel reservoirs. The elevated pollution from the continuous consumption of non-renewable fossil fuels is also seriously contaminating the environment. The use of alternative energy sources can be an environment-friendly solution to cope with these challenges. Among renewable energy sources, biofuels (biomass-derived fuels) can serve as a better alternative to reduce reliance on non-renewable fossil fuels. Bioethanol is one of the most widely consumed biofuels in today's world. The main objective of this review is to highlight the significance of lignocellulosic biomass as a potential source for the production of biofuels such as bioethanol, biodiesel, and biogas. We discuss the application of various methods for the bioconversion of lignocellulosic biomass to end products, i.e., biofuels. The lignocellulosic biomass must be pretreated to disintegrate lignocellulosic complexes and to expose its chemical components for downstream processes. After pretreatment, the lignocellulosic biomass is subjected to saccharification via either acidic or enzymatic hydrolysis. Thereafter, the monomeric sugars resulting from the hydrolysis step are further processed into biofuels, i.e., bioethanol, biodiesel, or butanol, through fermentation. The fermented, impure product is then purified through distillation to obtain pure biofuel. Renewable energy sources represent potential fuel alternatives to overcome the global energy crisis in a sustainable and eco-friendly manner. In the future, biofuels may replace conventional non-renewable energy resources owing to their renewability and several other advantages. Lignocellulosic biomass offers the most economical feedstock for generating biofuels. However, extensive research is required to develop an efficient integrated biotransformation process for the commercial production of lignocellulose-derived biofuels. Copyright © Bentham Science Publishers.

  14. The Unicellular State as a Point Source in a Quantum Biological System

    PubMed Central

    Torday, John S.; Miller, William B.

    2016-01-01

    A point source is the central and most important point or place for any group of cohering phenomena. Evolutionary development presumes that biological processes are sequentially linked, but neither directed from, nor centralized within, any specific biologic structure or stage. However, such an epigenomic entity exists and its transforming effects can be understood through the obligatory recapitulation of all eukaryotic lifeforms through a zygotic unicellular phase. This requisite biological conjunction can now be properly assessed as the focal point of reconciliation between biology and quantum phenomena, illustrated by deconvoluting complex physiologic traits back to their unicellular origins. PMID:27240413

  15. Effects of atmospheric variations on acoustic system performance

    NASA Technical Reports Server (NTRS)

    Nation, Robert; Lang, Stephen; Olsen, Robert; Chintawongvanich, Prasan

    1993-01-01

    Acoustic propagation over medium to long ranges in the atmosphere is subject to many complex, interacting effects. Of particular interest at this point is modeling low frequency (less than 500 Hz) propagation for the purpose of predicting ranges and bearing accuracies at which acoustic sources can be detected. A simple means of estimating how much of the received signal power propagated directly from the source to the receiver and how much was received by turbulent scattering was developed. The correlations between the propagation mechanism and detection thresholds, beamformer bearing estimation accuracies, and beamformer processing gain of passive acoustic signal detection systems were explored.

  16. The Earthquake‐Source Inversion Validation (SIV) Project

    USGS Publications Warehouse

    Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf

    2016-01-01

    Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.

  17. Phase-field simulations of GaN growth by selective area epitaxy on complex mask geometries

    DOE PAGES

    Aagesen, Larry K.; Coltrin, Michael Elliott; Han, Jung; ...

    2015-05-15

    Three-dimensional phase-field simulations of GaN growth by selective area epitaxy were performed. The model includes a crystallographic-orientation-dependent deposition rate and arbitrarily complex mask geometries. The orientation-dependent deposition rate can be determined from experimental measurements of the relative growth rates of low-index crystallographic facets. Growth on various complex mask geometries was simulated on both c-plane and a-plane template layers. Agreement was observed between simulations and experiment, including complex phenomena occurring at the intersections between facets. The sources of the discrepancies between simulated and experimental morphologies were also investigated. We found that the model provides a route to optimize masks and processing conditions during materials synthesis for solar cells, light-emitting diodes, and other electronic and opto-electronic applications.

  18. Advanced light source technologies that enable high-volume manufacturing of DUV lithography extensions

    NASA Astrophysics Data System (ADS)

    Cacouris, Theodore; Rao, Rajasekhar; Rokitski, Rostislav; Jiang, Rui; Melchior, John; Burfeindt, Bernd; O'Brien, Kevin

    2012-03-01

    Deep UV (DUV) lithography is being applied to pattern increasingly finer geometries, leading to solutions like double- and multiple-patterning. Such process complexities lead to higher costs due to the increasing number of steps required to produce the desired results. One of the consequences is that the lithography equipment needs to provide higher operating efficiencies to minimize the cost increases, especially for producers of memory devices that experience a rapid decline in sales prices of these products over time. In addition to having introduced higher power 193nm light sources to enable higher throughput, we previously described technologies that also enable: higher tool availability via advanced discharge chamber gas management algorithms; improved process monitoring via enhanced on-board beam metrology; and increased depth of focus (DOF) via light source bandwidth modulation. In this paper we will report on the field performance of these technologies with data that supports the desired improvements in on-wafer performance and operational efficiencies.

  19. Working Memory Capacity as a Dynamic Process

    PubMed Central

    Simmering, Vanessa R.; Perone, Sammy

    2013-01-01

    A well-known characteristic of working memory (WM) is its limited capacity. The source of such limitations, however, is a continued point of debate. Developmental research is positioned to address this debate by jointly identifying the source(s) of limitations and the mechanism(s) underlying capacity increases. Here we provide a cross-domain survey of studies and theories of WM capacity development, which reveals a complex picture: dozens of studies from 50 papers show nearly universal increases in capacity estimates with age, but marked variation across studies, tasks, and domains. We argue that the full pattern of performance cannot be captured through traditional approaches emphasizing single causes, or even multiple separable causes, underlying capacity development. Rather, we consider WM capacity as a dynamic process that emerges from a unified cognitive system flexibly adapting to the context and demands of each task. We conclude by enumerating specific challenges for researchers and theorists that will need to be met in order to move our understanding forward. PMID:23335902

  20. Source of Chronic Inflammation in Aging.

    PubMed

    Sanada, Fumihiro; Taniyama, Yoshiaki; Muratsu, Jun; Otsu, Rei; Shimizu, Hideo; Rakugi, Hiromi; Morishita, Ryuichi

    2018-01-01

    Aging is a complex process that results from a combination of environmental, genetic, and epigenetic factors. A chronic pro-inflammatory status is a pervasive feature of aging. This chronic low-grade inflammation occurring in the absence of overt infection has been defined as "inflammaging" and represents a significant risk factor for morbidity and mortality in the elderly. The low-grade inflammation persists even after reversing pro-inflammatory stimuli such as LDL cholesterol and the renin-angiotensin system (RAS). Recently, several possible sources of the chronic low-grade inflammation observed during aging and age-related diseases have been proposed. Cellular senescence and dysregulation of innate immunity constitute one such mechanism by which persistent, prolonged inflammation occurs even after the initial stimulus has been removed. Additionally, a coagulation factor that activates inflammatory signaling beyond its role in the coagulation system has been identified. This signal could be a new source of chronic inflammation and cell senescence. Here, we summarize the factors and cellular pathways/processes that are known to regulate low-grade persistent inflammation in aging and age-related disease.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aagesen, Larry K.; Coltrin, Michael Elliott; Han, Jung

    Three-dimensional phase-field simulations of GaN growth by selective area epitaxy were performed. The model includes a crystallographic-orientation-dependent deposition rate and arbitrarily complex mask geometries. The orientation-dependent deposition rate can be determined from experimental measurements of the relative growth rates of low-index crystallographic facets. Growth on various complex mask geometries was simulated on both c-plane and a-plane template layers. Agreement was observed between simulations and experiment, including complex phenomena occurring at the intersections between facets. The sources of the discrepancies between simulated and experimental morphologies were also investigated. We found that the model provides a route to optimize masks and processing conditions during materials synthesis for solar cells, light-emitting diodes, and other electronic and opto-electronic applications.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aagesen, Larry K.; Thornton, Katsuyo, E-mail: kthorn@umich.edu; Coltrin, Michael E.

    Three-dimensional phase-field simulations of GaN growth by selective area epitaxy were performed. The model includes a crystallographic-orientation-dependent deposition rate and arbitrarily complex mask geometries. The orientation-dependent deposition rate can be determined from experimental measurements of the relative growth rates of low-index crystallographic facets. Growth on various complex mask geometries was simulated on both c-plane and a-plane template layers. Agreement was observed between simulations and experiment, including complex phenomena occurring at the intersections between facets. The sources of the discrepancies between simulated and experimental morphologies were also investigated. The model provides a route to optimize masks and processing conditions during materials synthesis for solar cells, light-emitting diodes, and other electronic and opto-electronic applications.
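
    As a minimal illustration of how measured facet growth rates might be turned into an orientation-dependent deposition rate (the input such a phase-field model requires), the Python sketch below assigns to any surface normal the rate of the low-index facet whose normal is most closely aligned with it. The facet set, the rate values, and the nearest-facet lookup are illustrative assumptions, not the interpolation scheme used in the paper.

```python
import numpy as np

# Hypothetical low-index facet normals (unit vectors) and measured growth rates (um/h).
# Values are placeholders for illustration, not data from the study.
FACETS = {
    "c-plane (0001)":  (np.array([0.0, 0.0, 1.0]), 1.0),
    "m-plane (10-10)": (np.array([1.0, 0.0, 0.0]), 0.3),
    "a-plane (11-20)": (np.array([0.0, 1.0, 0.0]), 0.5),
}

def deposition_rate(normal):
    """Nearest-facet lookup: use the rate of the facet whose normal is most
    closely aligned with the local surface normal."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    best_normal, best_rate = max(FACETS.values(), key=lambda fr: abs(np.dot(n, fr[0])))
    return best_rate

if __name__ == "__main__":
    print(deposition_rate([0.1, 0.0, 1.0]))  # near the c-plane -> 1.0
    print(deposition_rate([1.0, 0.2, 0.0]))  # near the m-plane -> 0.3
```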

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabriele, Fatuzzo; Michele, Mangiameli, E-mail: amichele.mangiameli@dica.unict.it; Giuseppe, Mussumeci

    Laser scanning is a technology that makes it possible to survey geometric objects rapidly, with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object to be detected, the radiation is reflected. The purpose is to build a three-dimensional digital model that allows the real object to be reconstructed and studies regarding its design, restoration and/or conservation to be conducted. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates with high density and accuracy and with radiometric RGB values. In this case, the set of measured points is called a "point cloud" and allows the reconstruction of the Digital Surface Model. Although post-processing is usually performed with closed-source software, whose copyright restricts free use, free and open-source software can improve performance considerably. Indeed, the latter can be used freely and offers the possibility to inspect and even customize the source code. The work started at the Faculty of Engineering in Catania is aimed at identifying a valuable free and open-source tool, MeshLab (an Italian software package for data processing), to be compared with a reference closed-source package for data processing, i.e., RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.
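
    For readers who want to reproduce the basic point-cloud handling described above, a minimal sketch follows: it loads an ASCII point cloud with XYZ coordinates plus RGB values and reports the point count and bounding box. The file name and column layout are hypothetical; the study itself performed the processing in MeshLab and RapidForm.

```python
import numpy as np

# Assumed ASCII export with one point per line: X Y Z R G B (file name is hypothetical).
data = np.loadtxt("statue_scan.xyz")
xyz, rgb = data[:, :3], data[:, 3:6]

print(f"points: {len(xyz)}")
print(f"bounding box min: {xyz.min(axis=0)}")
print(f"bounding box max: {xyz.max(axis=0)}")
print(f"mean RGB: {rgb.mean(axis=0)}")
```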

  4. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
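
    The uncertainty-propagation and sensitivity-ranking workflow described above can be illustrated with a deliberately simplified stand-in: sample uncertain transport parameters, push them through a toy retarded-advection travel-time calculation, and rank the parameters by Spearman correlation with the output. The parameter ranges and the toy model are assumptions for illustration; the UGTA analyses use field-scale flow models, particle tracking, and convolution integrals.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 5000

# Hypothetical parameter distributions (illustrative only, not UGTA values).
porosity = rng.uniform(0.01, 0.30, n)                    # effective porosity (-)
kd       = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # sorption coefficient (mL/g)
velocity = rng.lognormal(mean=1.0, sigma=0.5, size=n)    # groundwater velocity (m/yr)

bulk_density = 1.6                                        # g/cm^3, assumed constant
retardation = 1.0 + bulk_density * kd / porosity
travel_time = 1000.0 * retardation / velocity             # years to a boundary 1000 m away

# Rank parameters by the strength of their monotonic association with the output.
for name, x in [("porosity", porosity), ("Kd", kd), ("velocity", velocity)]:
    rho, _ = spearmanr(x, travel_time)
    print(f"{name:8s} Spearman rho = {rho:+.2f}")
```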

  5. Employing ASHRAE Standard 62-1989 in urban building environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meckler, M.

    1991-01-01

    Indoor air quality (IAQ) is a result of a complex relationship between the contamination sources in a building, the ventilation rate, and the dilution of the indoor air contaminant concentrations with outdoor air. This complex relationship is further complicated by outdoor sources used for dilution air and pollution sinks in a building which may modify or remove contaminants. This paper reports that the factors influencing IAQ in a building are: emissions from indoor contamination sources, dilution rate of outdoor ventilation air, quality of the outdoor dilution air, and systems and materials in a building that change the concentrations of contaminants. Emissions from contaminant sources in a building are the primary determinant of IAQ. They include building materials, consumer products, cleaners, furnishings, combustion appliances and processes, biological growth from standing water and damp surfaces and building occupants. These factors combined with the emissions from indoor air contamination sources such as synthetic building materials, modern office equipment, and cleaning and biological agents are believed to increase the levels of indoor air contamination. The physiological reactions to these contaminants, coupled with the psychosocial stresses of the modern office environment, and the wide range of human susceptibility to indoor air contaminants led to the classification of acute building sicknesses: sick building syndrome (SBS), building-related illness (BRI), and multiple chemical sensitivity (MCS).
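
    The relationship between indoor source emissions, outdoor dilution air, and ventilation rate is commonly summarized by the steady-state single-zone mass balance C_in = C_out + E/Q (perfect mixing, no sinks). The sketch below is a generic textbook calculation under those assumptions, not a procedure taken from ASHRAE Standard 62-1989 itself.

```python
def steady_state_concentration(c_outdoor, emission_rate, ventilation_rate):
    """Well-mixed single-zone steady state: C_in = C_out + E / Q.
    c_outdoor        -- outdoor concentration (ug/m^3)
    emission_rate    -- indoor source emission rate (ug/h)
    ventilation_rate -- outdoor air supply rate (m^3/h)
    """
    return c_outdoor + emission_rate / ventilation_rate

# Example: 5000 ug/h of VOC emissions diluted by 500 m^3/h of outdoor air at 20 ug/m^3.
print(steady_state_concentration(20.0, 5000.0, 500.0))  # -> 30.0 ug/m^3
```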

  6. Sequence co-evolution gives 3D contacts and structures of protein complexes

    PubMed Central

    Hopf, Thomas A; Schärfe, Charlotta P I; Rodrigues, João P G L M; Green, Anna G; Kohlbacher, Oliver; Sander, Chris; Bonvin, Alexandre M J J; Marks, Debora S

    2014-01-01

    Protein–protein interactions are fundamental to many biological processes. Experimental screens have identified tens of thousands of interactions, and structural biology has provided detailed functional insight for select 3D protein complexes. An alternative rich source of information about protein interactions is the evolutionary sequence record. Building on earlier work, we show that analysis of correlated evolutionary sequence changes across proteins identifies residues that are close in space with sufficient accuracy to determine the three-dimensional structure of the protein complexes. We evaluate prediction performance in blinded tests on 76 complexes of known 3D structure, predict protein–protein contacts in 32 complexes of unknown structure, and demonstrate how evolutionary couplings can be used to distinguish between interacting and non-interacting protein pairs in a large complex. With the current growth of sequences, we expect that the method can be generalized to genome-wide elucidation of protein–protein interaction networks and used for interaction predictions at residue resolution. DOI: http://dx.doi.org/10.7554/eLife.03430.001 PMID:25255213
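
    A toy version of the underlying idea (correlated changes between alignment columns hint at spatial contact) can be written by scoring column pairs with mutual information. This is only a didactic proxy: the approach described above fits a global statistical model that separates direct from indirect correlations, which plain mutual information cannot do.

```python
from collections import Counter
from itertools import combinations
from math import log

# Tiny hypothetical multiple sequence alignment (rows = sequences, columns = positions).
msa = ["ACDE", "ACDE", "AGHE", "TGHE", "TCDE"]

def mutual_information(col_i, col_j):
    """Mutual information between two alignment columns (natural log units)."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    return sum(c / n * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

columns = list(zip(*msa))
for i, j in combinations(range(len(columns)), 2):
    print(f"columns {i}-{j}: MI = {mutual_information(columns[i], columns[j]):.3f}")
```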

  7. Microbiological fermentation of lignocellulosic biomass: current state and prospects of mathematical modeling.

    PubMed

    Lübken, Manfred; Gehring, Tito; Wichern, Marc

    2010-02-01

    The anaerobic fermentation process has achieved growing importance in practice in recent years. Anaerobic fermentation is especially valuable because its end product is methane, a renewable energy source. While the use of renewable energy sources has accelerated substantially in recent years, their potential has not yet been sufficiently exploited. This is especially true for biogas technology. Biogas is created in a multistage process in which different microorganisms use the energy stored in carbohydrates, fats, and proteins for their metabolism. In order to produce biogas, any organic substrate that is microbiologically accessible can be used. The microbiological process in itself is extremely complex and still requires substantial research in order to be fully understood. Technical facilities for the production of biogas are thus generally scaled in a purely empirical manner. The efficiency of the process, therefore, corresponds to the optimum only in the rarest cases. An optimal production of biogas, as well as a stable plant operation requires detailed knowledge of the biochemical processes in the fermenter. The use of mathematical models can help to achieve the necessary deeper understanding of the process. This paper reviews both the history of model development and current state of the art in modeling anaerobic digestion processes.
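
    To make the modeling idea concrete, the sketch below integrates a minimal single-substrate Monod growth model with biomass decay and estimates cumulative methane as a fixed yield on the substrate consumed. The parameter values and one-step structure are assumptions for illustration; practical digester models such as ADM1 resolve many more substrates, organism groups, and inhibition terms.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the review): Monod kinetics with biomass decay.
mu_max, Ks, Y, kd = 0.4, 2.0, 0.1, 0.02    # 1/d, g/L, gX/gS, 1/d
ch4_yield = 0.35                            # L CH4 per g substrate consumed (assumed)

def digester(t, y):
    S, X = y
    mu = mu_max * S / (Ks + S)
    dS = -mu * X / Y                        # substrate consumption
    dX = (mu - kd) * X                      # biomass growth minus decay
    return [dS, dX]

S0, X0 = 20.0, 0.5                          # initial substrate and biomass (g/L)
sol = solve_ivp(digester, (0, 30), [S0, X0])
S_end = sol.y[0, -1]
print(f"substrate remaining after 30 d: {S_end:.2f} g/L")
print(f"cumulative methane: {ch4_yield * (S0 - S_end):.2f} L per L of reactor")
```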

  8. Concepts of risk assessment of complex chemical mixtures in laser pyrolysis fumes

    NASA Astrophysics Data System (ADS)

    Weber, Lothar W.; Meier, Thomas H.

    1996-01-01

    Laser-tissue interaction may generate, by energy absorption, a complex mixture of gaseous, volatile, semi-volatile and particulate substances. To date, about 150 different components are known from IR-laser interaction with different organ tissues such as liver, fat, muscle and skin. The laser-tissue interaction process is dominated by heating, which is confirmed by the similarity of the chemical products formed to those of conventional cooking processes for food preparation. With the identified chemical substances and their relative amounts in mind, a walk along the path of risk assessment, with special reference to pyrolysis products, is given. The main route of intake of pyrolysis products is inhalation, which results from the fine aerosols formed and the high spreading energy from the irradiated source. The liberated amounts of irritant chemicals such as (unsaturated) aldehydes, malodorous heterocycles and possibly carcinogenic substances amount to some µg per g of laser-vaporized tissue. With regard to this exposure level in a hypothetical one-cubic-meter volume, occupational exposure limits are far from being reached. Even indoor-air exposure levels are undershot in nearly all cases, because the content of bad-smelling substances forces effective ventilation. Up to now, no laser-typical chemical substance has been identified that was not already known from frying or baking of meat or similar foods. Starting with the GRAS concept of 1957, the process of risk assessment for modified food products and new ingredients is still improving. The same process of risk assessment governs the laser pyrolysis products of mammalian tissues. With sufficient suction around the laser-tissue source, the odor problems as well as the toxicological problems can be solved.

  9. Determination of dominant biogeochemical processes in a contaminated aquifer-wetland system using multivariate statistical analysis

    USGS Publications Warehouse

    Baez-Cazull, S. E.; McGuire, J.T.; Cozzarelli, I.M.; Voytek, M.A.

    2008-01-01

    Determining the processes governing aqueous biogeochemistry in a wetland hydrologically linked to an underlying contaminated aquifer is challenging due to the complex exchange between the systems and their distinct responses to changes in precipitation, recharge, and biological activities. To evaluate temporal and spatial processes in the wetland-aquifer system, water samples were collected using cm-scale multichambered passive diffusion samplers (peepers) to span the wetland-aquifer interface over a period of 3 yr. Samples were analyzed for major cations and anions, methane, and a suite of organic acids resulting in a large dataset of over 8000 points, which was evaluated using multivariate statistics. Principal component analysis (PCA) was chosen with the purpose of exploring the sources of variation in the dataset to expose related variables and provide insight into the biogeochemical processes that control the water chemistry of the system. Factor scores computed from PCA were mapped by date and depth. Patterns observed suggest that (i) fermentation is the process controlling the greatest variability in the dataset and it peaks in May; (ii) iron and sulfate reduction were the dominant terminal electron-accepting processes in the system and were associated with fermentation but had more complex seasonal variability than fermentation; (iii) methanogenesis was also important and associated with bacterial utilization of minerals as a source of electron acceptors (e.g., barite BaSO4); and (iv) seasonal hydrological patterns (wet and dry periods) control the availability of electron acceptors through the reoxidation of reduced iron-sulfur species enhancing iron and sulfate reduction. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
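
    A stripped-down version of the statistical workflow (standardize the geochemical variables, run PCA, then map factor scores by date and depth) might look like the sketch below. The file name and column names are hypothetical placeholders for the peeper dataset.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical table: one row per peeper chamber, columns = solutes plus metadata.
df = pd.read_csv("peeper_chemistry.csv")
solutes = ["Ca", "Mg", "Na", "Fe", "SO4", "Cl", "CH4", "acetate"]

# Standardize, extract the first three principal components, and attach the scores.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(df[solutes]))
for k in range(3):
    df[f"PC{k + 1}"] = scores[:, k]

# Map factor scores by date and depth, analogous to the study's presentation.
print(df.groupby(["date", "depth_cm"])[["PC1", "PC2", "PC3"]].mean())
```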

  10. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

    Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850

  11. Cross-regulation among arabinose, xylose and rhamnose utilization systems in E. coli.

    PubMed

    Choudhury, D; Saini, S

    2018-02-01

    Bacteria frequently encounter multiple sugars in their natural surroundings. While the dynamics of utilization of glucose-containing sugar mixtures have been well investigated, there are few reports addressing the regulation of utilization of glucose-free mixtures, particularly pentoses. These sugars comprise a considerable fraction of hemicellulose, which can be converted by suitable biocatalysts to biofuels and other value-added products. Hence, an understanding of transcriptional cross-regulation among different pentose sugar utilization systems is essential for the successful development of industrial strains. In this work, we study mixed-sugar utilization with respect to three secondary carbon sources - arabinose, xylose and rhamnose - at single-cell resolution in Escherichia coli. Our results reveal that hierarchical utilization among these systems is not strict but can be eliminated or reversed by altering the relative ratios of the preferred and nonpreferred sugars. Since transcriptional cross-regulation among pentose sugar systems operates through competitive binding of noncognate sugar-regulator complexes, altering sugar concentrations is thought to eliminate nonspecific binding by affecting the concentration of the regulator-sugar complexes. Plant biomass comprises hexose and pentose sugar mixtures. These sugars are processed by micro-organisms to form products such as biofuels and polymers. One of the major challenges with mixed-sugar processing by micro-organisms is hierarchical utilization of sugars due to cross-regulation among sugar systems. In this work, we discuss cross-regulation among three secondary carbon sources - arabinose, xylose and rhamnose. Our results show that cross-regulation between pentose sugars is complex, with multiple layers of regulation. These aspects need to be addressed for effective design of processes to extract energy from biomass. © 2017 The Society for Applied Microbiology.

  12. Overview of Solar Radio Bursts and their Sources

    NASA Astrophysics Data System (ADS)

    Golla, Thejappa; MacDowall, Robert J.

    2018-06-01

    Properties of radio bursts emitted by the Sun at frequencies below tens of MHz are reviewed. In this frequency range, the most prominent radio emissions are those of solar type II, complex type III and solar type IV radio bursts, excited probably by the energetic electron populations accelerated in completely different environments: (1) type II bursts are due to non-relativistic electrons accelerated by the CME driven interplanetary shocks, (2) complex type III bursts are due to near-relativistic electrons accelerated either by the solar flare reconnection process or by the SEP shocks, and (3) type IV bursts are due to relativistic electrons, trapped in the post-eruption arcades behind CMEs; these relativistic electrons probably are accelerated by the continued reconnection processes occurring beneath the CME. These radio bursts, which can serve as the natural plasma probes traversing the heliosphere by providing information about various crucial space plasma parameters, are also an ideal instrument for investigating acceleration mechanisms responsible for the high energy particles. The rich collection of valuable high quality radio and high time resolution in situ wave data from the WAVES experiments of the STEREO A, STEREO B and WIND spacecraft has provided a unique opportunity to study these different radio phenomena and understand the complex physics behind their excitation. We have developed Monte Carlo simulation techniques to estimate the propagation effects on the observed characteristics of these low frequency radio bursts. We will present some of the new results and describe how one can use these radio burst observations for space weather studies. We will also describe some of the non-linear plasma processes detected in the source regions of both solar type III and type II radio bursts. The analysis and simulation techniques used in these studies will be of immense use for future space based radio observations.

  13. Lessons learned from a pilot implementation of the UMLS information sources map.

    PubMed

    Miller, P L; Frawley, S J; Wright, L; Roderer, N K; Powsner, S M

    1995-01-01

    To explore the software design issues involved in implementing an operational information sources map (ISM) knowledge base (KB) and system of navigational tools that can help medical users access network-based information sources relevant to a biomedical question. A pilot biomedical ISM KB and associated client-server software (ISM/Explorer) have been developed to help students, clinicians, researchers, and staff access network-based information sources, as part of the National Library of Medicine's (NLM) multi-institutional Unified Medical Language System (UMLS) project. The system allows the user to specify and constrain a search for a biomedical question of interest. The system then returns a list of sources matching the search. At this point the user may request 1) further information about a source, 2) that the list of sources be regrouped by different criteria to allow the user to get a better overall appreciation of the set of retrieved sources as a whole, or 3) automatic connection to a source. The pilot system operates in client-server mode and currently contains coded information for 121 sources. It is in routine use from approximately 40 workstations at the Yale School of Medicine. The lessons that have been learned are that: 1) it is important to make access to different versions of a source as seamless as possible, 2) achieving seamless, cross-platform access to heterogeneous sources is difficult, 3) significant differences exist between coding the subject content of an electronic information resource versus that of an article or a book, 4) customizing the ISM to multiple institutions entails significant complexities, and 5) there are many design trade-offs between specifying searches and viewing sets of retrieved sources that must be taken into consideration. An ISM KB and navigational tools have been constructed. In the process, much has been learned about the complexities of development and evaluation in this new environment, which are different from those for Gopher, wide area information servers (WAIS), World-Wide-Web (WWW), and MOSAIC resources.

  14. Printable Solid-State Lithium-Ion Batteries: A New Route toward Shape-Conformable Power Sources with Aesthetic Versatility for Flexible Electronics.

    PubMed

    Kim, Se-Hee; Choi, Keun-Ho; Cho, Sung-Ju; Choi, Sinho; Park, Soojin; Lee, Sang-Young

    2015-08-12

    Forthcoming flexible/wearable electronic devices with shape diversity and mobile usability garner a great deal of attention as an innovative technology to bring unprecedented changes in our daily lives. From the power source point of view, conventional rechargeable batteries (one representative example is a lithium-ion battery) with fixed shapes and sizes have intrinsic limitations in fulfilling design/performance requirements for the flexible/wearable electronics. Here, as a facile and efficient strategy to address this formidable challenge, we demonstrate a new class of printable solid-state batteries (referred to as "PRISS batteries"). Through simple stencil printing process (followed by ultraviolet (UV) cross-linking), solid-state composite electrolyte (SCE) layer and SCE matrix-embedded electrodes are consecutively printed on arbitrary objects of complex geometries, eventually leading to fully integrated, multilayer-structured PRISS batteries with various form factors far beyond those achievable by conventional battery technologies. Tuning rheological properties of SCE paste and electrode slurry toward thixotropic fluid characteristics, along with well-tailored core elements including UV-cured triacrylate polymer and high boiling point electrolyte, is a key-enabling technology for the realization of PRISS batteries. This process/material uniqueness allows us to remove extra processing steps (related to solvent drying and liquid-electrolyte injection) and also conventional microporous separator membranes, thereupon enabling the seamless integration of shape-conformable PRISS batteries (including letters-shaped ones) into complex-shaped objects. Electrochemical behavior of PRISS batteries is elucidated via an in-depth analysis of cell impedance, which provides a theoretical basis to enable sustainable improvement of cell performance. We envision that PRISS batteries hold great promise as a reliable and scalable platform technology to open a new concept of cell architecture and fabrication route toward flexible power sources with exceptional shape conformability and aesthetic versatility.

  15. A source-controlled data center network model.

    PubMed

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which overcomes the processing limitations of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches. Meanwhile, the problem of scalability can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS. The amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.

  16. A source-controlled data center network model

    PubMed Central

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks by applying SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. Flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which overcomes the processing limitations of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches. Meanwhile, the problem of scalability can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS. The amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925
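
    The core idea of forwarding solely on a source-assigned vector address, so that core switches need no flow-table lookup, can be sketched as follows: a controller encodes the whole path as an ordered list of output ports and each switch simply consumes the next element. The encoding below is a hypothetical simplification; the paper defines a specific VA format and a hierarchical controller design.

```python
class Host:
    def __init__(self, name):
        self.name = name

class VectorSwitch:
    """Forwards purely on the packet's vector address -- no flow-table lookup."""
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports  # port number -> next hop (switch or host)

    def forward(self, vector_address):
        port = vector_address.pop(0)          # consume the next hop in the VA
        next_hop = self.ports[port]
        print(f"{self.name}: output port {port} -> {next_hop.name}")
        return next_hop

# Toy topology: hostA -> s1 -> s2 -> hostB. A controller would precompute VA = [2, 1].
host_b = Host("hostB")
s2 = VectorSwitch("s2", {1: host_b})
s1 = VectorSwitch("s1", {2: s2})

va = [2, 1]                                   # vector address assigned at the source
node = s1
while isinstance(node, VectorSwitch):
    node = node.forward(va)
print(f"delivered to {node.name}")
```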

  17. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    NASA Astrophysics Data System (ADS)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
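
    A heavily simplified stand-in for the selective-ensemble idea (train sub-models on different feature subsets, keep the better ones, and fuse their outputs with error-based adaptive weights) is sketched below, using ridge regression in place of the paper's kernel PLS sub-models and a simple validation-error filter in place of genetic-algorithm and branch-and-bound selection. The data shapes and thresholds are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 60))                         # stand-in for spectral features
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=300)  # synthetic target (e.g., mill load)

# Candidate feature subsets play the role of "multi-source features".
subsets = [slice(0, 20), slice(20, 40), slice(40, 60), slice(0, 60)]
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models, errors = [], []
for s in subsets:
    m = Ridge(alpha=1.0).fit(X_tr[:, s], y_tr)
    errors.append(np.mean((m.predict(X_val[:, s]) - y_val) ** 2))
    models.append((s, m))

# Selective step: keep sub-models no worse than 2x the best one, then weight by 1/error.
best = min(errors)
kept = [(s, m, 1.0 / e) for (s, m), e in zip(models, errors) if e <= 2 * best]
weights = np.array([w for _, _, w in kept])
weights /= weights.sum()

def ensemble_predict(X_new):
    preds = np.column_stack([m.predict(X_new[:, s]) for s, m, _ in kept])
    return preds @ weights

print("validation MSE of fused model:",
      np.mean((ensemble_predict(X_val) - y_val) ** 2))
```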

  18. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D

    PubMed Central

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron

    2017-01-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. PMID:28814063
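
    While iCAVE itself is a standalone package, the flavor of a 3D force-directed network layout can be reproduced quickly with NetworkX, which supports three-dimensional spring layouts; the graph below is arbitrary test data, not an iCAVE example.

```python
import networkx as nx

# Arbitrary small interaction network for illustration.
G = nx.karate_club_graph()

# Force-directed (Fruchterman-Reingold) layout computed in three dimensions.
pos3d = nx.spring_layout(G, dim=3, seed=42)

for node, (x, y, z) in list(pos3d.items())[:5]:
    print(f"node {node}: ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```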

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L. X.; Zhang, X.; Lockard, J. V.

    Transient molecular structures along chemical reaction pathways are important for predicting molecular reactivity, understanding reaction mechanisms, as well as controlling reaction pathways. During the past decade, X-ray transient absorption spectroscopy (XTA, or LITR-XAS, laser-initiated X-ray absorption spectroscopy), analogous to the commonly used optical transient absorption spectroscopy, has been developed. XTA uses a laser pulse to trigger a fundamental chemical process, and an X-ray pulse(s) to probe transient structures as a function of the time delay between the pump and probe pulses. Using X-ray pulses with high photon flux from synchrotron sources, transient electronic and molecular structures of metal complexes have been studied in disordered media from homogeneous solutions to heterogeneous solution-solid interfaces. Several examples from the studies at the Advanced Photon Source in Argonne National Laboratory are summarized, including excited-state metalloporphyrins, metal-to-ligand charge transfer (MLCT) states of transition metal complexes, and charge transfer states of metal complexes at the interface with semiconductor nanoparticles. Recent developments of the method are briefly described followed by a future perspective of XTA. It is envisioned that concurrent developments in X-ray free-electron lasers and synchrotron X-ray facilities as well as other table-top laser-driven femtosecond X-ray sources will make many breakthroughs and realise dreams of visualizing molecular movies and snapshots, which ultimately enable chemical reaction pathways to be controlled.

  20. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    PubMed

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  1. sources and processes identification for Zn cycling in the Seine river, France

    NASA Astrophysics Data System (ADS)

    gelabert, A.; Jouvin, D.; Morin, G.; Louvat, P.; Guinoiseau, D.; Benedetti, M. F.

    2011-12-01

    Because the availability of global freshwater stocks is predicted to become a major problem in the near future, new directives on water policy have been established in Europe. As a result, an accurate determination of the ecological status of the Seine river watershed is required. However, important evaluation limitations still exist, partly because metal cycling in this system is not fully understood; for instance, half of the Seine river Zn has no clearly identified source. Recent developments in isotopic measurements for new stable isotopes (Zn, Cu) have allowed much progress in understanding the dynamics of metals in natural systems. But this technique alone does not always provide a precise distinction between the mixing of water sources and biochemical processes able to induce isotopic fractionation. Along with an isotopic approach, this study proposes to use XAS (X-ray Absorption Spectroscopy) to determine precisely the speciation of Zn complexes, and thus to define the proportion of water mixing vs. processes for Zn transfer in the watershed. A geographical sampling transect was performed downstream of Paris. Significant isotopic signature variations were observed, varying from δ66Zn = 0.04 ± 0.04 to 0.18 ± 0.04 in the particulate fraction, and from δ66Zn = -0.28 ± 0.04 to 0.08 ± 0.04 in the dissolved fraction. The XAS analysis performed on the same samples at the Zn K-edge confirmed this heterogeneity by showing different speciations, with major contributions from sulfides, iron oxides and organic ligands. Interestingly, the wastewater treatment plant in Achères constitutes an important location in the system by contributing to the enrichment of heavy Zn in the Seine River particulate material. This change in isotopic signature follows a change in Zn speciation, with a decrease in the sulfide contribution after Achères. A second important location is the confluence between the Seine and a minor river (the Epte river), with a significant decrease in δ66Zn in the particulate fraction. However, no major change in Zn speciation was observed there. The cause of this isotopic decrease remains unclear but could be either 1) the mobilization of unknown sources, especially in this area (for the sampling point after the confluence) where the river is connected to numerous small lakes, or 2) the presence of biogeochemical processes able to induce Zn isotopic fractionation. For instance, microorganisms present in freshwaters are known to fractionate Zn during sorption processes, and they do not necessarily induce a change in the first shells of the Zn complexes if the sorbed metal was previously complexed with organic ligands in the river (fulvic substances with carboxylic or phenolic functional groups, for instance). Although additional studies on the Seine river need to be conducted in order to reach a more complete understanding of this watershed's functioning, these results demonstrate the value of combining the XAS and isotopic approaches in order to understand the behaviour of metals in such complex environments.
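
    Separating source mixing from fractionating processes often starts from a simple two-endmember, concentration-weighted isotope mass balance; the sketch below computes the δ66Zn of a mixture of two water masses. The endmember values are placeholders, not measurements from the Seine study.

```python
def mixed_delta(f1, c1, d1, c2, d2):
    """Two-endmember mixing: concentration-weighted isotope mass balance.
    f1     -- mass fraction of water from endmember 1 (0..1)
    c1, c2 -- Zn concentrations of the endmembers (ug/L)
    d1, d2 -- delta66Zn of the endmembers (per mil)
    """
    f2 = 1.0 - f1
    return (f1 * c1 * d1 + f2 * c2 * d2) / (f1 * c1 + f2 * c2)

# Placeholder endmembers: main-stem river water mixing with a tributary.
print(f"{mixed_delta(0.8, 10.0, 0.15, 5.0, -0.25):+.2f} per mil")
```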

  2. Speciation, photosensitivity, and reactions of transition metal ions in atmospheric droplets

    NASA Astrophysics Data System (ADS)

    Weschler, C. J.; Mandich, M. L.; Graedel, T. E.

    1986-04-01

    Dissolved transition metal ions (TMI) are common constituents of atmospheric droplets. They are known to catalyze sulfur oxidation in droplets and are suspected of being involved in other chemical processes as well. We have reviewed the relevant equilibrium constants and chemical reactions of the major TMI (iron, manganese, copper, and nickel), their ability to form complexes in aqueous solution, and their potential involvement in photochemical processes in atmospheric droplets. Among the results are the following: (1) The major Fe(III) species in atmospheric water droplets are [Fe(OH)(H2O)5]2+, [Fe(OH)2(H2O)4]+, and [Fe(SO3)(H2O)5]+; the partitioning among these complexes is a function of pH. In contrast, Cu(II), Mn(II), and Ni(II) exist almost entirely in the droplets as hexaquo complexes. (2) Within the tropospheric solar spectrum, some of the complexes of Fe(III) have large absorption cross-sections. In this work we report cross-section data for several of the complexes. Absorption of solar photons by such complexes is generally followed by cleavage, which in the same process reduces the iron (III) atom and produces a reactive free radical. This mechanism has the potential to be a significant and heretofore unappreciated source of free radicals in atmospheric droplets. (3) TMI participate in redox reactions with H2O2 and its associated species HO2· and O2-. These reactions furnish the potential for catalytic cycles involving TMI in atmospheric droplets under a variety of illumination and acidity conditions. (4) A number of organic processes in atmospheric droplets may involve TMI. Among these processes are the production and destruction of alkylhydroperoxides, the chemical chains linking RO2· radicals to stable alcohols and acids, and the oxidation of aliphatic aldehydes to organic acids.

  3. Integrated generation of complex optical quantum states and their coherent control

    NASA Astrophysics Data System (ADS)

    Roztocki, Piotr; Kues, Michael; Reimer, Christian; Romero Cortés, Luis; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T.; Little, Brent E.; Moss, David J.; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2018-01-01

    Complex optical quantum states based on entangled photons are essential for investigations of fundamental physics and are at the heart of applications in quantum information science. Recently, integrated photonics has become a leading platform for the compact, cost-efficient, and stable generation and processing of optical quantum states. However, on-chip sources are currently limited to basic two-dimensional (qubit) two-photon states, whereas scaling the state complexity requires access to states composed of several (>2) photons and/or exhibiting high photon dimensionality. Here we show that the use of integrated frequency combs (on-chip light sources with a broad spectrum of evenly-spaced frequency modes) based on high-Q nonlinear microring resonators can provide solutions for such scalable complex quantum state sources. In particular, by using spontaneous four-wave mixing within the resonators, we demonstrate the generation of bi- and multi-photon entangled qubit states over a broad comb of channels spanning the S, C, and L telecommunications bands, and control these states coherently to perform quantum interference measurements and state tomography. Furthermore, we demonstrate the on-chip generation of entangled high-dimensional (quDit) states, where the photons are created in a coherent superposition of multiple pure frequency modes. Specifically, we confirm the realization of a quantum system with at least one hundred dimensions. Moreover, using off-the-shelf telecommunications components, we introduce a platform for the coherent manipulation and control of frequency-entangled quDit states. Our results suggest that microcavity-based entangled photon state generation and the coherent control of states using accessible telecommunications infrastructure introduce a powerful and scalable platform for quantum information science.

  4. Free Energy and Virtual Reality in Neuroscience and Psychoanalysis: A Complexity Theory of Dreaming and Mental Disorder

    PubMed Central

    Hopkins, Jim

    2016-01-01

    The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment—a complexity theory—of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements—including interoceptive impingements that report compliance with biological imperatives—and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference—by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity-reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on “active systems” accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection. PMID:27471478
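
    The complexity-accuracy decomposition referred to above is standard in the variational free-energy literature and can be written explicitly (generic notation, with q the recognition density over hidden causes x and y the sensory data):

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\bigl[\,q(x)\,\|\,p(x)\,\bigr]}_{\text{complexity}}
\;-\; \underbrace{\mathbb{E}_{q(x)}\bigl[\ln p(y \mid x)\bigr]}_{\text{accuracy}}
```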

  5. Two Independent Frontal Midline Theta Oscillations during Conflict Detection and Adaptation in a Simon-Type Manual Reaching Task.

    PubMed

    Töllner, Thomas; Wang, Yijun; Makeig, Scott; Müller, Hermann J; Jung, Tzyy-Ping; Gramann, Klaus

    2017-03-01

    One of the most firmly established factors determining the speed of human behavioral responses toward action-critical stimuli is the spatial correspondence between the stimulus and response locations. If both locations match, the time taken for response production is markedly reduced relative to when they mismatch, a phenomenon called the Simon effect. While there is a consensus that this stimulus-response (S-R) conflict is associated with brief (4-7 Hz) frontal midline theta (fmθ) complexes generated in medial frontal cortex, it remains controversial (1) whether there are multiple, simultaneously active theta generator areas in the medial frontal cortex that commonly give rise to conflict-related fmθ complexes; and if so, (2) whether they are all related to the resolution of conflicting task information. Here, we combined mental chronometry with high-density electroencephalographic measures during a Simon-type manual reaching task and used independent component analysis and time-frequency domain statistics on source-level activities to model fmθ sources. During target processing, our results revealed two independent fmθ generators simultaneously active in or near anterior cingulate cortex, only one of them reflecting the correspondence between current and previous S-R locations. However, this fmθ response is not exclusively linked to conflict but also to other, conflict-independent processes associated with response slowing. These results paint a detailed picture regarding the oscillatory correlates of conflict processing in Simon tasks, and challenge the prevalent notion that fmθ complexes induced by conflicting task information represent a unitary phenomenon related to cognitive control, which governs conflict processing across various types of response-override tasks. SIGNIFICANCE STATEMENT Humans constantly monitor their environment for conflicts and adjust their cognitive control settings in response, an ability that arguably paves the way for survival in ever-changing situations. Anterior cingulate-generated frontal midline theta (fmθ) complexes have been hypothesized to play a role in this conflict-monitoring function. However, it remains a point of contention whether fmθ complexes govern conflict processing in a unitary, paradigm-nonspecific manner. Here, we identified two independent fmθ oscillations triggered during a Simon-type task, only one of them reflecting current and previous conflicts. Importantly, this signal differed in various respects (cortical origin, intertrial history) from fmθ phenomena in other response-override tasks, challenging the prevalent notion of conflict-induced fmθ as a unitary phenomenon associated with the resolution of conflict.
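
    As a rough illustration of the analysis style described above (independent component analysis followed by theta-band statistics), the following Python sketch decomposes multichannel data and compares theta power between two trial groups; the data, sampling rate, band edges, and trial split are placeholder assumptions, not the study's actual pipeline.

      # Minimal sketch: ICA unmixing followed by theta-band (4-7 Hz) power,
      # loosely illustrating the analysis style described above. Data shapes,
      # sampling rate and band edges are illustrative assumptions.
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert
      from sklearn.decomposition import FastICA

      fs = 250.0                                   # sampling rate (Hz), assumed
      eeg = np.random.randn(64, 5000)              # placeholder: 64 channels x 20 s

      # 1) Unmix channels into independent components (channels x time -> components x time)
      ica = FastICA(n_components=20, random_state=0)
      components = ica.fit_transform(eeg.T).T      # shape: (20, 5000)

      # 2) Band-pass each component in the theta range and take instantaneous power
      b, a = butter(4, [4.0 / (fs / 2), 7.0 / (fs / 2)], btype="band")
      theta = filtfilt(b, a, components, axis=1)
      theta_power = np.abs(hilbert(theta, axis=1)) ** 2

      # 3) Compare mean theta power between two trial groups (split assumed for illustration)
      conflict_idx, noconflict_idx = slice(0, 2500), slice(2500, 5000)
      effect = theta_power[:, conflict_idx].mean(axis=1) - theta_power[:, noconflict_idx].mean(axis=1)
      print(effect.argmax(), effect.max())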

  6. Pure sources and efficient detectors for optical quantum information processing

    NASA Astrophysics Data System (ADS)

    Zielnicki, Kevin

    Over the last sixty years, classical information theory has revolutionized the understanding of the nature of information, and how it can be quantified and manipulated. Quantum information processing extends these lessons to quantum systems, where the properties of intrinsic uncertainty and entanglement fundamentally defy classical explanation. This growing field has many potential applications, including computing, cryptography, communication, and metrology. As inherently mobile quantum particles, photons are likely to play an important role in any mature large-scale quantum information processing system. However, the available methods for producing and detecting complex multi-photon states place practical limits on the feasibility of sophisticated optical quantum information processing experiments. In a typical quantum information protocol, a source first produces an interesting or useful quantum state (or set of states), perhaps involving superposition or entanglement. Then, some manipulations are performed on this state, perhaps involving quantum logic gates which further manipulate or entangle the initial state. Finally, the state must be detected, obtaining some desired measurement result, e.g., for secure communication or computationally efficient factoring. The work presented here concerns the first and last stages of this process as they relate to photons: sources and detectors. Our work on sources is based on the need for optimized non-classical states of light delivered at high rates, particularly of single photons in a pure quantum state. We seek to better understand the properties of spontaneous parametric downconversion (SPDC) sources of photon pairs, and in doing so, produce such an optimized source. We report an SPDC source which produces pure heralded single photons with little or no spectral filtering, allowing a significant rate enhancement. Our work on detectors is based on the need to reliably measure single-photon states. We have focused on optimizing the detection efficiency of visible light photon counters (VLPCs), a single-photon detection technology that is also capable of resolving photon number states. We report a record-breaking quantum efficiency of 91 +/- 3% observed with our detection system. Both sources and detectors are independently interesting physical systems worthy of study, but together they promise to enable entire new classes and applications of information based on quantum mechanics.

  7. Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models

    NASA Astrophysics Data System (ADS)

    Pallant, Amy; Lee, Hee-Sun

    2015-04-01

    Modeling and argumentation are two important scientific practices students need to develop throughout their school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation tasks with three increasingly complex dynamic climate models. Each scientific argumentation task consisted of four parts: multiple-choice claim, open-ended explanation, five-point Likert scale uncertainty rating, and open-ended uncertainty rationale. We coded 1,294 scientific arguments in terms of a claim's consistency with current scientific consensus, whether explanations were model-based or knowledge-based, and categorized the sources of uncertainty (personal vs. scientific). We used chi-square and ANOVA tests to identify significant patterns. Results indicate that (1) a majority of students incorporated models as evidence to support their claims, (2) most students used model output results shown on graphs to confirm their claim rather than to explain simulated molecular processes, (3) students' dependence on model results and their uncertainty rating diminished as the dynamic climate models became increasingly complex, (4) some students' misconceptions interfered with observing and interpreting model results or simulated processes, and (5) students' uncertainty sources reflected more frequently on their assessment of personal knowledge or abilities related to the tasks than on their critical examination of scientific evidence resulting from models. These findings have implications for teaching and research related to the integration of scientific argumentation and modeling practices to address complex Earth systems.
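
    As a minimal illustration of the kind of test mentioned above, the sketch below runs a chi-square test of independence between claim correctness and explanation type; the contingency counts are invented placeholders, not the study's data.

      # Minimal sketch of a chi-square test of independence, of the kind mentioned
      # above; the counts below are invented placeholders, not the study's data.
      from scipy.stats import chi2_contingency

      #           model-based  knowledge-based   <- explanation type
      table = [[420, 180],   # claim consistent with scientific consensus
               [150, 250]]   # claim inconsistent

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")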

  8. Autonomous perception and decision making in cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Sarkar, Soumik

    2011-07-01

    The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require the integration of computation and physical processes, which involves aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information, and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture of autonomous perception-based decision & control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of the physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words, that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involves distributed sensing, computation and control, as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology, and the interaction among the entities forms the abstract cyber space. It is envisioned that the general contributions made in this dissertation will be useful for potential application areas such as smart power grids and buildings, distributed energy systems, advanced health care procedures and future ground and air transportation systems.
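
    A minimal sketch of the symbolization step described above: a sensor time series is discretized into an alphabet of symbols, and fixed-length words are counted to form a statistical signature of the source. The uniform partition and word length below are illustrative assumptions, not the dissertation's exact method.

      # Minimal sketch of symbolization: a time series is discretized into an
      # alphabet of symbols, and fixed-length "words" are counted to characterize
      # the underlying source. The uniform partition and word length below are
      # illustrative choices, not the dissertation's method.
      import numpy as np
      from collections import Counter

      signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.randn(2000)

      # 1) Partition the signal range into |A| bins; each sample becomes a symbol 0..|A|-1
      alphabet_size = 4
      edges = np.linspace(signal.min(), signal.max(), alphabet_size + 1)[1:-1]
      symbols = np.digitize(signal, edges)

      # 2) Slide a window of length D over the symbol stream to form words
      D = 3
      words = ["".join(map(str, symbols[i:i + D])) for i in range(len(symbols) - D + 1)]

      # 3) The normalized word histogram is a simple statistical signature of the source
      counts = Counter(words)
      total = sum(counts.values())
      signature = {w: c / total for w, c in counts.items()}
      print(sorted(signature.items(), key=lambda kv: -kv[1])[:5])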

  9. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    PubMed

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for the uniform handling and analysis of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and its hallmark will be 'team science'.

  10. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    PubMed

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

  11. Functional Macroautophagy Induction by Influenza A Virus without a Contribution to Major Histocompatibility Complex Class II-Restricted Presentation

    PubMed Central

    Comber, Joseph D.; Robinson, Tara M.; Siciliano, Nicholas A.; Snook, Adam E.; Eisenlohr, Laurence C.

    2011-01-01

    Major histocompatibility complex (MHC) class II-presented peptides can be derived from both exogenous (extracellular) and endogenous (biosynthesized) sources of antigen. Although several endogenous antigen-processing pathways have been reported, little is known about their relative contributions to global CD4+ T cell responses against complex antigens. Using influenza virus for this purpose, we assessed the role of macroautophagy, a process in which cytosolic proteins are delivered to the lysosome by de novo vesicle formation and membrane fusion. Influenza infection triggered productive macroautophagy, and autophagy-dependent presentation was readily observed with model antigens that naturally traffic to the autophagosome. Furthermore, treatments that enhance or inhibit macroautophagy modulated the level of presentation from these model antigens. However, validated enzyme-linked immunospot (ELISpot) assays of influenza-specific CD4+ T cells from infected mice using a variety of antigen-presenting cells, including primary dendritic cells, revealed no detectable macroautophagy-dependent component. In contrast, the contribution of proteasome-dependent endogenous antigen processing to the global influenza CD4+ response was readily appreciated. The contribution of macroautophagy to the MHC class II-restricted response may vary depending upon the pathogen. PMID:21525345

  12. Information processing of earth resources data

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1982-01-01

    Current trends in the use of remotely sensed data include integration of multiple data sources of various formats and use of complex models. These trends have placed a strain on information processing systems because an enormous number of capabilities are needed to perform a single application. A solution to this problem is to create a general set of capabilities which can perform a wide variety of applications. General capabilities for the Image-Based Information System (IBIS) are outlined in this report. They are then cross-referenced for a set of applications performed at JPL.

  13. Iron Supply and Demand in an Antarctic Shelf Ecosystem

    NASA Astrophysics Data System (ADS)

    McGillicuddy, D. J., Jr.; Sedwick, P.; Dinniman, M. S.; Arrigo, K. R.; Bibby, T. S.; Greenan, B. J. W.; Hofmann, E. E.; Klinck, J. M., II; Smith, W.; Mack, S. L.; Marsay, C. M.; Sohst, B. M.; van Dijken, G.

    2016-02-01

    The Ross Sea sustains a rich ecosystem and is the most productive sector of the Southern Ocean. Most of this production occurs within a polynya during the November-February period, when the availability of dissolved iron (dFe) is thought to exert the major control on phytoplankton growth. Here we combine new data on the distribution of dFe, high-resolution model simulations of ice melt and regional circulation, and satellite-based estimates of primary production to quantify iron supply and demand over the Ross Sea continental shelf. Our analysis suggests that the largest sources of dFe to the euphotic zone are wintertime mixing and melting sea ice, with a lesser input from intrusions of Circumpolar Deep Water, and a small amount from melting glacial ice. Together these sources are in approximate balance with the annual biological dFe demand inferred from satellite-based productivity algorithms, although both the supply and demand estimates have large uncertainties. Our findings illustrate the complexities of iron cycling in the Southern Ocean, highlighting the heterogeneity of the underlying processes along the Antarctic continental margin. Explicit representation of these complexities, and the temporal variability in both proximate and ultimate sources of iron, will be necessary to understand how a changing climate will affect this important ecosystem and its influence on biogeochemical cycles. Reduction of the present uncertainties in iron supply and demand will require coupled observational and modeling systems capable of resolving the wide range of physical, biological, and chemical processes involved.

  14. Automatic analysis of online image data for law enforcement agencies by concept detection and instance search

    NASA Astrophysics Data System (ADS)

    de Boer, Maaike H. T.; Bouma, Henri; Kruithof, Maarten C.; ter Haar, Frank B.; Fischer, Noëlle M.; Hagendoorn, Laurens K.; Joosten, Bart; Raaijmakers, Stephan

    2017-10-01

    The information available on-line and off-line, from open as well as from private sources, is growing at an exponential rate and places an increasing demand on the limited resources of Law Enforcement Agencies (LEAs). The absence of appropriate tools and techniques to collect, process, and analyze the volumes of complex and heterogeneous data has created a severe information overload. If a solution is not found, the impact on law enforcement will be dramatic, e.g. because important evidence is missed or the investigation time is too long. Furthermore, there is an uneven level of capabilities to deal with the large volumes of complex and heterogeneous data that come from multiple open and private sources at national level across the EU, which hinders cooperation and information sharing. Consequently, there is a pertinent need to develop tools, systems and processes which expedite online investigations. In this paper, we describe a suite of analysis tools to identify and localize generic concepts, instances of objects and logos in images, which constitutes a significant portion of everyday law enforcement data. We describe how incremental learning based on only a few examples and large-scale indexing are addressed in both concept detection and instance search. Our search technology allows querying of the database by visual examples and by keywords. Our tools are packaged in a Docker container to guarantee easy deployment on a system and our tools exploit possibilities provided by open source toolboxes, contributing to the technical autonomy of LEAs.

  15. Nature and provenance of the Beishan Complex, southernmost Central Asian Orogenic Belt

    NASA Astrophysics Data System (ADS)

    Zheng, Rongguo; Li, Jinyi; Xiao, Wenjiao; Zhang, Jin

    2018-03-01

    The ages and origins of metasedimentary rocks, which were previously mapped as Precambrian, are critical in rebuilding the orogenic process and better understanding the Phanerozoic continental growth in the Central Asian Orogenic Belt (CAOB). The Beishan Complex is widely distributed in the southern Beishan Orogenic Collage, southernmost CAOB, and its age and tectonic affinity are still in controversy. The Beishan Complex was previously proposed as fragments drifted from the Tarim Craton, a Neoproterozoic block, or a Phanerozoic accretionary complex. In this study, we employ detrital zircon age spectra to constrain the ages and provenances of metasedimentary sequences of the Beishan Complex in the Chuanshanxun area. The metasedimentary rocks here are dominated by zircons with Paleoproterozoic-Mesoproterozoic ages (ca. 1160-2070 Ma), and yield two peak ages at ca. 1454 and 1760 Ma. One sample yielded a middle Permian peak age (269 Ma), which suggests that the metasedimentary sequences were deposited in the late Paleozoic. The granitoid and dioritic dykes, intruding into the metasedimentary sequences, exhibit zircon U-Pb ages of 268 and 261 Ma, respectively, which constrain the minimum depositional age of the metasedimentary sequences. Zircon U-Pb ages of amphibolite (274 and 216 Ma) indicate that they might have been affected by multi-stage metamorphic events. The Beishan Complex was not a fragment drifted from the Tarim Block or Dunhuang Block, and none of the cratons or blocks surrounding the Beishan Orogenic Collage was the sole material source of the Beishan Complex, owing to their clearly different age spectra. Instead, ca. 1.4 Ga marginal accretionary zones of the Columbia supercontinent might have existed in the southern CAOB and may have provided the main source materials for the sedimentary sequences in the Beishan Complex.

  16. Speciated measurements of semivolatile and intermediate volatility organic compounds (S/IVOCs) in a pine forest during BEACHON-RoMBAS 2011

    DOE PAGES

    Chan, A. W. H.; Kreisberg, N. M.; Hohaus, T.; ...

    2016-02-02

    Understanding organic composition of gases and particles is essential to identifying sources and atmospheric processing leading to organic aerosols (OA), but atmospheric chemical complexity and the analytical techniques available often limit such analysis. Here we present speciated measurements of semivolatile and intermediate volatility organic compounds (S/IVOCs) using a novel dual-use instrument (SV-TAG-AMS) deployed at Manitou Forest, CO, during the Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics & Nitrogen – Rocky Mountain Biogenic Aerosol Study (BEACHON-RoMBAS) 2011 campaign. This instrument provides on-line speciation of ambient organic compounds with 2 h time resolution. The species in this volatility range are complex in composition, but their chemical identities reveal potential sources. Observed compounds of biogenic origin include sesquiterpenes with molecular formula C15H24 (e.g., β-caryophyllene and longifolene), which were most abundant at night. A variety of other biogenic compounds were observed, including sesquiterpenoids with molecular formula C15H22, abietatriene and other terpenoid compounds. Many of these compounds have been identified in essential oils and branch enclosure studies but were observed in ambient air for the first time in our study. Semivolatile polycyclic aromatic hydrocarbons (PAHs) and alkanes were observed with highest concentrations during the day and the dependence on temperature suggests the role of an evaporative source. Using statistical analysis by positive matrix factorization (PMF), we classify observed S/IVOCs by their likely sources and processes, and characterize them based on chemical composition. The total mass concentration of elutable S/IVOCs was estimated to be on the order of 0.7 µg m⁻³ and their volatility distributions are estimated for modeling aerosol formation chemistry.
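
    As a rough illustration of this style of factor analysis, the sketch below uses non-negative matrix factorization as a simple stand-in for PMF (true PMF additionally weights residuals by measurement uncertainty); the data matrix and the choice of three factors are placeholders.

      # Minimal sketch of factor analysis of a time series of speciated
      # concentrations, using scikit-learn's NMF as a simple stand-in for PMF
      # (true PMF also weights residuals by measurement uncertainty). The data
      # matrix and the choice of 3 factors are illustrative placeholders.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      X = rng.random((200, 30))          # 200 two-hour samples x 30 compounds (placeholder)

      model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
      G = model.fit_transform(X)         # factor contributions per sample (time series)
      F = model.components_              # factor profiles (loadings per compound)

      # Each row of F is read as a source/process profile; G tracks its strength over time
      for k, profile in enumerate(F):
          top = np.argsort(profile)[::-1][:5]
          print(f"factor {k}: top compound indices {top.tolist()}")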

  17. Dynamics of the Wulong Landslide Revealed by Broadband Seismic Records

    NASA Astrophysics Data System (ADS)

    Huang, X.; Dan, Y.

    2016-12-01

    Long-period seismic signals are frequently used to trace the dynamic process of large-scale landslides. The catastrophic WuLong landslide occurred at 14:51 on 5 June 2009 (Beijing time, UTC+8) in Wulong Prefecture, Southwest China. The topography in the landslide area varies dramatically, increasing the complexity of its movement characteristics. The mass started sliding northward on the upper part of the cliff located upon the west slope of the Tiejianggou gully, and shifted its movement direction to northeastward after being blocked by stable bedrock in front, leaving a scratch zone. The sliding mass then moved downward along the west slope of the gully until it collided with the east slope, and broke up into small pieces after the collision, forming a debris flow along the gully. We use long-period seismic signals extracted from eight broadband seismic stations within 250 km of the landslide to estimate its source time functions. Combined with topographic surveys carried out before and after the event, we can also resolve the kinematic parameters of the sliding mass, i.e., velocities, displacements and trajectories, which characterize its movement well. The runout trajectory deduced from the source time functions is consistent with the sliding path, including two direction-changing processes corresponding to scratching the western bedrock and colliding with the east slope, respectively. Topographic variations are reflected in the estimated velocities. The maximum velocity of the sliding mass reaches 35 m/s before the collision with the east slope of the Tiejianggou gully, resulting from the height difference between the source zone and the deposition zone. Importantly, the dynamics of scratching and collision can be characterized by the source time functions. Our results confirm that long-period seismic signals are sufficient to characterize the dynamics and kinematics of large-scale landslides occurring in regions with complex topography.

  18. Speciated measurements of semivolatile and intermediate volatility organic compounds (S/IVOCs) in a pine forest during BEACHON-RoMBAS 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, A. W. H.; Kreisberg, N. M.; Hohaus, T.

    Understanding organic composition of gases and particles is essential to identifying sources and atmospheric processing leading to organic aerosols (OA), but atmospheric chemical complexity and the analytical techniques available often limit such analysis. Here we present speciated measurements of semivolatile and intermediate volatility organic compounds (S/IVOCs) using a novel dual-use instrument (SV-TAG-AMS) deployed at Manitou Forest, CO, during the Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics & Nitrogen – Rocky Mountain Biogenic Aerosol Study (BEACHON-RoMBAS) 2011 campaign. This instrument provides on-line speciation of ambient organic compounds with 2 h time resolution. The species in this volatility range are complex in composition, but their chemical identities reveal potential sources. Observed compounds of biogenic origin include sesquiterpenes with molecular formula C15H24 (e.g., β-caryophyllene and longifolene), which were most abundant at night. A variety of other biogenic compounds were observed, including sesquiterpenoids with molecular formula C15H22, abietatriene and other terpenoid compounds. Many of these compounds have been identified in essential oils and branch enclosure studies but were observed in ambient air for the first time in our study. Semivolatile polycyclic aromatic hydrocarbons (PAHs) and alkanes were observed with highest concentrations during the day and the dependence on temperature suggests the role of an evaporative source. Using statistical analysis by positive matrix factorization (PMF), we classify observed S/IVOCs by their likely sources and processes, and characterize them based on chemical composition. The total mass concentration of elutable S/IVOCs was estimated to be on the order of 0.7 µg m⁻³ and their volatility distributions are estimated for modeling aerosol formation chemistry.

  19. What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence

    PubMed Central

    McLaughlin, Katie A.; Garrad, Megan C.; Somerville, Leah H.

    2015-01-01

    Adolescence is a phase of the lifespan associated with widespread changes in emotional behavior thought to reflect both changing environments and stressors, and psychological and neurobiological development. However, emotions themselves are complex phenomena that are composed of multiple subprocesses. In this paper, we argue that examining emotional development from a process-level perspective facilitates important insights into the mechanisms that underlie adolescents' shifting emotions and intensified risk for psychopathology. Contrasting the developmental progressions for the antecedents to emotion, physiological reactivity to emotion, emotional regulation capacity, and motivation to experience particular affective states reveals complex trajectories that intersect in a unique way during adolescence. We consider the implications of these intersecting trajectories for negative outcomes such as psychopathology, as well as positive outcomes for adolescent social bonds. PMID:26869841

  20. Automatic latency equalization in VHDL-implemented complex pipelined systems

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.

    2016-09-01

    In pipelined data processing systems it is very important to ensure that parallel paths delay data by the same number of clock cycles. If that condition is not met, the processing blocks receive data that are not properly aligned in time and produce incorrect results. Manual equalization of latencies is tedious and error-prone work. This paper presents an automatic method of latency equalization in systems described in VHDL. The proposed method uses simulation to measure latencies and verify the introduced correction. The solution is portable between different simulation and synthesis tools. The method does not increase the complexity of the synthesized design compared to the solution based on manual latency adjustment. An example implementation of the proposed methodology, together with a simple design demonstrating its use, is available as an open source project under the BSD license.
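
    A minimal sketch of the balancing idea described above: measure the latency of each parallel path (here simply given), then pad shorter paths with extra delay stages so that all paths match the slowest one. The numbers are illustrative; the paper performs this measurement and correction automatically through VHDL simulation.

      # Minimal sketch of latency equalization across parallel pipeline paths:
      # take each path's measured latency (here given directly), then pad every
      # path with enough extra delay stages to match the slowest one. Values are
      # illustrative placeholders.
      measured_latency = {"path_a": 5, "path_b": 9, "path_c": 7}   # clock cycles, assumed

      target = max(measured_latency.values())
      extra_delay = {name: target - lat for name, lat in measured_latency.items()}

      for name, pad in extra_delay.items():
          print(f"{name}: insert {pad} register stage(s) to reach {target} cycles")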

  1. Beyond Pink and Blue: The Complexity of Early Androgen Effects on Gender Development.

    PubMed

    Berenbaum, Sheri A

    2018-03-01

    Why do girls and women differ from boys and men? Gender development is typically considered to result from socialization, but sex hormones present during sensitive periods of development, particularly prenatal androgens, play an important role. Data from natural experiments, especially from females with congenital adrenal hyperplasia, show the complexity of the effects of androgens on behavior: Prenatal androgens apparently have large effects on interests and engagement in gendered activities; moderate effects on spatial abilities; and relatively small or no effects on gender identity, gender cognitions, and gendered peer involvement. These differential effects provide an opportunity to move beyond identifying sources of variation in behavior to understanding developmental processes. These processes include links among gendered characteristics, psychological and neural mechanisms underlying development, and the joint effects of biological predispositions and social experiences.

  2. Visual Modelling of Data Warehousing Flows with UML Profiles

    NASA Astrophysics Data System (ADS)

    Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan

    Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only a few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, or data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.

  3. Current and new developments in transport and regulatory issues concerning radioisotopes: managing change for minimum business impact

    NASA Astrophysics Data System (ADS)

    Bennett, Neil; Coppell, David; Rogers, David; Schrader, John

    2004-09-01

    Changes in the regulatory framework governing the Radiation Processing Industry have the potential to make a real business impact on day-to-day profitability. Many areas of the Radiation Processing Industry are affected by changes in the regulatory framework within which these areas are managed. When planning for such changes the transportation element in the shipment of sealed cobalt radiation sources is an area that is often neglected by some parts of the distribution chain. A balance must be struck between the cobalt supplier and the facility operator/customer that rests upon how much the customer needs to know about the intricacies of cobalt shipment. The objective of this paper is to highlight areas of possible business impact and reassure the users of sealed radiation sources that the global suppliers of these products are used to negotiating local variations in regulations governing the physical transportation of radiation sources, changes in regulations governing the design, manufacture and use of transportation containers and changes in the availability of commercial shippers and shipping routes. The major suppliers of industrial quantities of cobalt-60 are well placed to lead their customers through this complex process as a matter of routine.

  4. Forest Resources, Chiefdoms and Mortuary Practices in the Neotropics: Preliminary Archaeobotanical Analysis from El Caño Funerary Complex (Coclé Province, Panamá)

    NASA Astrophysics Data System (ADS)

    Martín Seijo, M.; Torné, J. Mayo; Torné, C. Mayo; Huerta, R. Piqué i.

    2012-04-01

    The El Caño site is situated on the Pacific side of Panamá, near the Río Grande. It is a funerary complex comprising different types of structures (stone structures, including basalt columns, groups of sculptures and a causeway; earthen mounds and canals; burials). The excavations supervised by Julia Mayo between 2008 and 2011 led to the discovery of several lavish burials estimated to date between 700 and 1000 AD (Mayo & Mayo 2012). The data recovered have served as a source of information on the pre-Columbian chiefdoms and their mortuary practices. A detailed taphonomic study was carried out to record the complex formation processes of these burial deposits and the significant post-depositional transformations (anthropogenic and natural processes) (Mayo & Mayo in press). Archaeobotanical samples, most of them charcoal, were also recovered during the excavations. The laboratory work consisted of the exhaustive description of the anatomical features of the different taxa identified during the charcoal analyses (the identification level varied from species to family, although in several cases no taxon could be proposed). These samples were concentrated in structures and, in a few cases, dispersed in the sediment. Some of the samples analyzed were large pieces of charcoal from the wooden beams of ancient wood structures, while other pieces of charcoal and vegetable fibers were directly related to the burial practices. The charcoal analysis results aim to contribute to knowledge of the exploitation of forest resources, of the territories where these resources were collected, and of the production process (chaîne opératoire). These results were complemented by an exhaustive review of written sources (Spanish chronicles), ethnobotanical studies, and archaeological data from other sites in this area. Acknowledgements: This research was developed under the El Caño Archaeological Project and was funded by SENACYT (Secretaría Nacional de Ciencia y Tecnología de Panamá).

  5. Understanding the Influence of the Complex Relationships among Informal and Formal Supports on the Well-Being of Caregivers of Persons with Dementia

    ERIC Educational Resources Information Center

    Raina, Parminder; McIntyre, Chris; Zhu, Bin; McDowell, Ian; Santaguida, Pasqualina; Kristjansson, Betsy; Hendricks, Alexandra; Massfeller, Helen; Chambers, Larry

    2004-01-01

    This study examined the direct and indirect relationships between caring for a person with dementia and caregiver health. A conceptual model of the caregiver stress process considered informal caregiver characteristics, sources of caregiver stress, and the influence of informal and formal support on the well-being of the caregivers of persons with…

  6. Electrodeposition of amorphous ternary nickel-chromium-phosphorus alloy

    DOEpatents

    Guilinger, Terry R.

    1990-01-01

    Amorphous ternary nickel-chromium-phosphorus alloys are electrodeposited from a bath comprising a nickel salt, a chromium salt, a phosphorus source such as sodium hypophosphite, a complexing agent for the nickel ions, supporting salts to increase conductivity, and a buffering agent. The process is carried out at about room temperature and requires a current density of about 20 to 40 A/dm².

  7. Defining the best quality-control systems by design and inspection.

    PubMed

    Hinckley, C M

    1997-05-01

    Not all of the many approaches to quality control are equally effective. Nonconformities in laboratory testing are caused basically by excessive process variation and mistakes. Statistical quality control can effectively control process variation, but it cannot detect or prevent most mistakes. Because mistakes or blunders are frequently the dominant source of nonconformities, we conclude that statistical quality control by itself is not effective. I explore the 100% inspection methods essential for controlling mistakes. Unlike the inspection techniques that Deming described as ineffective, the new "source" inspection methods can detect mistakes and enable corrections before nonconformities are generated, achieving the highest degree of quality at a fraction of the cost of traditional methods. Key relationships between task complexity and nonconformity rates are also described, along with cultural changes that are essential for implementing the best quality-control practices.

  8. Broadband enhancement of single photon emission and polarization dependent coupling in silicon nitride waveguides.

    PubMed

    Bisschop, Suzanne; Guille, Antoine; Van Thourhout, Dries; Hens, Zeger; Brainis, Edouard

    2015-06-01

    Single-photon (SP) sources are important for a number of optical quantum information processing applications. We study the possibility to integrate triggered solid-state SP emitters directly on a photonic chip. A major challenge consists in efficiently extracting their emission into a single guided mode. Using 3D finite-difference time-domain simulations, we investigate the SP emission from dipole-like nanometer-sized inclusions embedded into different silicon nitride (SiNx) photonic nanowire waveguide designs. We elucidate the effect of the geometry on the emission lifetime and the polarization of the emitted SP. The results show that highly efficient and polarized SP sources can be realized using suspended SiNx slot-waveguides. Combining this with the well-established CMOS-compatible processing technology, fully integrated and complex optical circuits for quantum optics experiments can be developed.

  9. Visible light photocatalysis as a greener approach to photochemical synthesis.

    PubMed

    Yoon, Tehshik P; Ischay, Michael A; Du, Juana

    2010-07-01

    Light can be considered an ideal reagent for environmentally friendly, 'green' chemical synthesis; unlike many conventional reagents, light is non-toxic, generates no waste, and can be obtained from renewable sources. Nevertheless, the need for high-energy ultraviolet radiation in most organic photochemical processes has limited both the practicality and environmental benefits of photochemical synthesis on industrially relevant scales. This perspective describes recent approaches to the use of metal polypyridyl photocatalysts in synthetic organic transformations. Given the remarkable photophysical properties of these complexes, these new transformations, which use Ru(bpy)₃²⁺ and related photocatalysts, can be conducted using almost any source of visible light, including both store-bought fluorescent light bulbs and ambient sunlight. Transition metal photocatalysis thus represents a promising strategy towards the development of practical, scalable industrial processes with great environmental benefits.

  10. Performance of pond-wetland complexes as a preliminary processor of drinking water sources.

    PubMed

    Wang, Weidong; Zheng, Jun; Wang, Zhongqiong; Zhang, Rongbin; Chen, Qinghua; Yu, Xinfeng; Yin, Chengqing

    2016-01-01

    Shijiuyang Constructed Wetland (110 hm²) is a drinking water source treatment wetland whose primary structural units are ponds and plant-bed/ditch systems. The wetland can process about 250,000 tonnes of source water from the Xincheng River every day and supplies raw water for the Shijiuyang Drinking Water Plant. Daily data for 28 months indicated that the major water quality indexes of the source water had been improved by one grade. The percentage increase for dissolved oxygen and the removal rates of ammonia nitrogen, iron and manganese were 73.63%, 38.86%, 35.64%, and 22.14%, respectively. The treatment contributions of the ponds and plant-bed/ditch systems were roughly equal, but they preferentially treated different pollutants. Most water quality indexes showed better treatment efficacy with increasing temperature and inlet concentrations. These results revealed that the pond-wetland complexes exhibited strong buffering capacity for source water quality improvement. The treatment cost of the Shijiuyang Drinking Water Plant was reduced by about 30.3%. Regional rainfall significantly determined the external river water levels and deteriorated the inlet water quality, suggesting that the "hidden" diffuse pollution in the numerous stream branches and their catchments should be the focus of control for river source water protection in the future. The combination of pond and plant-bed/ditch systems provides a successful paradigm for drinking water source pretreatment. Three other drinking water source treatment wetlands with ponds and plant-bed/ditch systems are in operation or under construction in the stream networks of the Yangtze River Delta, and more people will benefit.

  11. Applications of open-path Fourier transform infrared for identification of volatile organic compound pollution sources and characterization of source emission behaviors.

    PubMed

    Lin, Chitsan; Liou, Naiwei; Sun, Endy

    2008-06-01

    An open-path Fourier transform infrared spectroscopy (OP-FTIR) system was set up for 3-day continuous line-averaged volatile organic compound (VOC) monitoring in a paint manufacturing plant. Seven VOCs (toluene, m-xylene, p-xylene, styrene, methanol, acetone, and 2-butanone) were identified in the ambient environment. Daytime-only batch operation mode was well explained by the time-series concentration plots. Major sources of methanol, m-xylene, acetone, and 2-butanone were identified in the southeast direction where paint solvent manufacturing processes are located. However, an attempt to uncover sources of styrene was not successful because the method detection limit (MDL) of the OP-FTIR system was not sensitive enough to produce conclusive data. In the second scenario, the OP-FTIR system was set up in an industrial complex to distinguish the origins of several VOCs. Eight major VOCs were identified in the ambient environment. The pollutant-detection wind-rose percentage plots clearly showed that ethylene, propylene, 2-butanone, and toluene mainly originated from the tank storage area, whereas the source of n-butane was mainly from the butadiene manufacturing processes of the refinery plant, and ammonia was identified as an accompanying reduction product in the gasoline desulfuration process. Advantages of OP-FTIR include its ability to simultaneously and continuously analyze many compounds, and its long path length monitoring has also shown advantages in obtaining more comprehensive data than the traditional multiple, single-point monitoring methods.

  12. Numerical modeling of landslides and generated seismic waves: The Bingham Canyon Mine landslides

    NASA Astrophysics Data System (ADS)

    Miallot, H.; Mangeney, A.; Capdeville, Y.; Hibert, C.

    2016-12-01

    Landslides are important natural hazards and key erosion processes. They create long-period surface waves that can be recorded by regional and global seismic networks. The seismic signals are generated by the acceleration/deceleration of the mass sliding over the topography. They constitute a unique and powerful tool for detecting, characterizing and quantifying landslide dynamics. We investigate here the processes at work during the two massive landslides that struck the Bingham Canyon Mine on 10 April 2013. We carry out a combined analysis of the generated seismic signals and of the landslide processes computed with 3D modeling on a complex topography. Forces computed by broadband seismic waveform inversion are used to constrain the study, in particular the force source and the bulk dynamics. The source time functions are obtained with a 3D model (Shaltop) in which rheological parameters can be adjusted. We first investigate the influence of the initial shape of the sliding mass, which strongly affects the whole landslide dynamics. We also find that the initial shape of the source mass of the first landslide constrains the source mass of the second landslide fairly well. We then investigate the effect of a rheological parameter, the friction angle, which strongly influences the resulting computed seismic source function. We test several friction laws, such as the Coulomb friction law and a velocity-weakening friction law. Our results show that the force waveform fitting the observed data is highly variable depending on these different choices.
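
    As an illustration of the contrast between the two friction laws mentioned above, the sketch below compares a constant Coulomb friction coefficient with one commonly used empirical velocity-weakening form; the functional form and parameter values are assumptions for illustration, not necessarily those used in this study.

      # Minimal sketch contrasting a constant Coulomb friction coefficient with one
      # common empirical velocity-weakening form; the parameter values and the
      # specific functional form are illustrative assumptions, not this study's.
      import numpy as np

      def mu_coulomb(v, mu_s=0.6):
          """Constant friction coefficient, independent of sliding velocity."""
          return np.full_like(np.asarray(v, dtype=float), mu_s)

      def mu_velocity_weakening(v, mu_s=0.6, mu_w=0.2, v_w=4.0):
          """Friction decays from mu_s toward mu_w as velocity grows past v_w (m/s)."""
          v = np.asarray(v, dtype=float)
          return mu_w + (mu_s - mu_w) / (1.0 + v / v_w)

      v = np.linspace(0.0, 40.0, 9)      # sliding velocities in m/s (illustrative range)
      print(np.round(mu_coulomb(v), 2))
      print(np.round(mu_velocity_weakening(v), 2))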

  13. An adaptable architecture for patient cohort identification from diverse data sources.

    PubMed

    Bache, Richard; Miles, Simon; Taweel, Adel

    2013-12-01

    We define and validate an architecture for systems that identify patient cohorts for clinical trials from multiple heterogeneous data sources. This architecture has an explicit query model capable of supporting temporal reasoning and expressing eligibility criteria independently of the representation of the data used to evaluate them. The architecture has the key feature that queries defined according to the query model are both pre- and post-processed, and this is used to address both structural and semantic heterogeneity. The process of extracting the relevant clinical facts is separated from the process of reasoning about them. A specific instance of the query model is then defined and implemented. We show that the specific instance of the query model has wide applicability. We then describe how it is used to access three diverse data warehouses to determine patient counts. Although the proposed architecture requires greater effort to implement the query model than would be the case for using just SQL and accessing a database management system directly, this effort is justified because it supports both temporal reasoning and heterogeneous data sources. The query model only needs to be implemented once, no matter how many data sources are accessed. Each additional source requires only the implementation of a lightweight adaptor. The architecture has been used to implement a specific query model that can express complex eligibility criteria and access three diverse data warehouses, thus demonstrating the feasibility of this approach in dealing with temporal reasoning and data heterogeneity.
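
    A minimal sketch of the adapter idea described above: one shared query model, with a lightweight adapter per data source that translates criteria into that source's native representation and returns patient counts. All class names, the query shape, and the pseudo-query string are invented for illustration.

      # Minimal sketch of the adapter idea described above: one shared query model
      # and a lightweight adapter per heterogeneous source that translates the
      # criteria and returns patient counts. All names, the query shape and the
      # pseudo-query string are invented for illustration.
      from abc import ABC, abstractmethod
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Criterion:
          concept: str        # e.g. "diagnosis:diabetes" (illustrative coding)
          within_days: int    # temporal constraint relative to an index date

      class SourceAdapter(ABC):
          @abstractmethod
          def count_patients(self, criteria: List[Criterion]) -> int: ...

      class SqlWarehouseAdapter(SourceAdapter):
          def count_patients(self, criteria: List[Criterion]) -> int:
              # Pre-processing: translate criteria into this warehouse's native query;
              # post-processing would apply temporal reasoning the source cannot express.
              native = " AND ".join(f"has('{c.concept}', {c.within_days})" for c in criteria)
              print("would run:", native)
              return 0   # placeholder count

      def cohort_count(adapters: List[SourceAdapter], criteria: List[Criterion]) -> int:
          # The query model is defined once; each source contributes via its adapter
          return sum(a.count_patients(criteria) for a in adapters)

      print(cohort_count([SqlWarehouseAdapter()], [Criterion("diagnosis:diabetes", 365)]))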

  14. ART AND SCIENCE OF IMAGE MAPS.

    USGS Publications Warehouse

    Kidwell, Richard D.; McSweeney, Joseph A.

    1985-01-01

    The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results, as the many contributing factors, some psychological, are in part unpredictable. Tonal relationships that affect digital image processing, and the transfer functions used to transform the continuous-tone source image into a lithographic image, may be interpreted to give insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.

  15. Modeling MultiCoil ICPs

    NASA Astrophysics Data System (ADS)

    Kolobov, V. I.; Vaidya, N.; Krishnan, A.

    1998-10-01

    Plasma processing of 300 mm wafers and flat panels places stringent demands on plasma uniformity across large surfaces. A natural solution toward a uniform plasma in a minimum discharge volume is to maintain the plasma with an array of individual sources. Although the design of the individual sources can differ considerably, there is a common feature of all such devices, which have recently been suggested by several groups: their essentially 3D geometry. Engineering design of these devices is a challenging task, and computational modeling can be a very useful tool. CFD Research Corp. has developed comprehensive software for virtual prototyping of ICP sources designed for complex 3D geometries with an unstructured solution-adaptive mesh. In this paper we shall present the results of our simulation of the multipole high-density source [1], which is an example of a MultiCoil ICP. We shall describe the procedure for solving the electromagnetic part of the problem using the magnetic vector potential and analyse design issues such as the size of dielectric windows. We shall present results of parametric studies of the source for different geometries, gas pressures and plasma densities for simple argon chemistry. [1] J. Ogle. Proc. VI Int. Workshop on Advanced Plasma Tools and Process Engineering, pp. 85-90, May 1998, Millbrae, USA.

  16. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
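
    As an illustration of one of the direction cues mentioned above (the interaural time difference), the sketch below estimates the delay between left- and right-ear signals by cross-correlation; the signals, sampling rate, and true delay are synthetic placeholders.

      # Minimal sketch of estimating an interaural time difference (one of the
      # direction cues mentioned above) by cross-correlating left/right ear signals.
      # The signals, sampling rate and true delay are synthetic placeholders.
      import numpy as np

      fs = 44100
      true_delay = 20                       # samples (~0.45 ms), assumed
      src = np.random.randn(4096)
      left = src
      right = np.concatenate([np.zeros(true_delay), src])[:len(src)]

      corr = np.correlate(right, left, mode="full")
      lag = np.argmax(corr) - (len(left) - 1)    # positive lag: sound reaches the left ear first
      print(f"estimated ITD: {lag} samples = {1e3 * lag / fs:.2f} ms")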

  17. Computer Forensics Education - the Open Source Approach

    NASA Astrophysics Data System (ADS)

    Huebner, Ewa; Bem, Derek; Cheung, Hon

    In this chapter we discuss the application of the open source software tools in computer forensics education at tertiary level. We argue that open source tools are more suitable than commercial tools, as they provide the opportunity for students to gain in-depth understanding and appreciation of the computer forensic process as opposed to familiarity with one software product, however complex and multi-functional. With the access to all source programs the students become more than just the consumers of the tools as future forensic investigators. They can also examine the code, understand the relationship between the binary images and relevant data structures, and in the process gain necessary background to become the future creators of new and improved forensic software tools. As a case study we present an advanced subject, Computer Forensics Workshop, which we designed for the Bachelor's degree in computer science at the University of Western Sydney. We based all laboratory work and the main take-home project in this subject on open source software tools. We found that without exception more than one suitable tool can be found to cover each topic in the curriculum adequately. We argue that this approach prepares students better for forensic field work, as they gain confidence to use a variety of tools, not just a single product they are familiar with.

  18. Toward a Network Model of MHC Class II-Restricted Antigen Processing

    PubMed Central

    Miller, Michael A.; Ganesan, Asha Purnima V.; Eisenlohr, Laurence C.

    2013-01-01

    The standard model of Major Histocompatibility Complex class II (MHCII)-restricted antigen processing depicts a straightforward, linear pathway: internalized antigens are converted into peptides that load in a chaperone dependent manner onto nascent MHCII in the late endosome, the complexes subsequently trafficking to the cell surface for recognition by CD4+ T cells (TCD4+). Several variations on this theme, both moderate and radical, have come to light but these alternatives have remained peripheral, the conventional pathway generally presumed to be the primary driver of TCD4+ responses. Here we continue to press for the conceptual repositioning of these alternatives toward the center while proposing that MHCII processing be thought of less in terms of discrete pathways and more in terms of a network whose major and minor conduits are variable depending upon many factors, including the epitope, the nature of the antigen, the source of the antigen, and the identity of the antigen-presenting cell. PMID:24379819

  19. Assimilation of cyanide and cyano-derivatives by Pseudomonas pseudoalcaligenes CECT5344: from omic approaches to biotechnological applications

    PubMed Central

    Cabello, Purificación; Luque-Almagro, Víctor M; Olaya-Abril, Alfonso; Sáez, Lara P; Moreno-Vivián, Conrado; Roldán, M Dolores

    2018-01-01

    Mining, jewellery and metal-processing industries use cyanide for extracting gold and other valuable metals, generating large amounts of highly toxic wastewater. Biological treatments may be, from an environmental point of view, a clean alternative to the conventional physical or chemical processes used to remove cyanide and related compounds from these industrial effluents. Pseudomonas pseudoalcaligenes CECT5344 can grow under alkaline conditions using cyanide, cyanate or different nitriles as the sole nitrogen source, and is able to remove up to 12 mM total cyanide from a jewellery industry wastewater that contains cyanide both free and complexed to metals. Complete genome sequencing of this bacterium has allowed the application of transcriptomic and proteomic techniques, providing a holistic view of the cyanide biodegradation process. The complex response to cyanide by the cyanotrophic bacterium P. pseudoalcaligenes CECT5344 and the potential biotechnological applications of this model organism in the bioremediation of cyanide-containing industrial residues are reviewed. PMID:29438505

  20. Assimilation of cyanide and cyano-derivatives by Pseudomonas pseudoalcaligenes CECT5344: from omic approaches to biotechnological applications.

    PubMed

    Cabello, Purificación; Luque-Almagro, Víctor M; Olaya-Abril, Alfonso; Sáez, Lara P; Moreno-Vivián, Conrado; Roldán, M Dolores

    2018-03-01

    Mining, jewellery and metal-processing industries use cyanide for extracting gold and other valuable metals, generating large amounts of highly toxic wastewater. Biological treatments may be a clean alternative, from an environmental point of view, to the conventional physical or chemical processes used to remove cyanide and related compounds from these industrial effluents. Pseudomonas pseudoalcaligenes CECT5344 can grow under alkaline conditions using cyanide, cyanate or different nitriles as the sole nitrogen source, and is able to remove up to 12 mM total cyanide from a jewellery industry wastewater that contains both free cyanide and cyanide complexed to metals. Complete genome sequencing of this bacterium has allowed the application of transcriptomic and proteomic techniques, providing a holistic view of the cyanide biodegradation process. The complex response to cyanide by the cyanotrophic bacterium P. pseudoalcaligenes CECT5344 and the potential biotechnological applications of this model organism in the bioremediation of cyanide-containing industrial residues are reviewed.

  1. A Study about Kalman Filters Applied to Embedded Sensors

    PubMed Central

    Valade, Aurélien; Acco, Pascal; Grabolosa, Pierre; Fourniols, Jean-Yves

    2017-01-01

    Over the last decade, smart sensors have grown in complexity and can now handle multiple measurement sources. This work establishes a methodology to achieve better estimates of physical values by processing raw measurements within a sensor using multi-physical models and Kalman filters for data fusion. A driving constraint being production cost and power consumption, this methodology focuses on algorithmic complexity while meeting real-time constraints and improving both precision and reliability despite low power processors limitations. Consequently, processing time available for other tasks is maximized. The known problem of estimating a 2D orientation using an inertial measurement unit with automatic gyroscope bias compensation will be used to illustrate the proposed methodology applied to a low power STM32L053 microcontroller. This application shows promising results with a processing time of 1.18 ms at 32 MHz with a 3.8% CPU usage due to the computation at a 26 Hz measurement and estimation rate. PMID:29206187
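
    The record above describes fusing gyroscope and accelerometer data with a Kalman filter while estimating the gyroscope bias. The following is a minimal, self-contained sketch of that kind of fusion for a single tilt angle; the two-state filter, noise values, sampling rate, and function name are illustrative assumptions, not taken from the paper or its STM32L053 implementation.

```python
# 2-state Kalman filter: estimate one tilt angle plus the gyroscope bias by
# fusing integrated gyro rate with an accelerometer-derived angle.
import numpy as np

def kalman_tilt(gyro_rate, accel_angle, dt=0.01,
                q_angle=1e-4, q_bias=1e-6, r_accel=1e-2):
    """Fuse gyroscope rate (rad/s) and accelerometer-derived angle (rad)."""
    x = np.zeros(2)                            # state: [angle, gyro_bias]
    P = np.eye(2)                              # state covariance
    F = np.array([[1.0, -dt], [0.0, 1.0]])     # state transition
    B = np.array([dt, 0.0])                    # control input (gyro rate)
    H = np.array([[1.0, 0.0]])                 # we observe the angle only
    Q = np.diag([q_angle, q_bias])             # process noise
    R = np.array([[r_accel]])                  # measurement noise
    estimates = []
    for omega, z in zip(gyro_rate, accel_angle):
        # predict: integrate the bias-corrected gyro rate
        x = F @ x + B * omega
        P = F @ P @ F.T + Q
        # update with the accelerometer angle
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)                 # columns: angle, bias

# Synthetic check: constant true angle, biased gyro, noisy accelerometer.
t = np.arange(0, 10, 0.01)
gyro = 0.05 + 0.01 * np.random.randn(t.size)   # zero motion + 0.05 rad/s bias
accel = 0.3 + 0.05 * np.random.randn(t.size)   # true angle 0.3 rad
est = kalman_tilt(gyro, accel)
print(est[-1])   # angle estimate near 0.3, bias estimate near 0.05
```

    On a microcontroller the same recursion would typically be written with fixed-size arrays and no dynamic allocation, which is what makes the low processing-time figures reported above plausible.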

  2. Semantic-Based Knowledge Management in E-Government: Modeling Attention for Proactive Information Delivery

    NASA Astrophysics Data System (ADS)

    Samiotis, Konstantinos; Stojanovic, Nenad

    E-government has become almost synonymous with a consumer-led revolution of government services inspired and made possible by the Internet. With technology being the least of the worries for government organizations nowadays, attention is shifting towards managing complexity as one of the basic antecedents of operational and decision-making inefficiency. Complexity has traditionally preoccupied public administrations and owes its origins to several sources. Among them we encounter primarily the cross-functional nature and the degree of legal structuring of administrative work. Both rely strongly on the underlying process and information infrastructure of public organizations. Managing public administration work thus implies managing its processes and information. Knowledge management (KM) and business process reengineering (BPR) have already been deployed with success by private organizations for the same purposes and certainly comprise improvement practices that are worth investigating. Our contribution in this paper concerns the utilization of KM for e-government.

  3. Stochastic production phase design for an open pit mining complex with multiple processing streams

    NASA Astrophysics Data System (ADS)

    Asad, Mohammad Waqar Ali; Dimitrakopoulos, Roussos; van Eldert, Jeroen

    2014-08-01

    In a mining complex, the mine is a source of supply of valuable material (ore) to a number of processes that convert the raw ore to a saleable product or a metal concentrate for production of the refined metal. In this context, expected variation in metal content throughout the extent of the orebody defines the inherent uncertainty in the supply of ore, which impacts the subsequent ore and metal production targets. Traditional optimization methods for designing production phases and ultimate pit limit of an open pit mine not only ignore the uncertainty in metal content, but, in addition, commonly assume that the mine delivers ore to a single processing facility. A stochastic network flow approach is proposed that jointly integrates uncertainty in supply of ore and multiple ore destinations into the development of production phase design and ultimate pit limit. An application at a copper mine demonstrates the intricacies of the new approach. The case study shows a 14% higher discounted cash flow when compared to the traditional approach.

  4. Spontaneous brain activity as a source of ideal 1/f noise

    NASA Astrophysics Data System (ADS)

    Allegrini, Paolo; Menicucci, Danilo; Bedini, Remo; Fronzoni, Leone; Gemignani, Angelo; Grigolini, Paolo; West, Bruce J.; Paradisi, Paolo

    2009-12-01

    We study the electroencephalogram (EEG) of 30 closed-eye awake subjects with a technique of analysis recently proposed to detect punctual events signaling rapid transitions between different metastable states. After single-EEG-channel event detection, we study global properties of events simultaneously occurring among two or more electrodes termed coincidences. We convert the coincidences into a diffusion process with three distinct rules that can yield the same μ only in the case where the coincidences are driven by a renewal process. We establish that the time interval between two consecutive renewal events driving the coincidences has a waiting-time distribution with inverse power-law index μ≈2 corresponding to ideal 1/f noise. We argue that this discovery, shared by all subjects of our study, supports the conviction that 1/f noise is an optimal communication channel for complex networks as in art or language and may therefore be the channel through which the brain influences complex processes and is influenced by them.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodgers, Theron M.; Madison, Jonathan D.; Tikare, Veena

    Additive manufacturing (AM) is of tremendous interest given its ability to realize complex, non-traditional geometries in engineered structural materials. However, microstructures generated from AM processes can be equally, if not more, complex than their conventionally processed counterparts. While some microstructural features observed in AM may also occur in more traditional solidification processes, the introduction of spatially and temporally mobile heat sources can result in significant microstructural heterogeneity. While grain size and shape in metal AM structures are known to be highly dependent on both local and global temperature profiles, the exact form of this relation is not well understood. We implement an idealized molten zone and temperature-dependent grain boundary mobility in a kinetic Monte Carlo model to predict three-dimensional grain structure in additively manufactured metals. In order to demonstrate the flexibility of the model, synthetic microstructures are generated under conditions mimicking relatively diverse experimental results present in the literature. Simulated microstructures are then qualitatively and quantitatively compared to their experimental complements and are shown to be in good agreement.
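
    To illustrate the kind of kinetic Monte Carlo grain-growth model mentioned above, here is a minimal 2-D Potts-model sketch. It deliberately omits the moving molten zone and the temperature-dependent mobility of the actual model; the lattice size, number of grain IDs and number of flip attempts are arbitrary assumptions for illustration only.

```python
# Minimal 2-D Potts-model (kinetic Monte Carlo) grain-growth sketch.
import numpy as np

rng = np.random.default_rng(0)
N, Q, STEPS = 64, 50, 100_000            # lattice size, grain IDs, flip attempts
grid = rng.integers(0, Q, size=(N, N))   # random initial grain assignment

def neighbors(i, j):
    # 4-connected neighborhood with periodic boundaries
    return [((i - 1) % N, j), ((i + 1) % N, j), (i, (j - 1) % N), (i, (j + 1) % N)]

def site_energy(grid, i, j, spin):
    # one unit of boundary energy per unlike neighbor
    return sum(grid[n] != spin for n in neighbors(i, j))

for _ in range(STEPS):
    i, j = rng.integers(0, N, size=2)
    # propose switching the site to a random neighbor's grain ID
    ni, nj = neighbors(i, j)[rng.integers(0, 4)]
    new_spin = grid[ni, nj]
    dE = site_energy(grid, i, j, new_spin) - site_energy(grid, i, j, grid[i, j])
    if dE <= 0:                          # zero-temperature acceptance: never raise energy
        grid[i, j] = new_spin

print("surviving grain IDs:", np.unique(grid).size)   # grains coarsen over time
```

    The published model adds a translating melt pool that re-randomizes sites it passes over and a boundary mobility that depends on the local temperature field, which is what produces the columnar, heat-source-aligned grains typical of AM builds.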

  6. Combined qualitative and quantitative research designs.

    PubMed

    Seymour, Jane

    2012-12-01

    Mixed methods research designs have been recognized as important in addressing complexity and are recommended particularly in the development and evaluation of complex interventions. This article reports a review of studies in palliative care published between 2010 and March 2012 that combine qualitative and quantitative approaches. A synthesis of approaches to mixed methods research taken in 28 examples of published research studies of relevance to palliative and supportive care is provided, using a typology based on a classic categorization put forward in 1992. Mixed-method studies are becoming more frequently employed in palliative care research and resonate with the complexity of the palliative care endeavour. Undertaking mixed methods research requires a sophisticated understanding of the research process and recognition of some of the underlying complexities encountered when working with different traditions and perspectives on issues of: sampling, validity, reliability and rigour, different sources of data and different data collection and analysis techniques.

  7. Coal Formation and Geochemistry

    NASA Astrophysics Data System (ADS)

    Orem, W. H.; Finkelman, R. B.

    2003-12-01

    Coal is one of the most complex and challenging natural materials to analyze and to understand. Unlike most rocks, which consist predominantly of crystalline mineral grains, coal is largely an assemblage of amorphous, degraded plant remains metamorphosed to various degrees and intermixed with a generous sprinkling of minute syngenetic, diagenetic, epigenetic, and detrital mineral grains, and containing within its structure various amounts of water, oils, and gases. Each coal is unique, having been derived from different plant sources over geologic time, having experienced different thermal histories, and having been exposed to varying geologic processes. This diversity presents a challenge to constructing a coherent picture of coal geochemistry and the processes that influence the chemical composition of coal. Despite the challenge coal presents to geochemists, a thorough understanding of the chemistry and geology of this complex natural substance is essential because of its importance to our society. Coal is, and will remain for some time, a crucial source of energy for the US and for many other countries (Figure 1). In the USA, more than half of the electricity is generated by coal-fired power plants, and almost 90% of the coal mined in the USA is sold for electricity generation (Pierce et al., 1996). It is also an important source of coke for steel production, chemicals, pharmaceuticals, and even perfumes (Schobert, 1987). It may also, in some cases, be an economic source of various mineral commodities. The utilization of coal through mining, transport, storage, combustion, and the disposal of the combustion by-products, also presents a challenge to geochemists because of the wide range of environmental and human health problems arising from these activities. The sound and effective use of coal as a natural resource requires a better understanding of the geochemistry of coal, i.e., the chemical and mineralogical characteristics of the coal that control its technological behavior, by-product characteristics, and environmental and human health impacts. In this chapter, we will try to make geochemical sense of this wonderfully complex and important resource. Figure 1. Photograph of a low rank coal bed (lignite of Pliocene age) from southwestern Romania.

  8. Melanoma cells present high levels of HLA-A2-tyrosinase in association with instability and aberrant intracellular processing of tyrosinase.

    PubMed

    Michaeli, Yael; Sinik, Keren; Haus-Cohen, Maya; Reiter, Yoram

    2012-04-01

    Short-lived protein translation products are proposed to be a major source of substrates for major histocompatibility complex (MHC) class I antigen processing and presentation; however, a direct link between protein stability and the presentation level of MHC class I-peptide complexes has not been made. We have recently discovered that the peptide Tyr(369-377), derived from the tyrosinase protein, is highly presented by HLA-A2 on the surface of melanoma cells. To examine the molecular mechanisms responsible for this presentation, we compared characteristics of tyrosinase in melanoma cell lines that present high or low levels of HLA-A2-Tyr(369-377) complexes. We found no correlation between mRNA levels and the levels of HLA-A2-Tyr(369-377) presentation. Co-localization experiments revealed that, in cell lines presenting low levels of HLA-A2-Tyr(369-377) complexes, tyrosinase co-localizes with LAMP-1, a melanosome marker, whereas in cell lines presenting high HLA-A2-Tyr(369-377) levels, tyrosinase localizes to the endoplasmic reticulum. We also observed differences in tyrosinase molecular weight and glycosylation composition as well as major differences in protein stability (t1/2). By stabilizing the tyrosinase protein, we observed a dramatic decrease in HLA-A2-tyrosinase presentation. Our findings suggest that aberrant processing and instability of tyrosinase are responsible for the high presentation of HLA-A2-Tyr(369-377) complexes and thus shed new light on the relationship between intracellular processing, stability of proteins, and MHC-restricted peptide presentation. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation.

    PubMed

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C; Wong, Willy; Daskalakis, Zafiris J; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprising multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research.
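
    One of the cleaning steps mentioned above is the targeted removal of the TMS pulse artifact. The sketch below shows a generic version of that step (cutting out the pulse window and interpolating across it) in plain numpy; it is not the TMSEEG toolbox's algorithm, and the sampling rate, artifact window, and function name are assumptions for illustration.

```python
# Generic TMS-pulse artifact removal: replace samples inside an assumed
# artifact window by interpolation from the surrounding clean samples.
import numpy as np

def interpolate_pulse(epoch, fs=1000.0, pulse_t=0.0, win=(-0.002, 0.010), t0=-0.5):
    """epoch: (n_channels, n_samples) array; t0: time of the first sample (s)."""
    cleaned = epoch.copy()
    n = epoch.shape[1]
    times = t0 + np.arange(n) / fs
    # samples inside the artifact window around the TMS pulse
    bad = (times >= pulse_t + win[0]) & (times <= pulse_t + win[1])
    good = ~bad
    for ch in range(epoch.shape[0]):
        # fill the artifactual samples by interpolating between clean samples
        cleaned[ch, bad] = np.interp(times[bad], times[good], epoch[ch, good])
    return cleaned

# usage on a fake 64-channel, 1-second epoch centred on the pulse
epoch = np.random.randn(64, 1000)
clean = interpolate_pulse(epoch)
```

    In the actual toolbox this step sits inside a larger workflow (epoching, bad-channel handling, ICA-based artifact rejection, filtering), with visual quality-control checks between stages.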

  10. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation

    PubMed Central

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprising multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research. PMID:27774054

  11. MRUniNovo: an efficient tool for de novo peptide sequencing utilizing the hadoop distributed computing framework.

    PubMed

    Li, Chuang; Chen, Tao; He, Qiang; Zhu, Yunping; Li, Kenli

    2017-03-15

    Tandem mass spectrometry-based de novo peptide sequencing is a complex and time-consuming process. The current algorithms for de novo peptide sequencing cannot rapidly and thoroughly process large mass spectrometry datasets. In this paper, we propose MRUniNovo, a novel tool for parallel de novo peptide sequencing. MRUniNovo parallelizes UniNovo based on the Hadoop compute platform. Our experimental results demonstrate that MRUniNovo significantly reduces the computation time of de novo peptide sequencing without sacrificing the correctness and accuracy of the results, and thus can process very large datasets that UniNovo cannot. MRUniNovo is an open source software tool implemented in Java. The source code and the parameter settings are available at http://bioinfo.hupo.org.cn/MRUniNovo/index.php. s131020002@hnu.edu.cn; taochen1019@163.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
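
    The speed-up described above comes from a simple data-parallel pattern: each spectrum can be sequenced independently, so spectra are partitioned across workers and the per-spectrum results are collected afterwards. The sketch below shows that pattern with local multiprocessing rather than Hadoop; it is not MRUniNovo's code, and `sequence_spectrum` is a hypothetical stand-in for scoring one spectrum.

```python
# Data-parallel processing of independent spectra (local illustration of the
# map step of a MapReduce-style de novo sequencing job).
from multiprocessing import Pool

def sequence_spectrum(spectrum):
    # placeholder: in the real pipeline this would run de novo sequencing
    peaks = spectrum["peaks"]
    return {"id": spectrum["id"], "best_peptide": "PEPTIDE", "n_peaks": len(peaks)}

def run_parallel(spectra, n_workers=4):
    # map step: each spectrum is processed independently on some worker;
    # the "reduce" here is simply collecting the per-spectrum results
    with Pool(n_workers) as pool:
        return pool.map(sequence_spectrum, spectra)

if __name__ == "__main__":
    spectra = [{"id": i, "peaks": [(100.0 + i, 1.0)]} for i in range(1000)]
    results = run_parallel(spectra)
    print(len(results), "spectra sequenced")
```

    On a Hadoop cluster the same decomposition is expressed as mappers over partitions of the spectrum files, which is what lets the tool scale to datasets a single-node run cannot handle.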

  12. Photocatalytic Oxidation of Oil Contaminated Water Using TiO2/UV

    NASA Astrophysics Data System (ADS)

    Vargas Solla, Monica; Romero Rojas, Jairo

    2017-04-01

    Oil is currently one of the most widely used energy sources in the world, for example in motor engines. This prevailing use of oil puts water sources at serious risk of pollution with compounds that are hard to remove, such as hydrocarbons. A group of water treatment processes known as Advanced Oxidation Processes seeks to treat water polluted with toxic refractory compounds, making its reuse more feasible and avoiding, or at least mitigating, the harmful effects of pollution on ecosystems. This investigation presents a heterogeneous photocatalysis water treatment technology, classified as an Advanced Oxidation Process, intended to treat water polluted with refractory compounds using TiO2 and UV light. Evidence of its efficiency in removing hydrocarbons from water polluted with used motor oil, an extremely important pollutant because of its complexity, toxicity and recalcitrant character, is also presented through COD, Oil and Grease, and Hydrocarbons analyses.

  13. Discourse comprehension in L2: Making sense of what is not explicitly said.

    PubMed

    Foucart, Alice; Romero-Rivas, Carlos; Gort, Bernharda Lottie; Costa, Albert

    2016-12-01

    Using ERPs, we tested whether L2 speakers can integrate multiple sources of information (e.g., semantic, pragmatic information) during discourse comprehension. We presented native speakers and L2 speakers with three-sentence scenarios in which the final sentence was highly causally related, intermediately related, or causally unrelated to its context; its interpretation therefore required simple or complex inferences. Native speakers revealed a gradual N400-like effect, larger in the causally unrelated condition than in the highly related condition, and falling in-between in the intermediately related condition, replicating previous results. In the crucial intermediately related condition, L2 speakers behaved like native speakers, however, showing extra processing in a later time-window. Overall, the results show that, when reading, L2 speakers are able to process information from the local context and prior information (e.g., world knowledge) to build global coherence, suggesting that they process different sources of information to make inferences online during discourse comprehension, like native speakers. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Accounting for Fault Roughness in Pseudo-Dynamic Ground-Motion Simulations

    NASA Astrophysics Data System (ADS)

    Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.; Dunham, Eric M.

    2017-09-01

    Geological faults comprise large-scale segmentation and small-scale roughness. These multi-scale geometrical complexities determine the dynamics of the earthquake rupture process, and therefore affect the radiated seismic wavefield. In this study, we examine how different parameterizations of fault roughness lead to variability in the rupture evolution and the resulting near-fault ground motions. Rupture incoherence naturally induced by fault roughness generates high-frequency radiation that follows an ω^-2 decay in displacement amplitude spectra. Because dynamic rupture simulations are computationally expensive, we test several kinematic source approximations designed to emulate the observed dynamic behavior. When simplifying the rough-fault geometry, we find that perturbations in local moment tensor orientation are important, while perturbations in local source location are not. Thus, a planar fault can be assumed if the local strike, dip, and rake are maintained. We observe that dynamic rake angle variations are anti-correlated with the local dip angles. Testing two parameterizations of a dynamically consistent Yoffe-type source-time function, we show that the seismic wavefield of the approximated kinematic ruptures well reproduces the radiated seismic waves of the complete dynamic source process. This finding opens a new avenue for an improved pseudo-dynamic source characterization that captures the effects of fault roughness on earthquake rupture evolution. By including also the correlations between kinematic source parameters, we outline a new pseudo-dynamic rupture modeling approach for broadband ground-motion simulation.
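
    The ω^-2 spectral behaviour mentioned above can be checked with a few lines of code: compute the amplitude spectrum of a smooth moment-rate pulse and fit its high-frequency fall-off. This is a generic illustration, not the paper's dynamic-rupture calculation; the corner frequency and pulse shape are arbitrary assumptions.

```python
# Check the high-frequency slope of the amplitude spectrum of a simple
# (Brune-like) moment-rate pulse; it should be close to -2 on a log-log plot.
import numpy as np

dt, n = 0.01, 4096
t = np.arange(n) * dt
fc = 0.5                                     # assumed corner frequency (Hz)
stf = t * np.exp(-2 * np.pi * fc * t)        # smooth source-time (moment-rate) pulse

spec = np.abs(np.fft.rfft(stf)) * dt         # amplitude spectrum
f = np.fft.rfftfreq(n, dt)

hf = (f > 10 * fc) & (f < 40)                # well above the corner frequency
slope = np.polyfit(np.log(f[hf]), np.log(spec[hf]), 1)[0]
print("high-frequency spectral slope ~", round(slope, 2))   # close to -2
```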

  15. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation, and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well resolved, robust, and hence reliable source-rupture models are integral to better understanding earthquake source physics and to improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise to rigorously assess the performance and reliability of current inversion methods and to discuss future developments.

  16. Precisely controlled fabrication, manipulation and in-situ analysis of Cu based nanoparticles.

    PubMed

    Martínez, L; Lauwaet, K; Santoro, G; Sobrado, J M; Peláez, R J; Herrero, V J; Tanarro, I; Ellis, G J; Cernicharo, J; Joblin, C; Huttel, Y; Martín-Gago, J A

    2018-05-08

    The increasing demand for nanostructured materials is mainly motivated by their key role in a wide variety of technologically relevant fields such as biomedicine, green sustainable energy or catalysis. We have succeeded in scaling up a type of gas aggregation source, called a multiple ion cluster source, for the generation of complex, ultra-pure nanoparticles made of different materials. The high production rates achieved (tens of g/day) for this kind of gas aggregation source, and the inherent ability to control the structure of the nanoparticles in a controlled environment, make this equipment appealing for industrial purposes, a highly coveted aspect since the introduction of this type of source. Furthermore, our innovative UHV experimental station also includes in-flight manipulation and processing capabilities by annealing, acceleration, or interaction with background gases, along with in-situ characterization of the clusters and nanoparticles fabricated. As an example to demonstrate some of the capabilities of this new equipment, herein we present the fabrication of copper nanoparticles and their processing, including controlled oxidation (from Cu(0) to CuO through Cu2O, and their mixtures) at different stages in the machine.

  17. Combining chemometric tools for assessing hazard sources and factors acting simultaneously in contaminated areas. Case study: "Mar Piccolo" Taranto (South Italy).

    PubMed

    Mali, Matilda; Dell'Anna, Maria Michela; Notarnicola, Michele; Damiani, Leonardo; Mastrorilli, Piero

    2017-10-01

    Almost all marine coastal ecosystems possess complex structural and dynamic characteristics, which are influenced by both anthropogenic causes and natural processes. Revealing the impact of the sources and factors controlling the spatial distributions of contaminants within highly polluted areas is a fundamental preliminary step in their quality evaluation. Combining different pattern recognition techniques, applied to one of the most polluted Mediterranean coastal basins, resulted in a more reliable hazard assessment. PCA/CA and factorial ANOVA were exploited as complementary techniques for apprehending the impact of multiple sources and factors acting simultaneously and leading to similarities or differences in the spatial contamination pattern. The combination of PCA/CA and factorial ANOVA allowed, on the one hand, determination of the main processes and factors controlling the contamination trend within different layers and different basins and, on the other hand, assessment of possible synergistic effects. This approach showed the significance of a spatially representative overview, given by the combination of PCA-CA/ANOVA, in inferring the historical anthropogenic source loads on the area. Copyright © 2017 Elsevier Ltd. All rights reserved.
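
    As a rough illustration of the chemometric workflow described above, the sketch below runs a PCA on a standardized contaminant-concentration matrix and then a per-element ANOVA across basins. The data are synthetic, and a one-way ANOVA is used only to keep the example short (the study itself applies factorial ANOVA); none of this reproduces the paper's actual dataset or model.

```python
# PCA on standardized concentrations plus a simple one-way ANOVA per element.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# rows = sediment samples, columns = contaminant concentrations (e.g. metals)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(60, 8))
basin = np.repeat([0, 1, 2], 20)           # which basin each sample came from

# --- PCA on standardized concentrations ---
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
scores = Z @ Vt.T                           # sample scores on the principal components
print("variance explained by PC1, PC2:", explained[:2].round(2))
print("first samples on PC1/PC2:\n", scores[:3, :2].round(2))

# --- one-way ANOVA for each element across the three basins ---
for j in range(X.shape[1]):
    groups = [X[basin == b, j] for b in (0, 1, 2)]
    F, p = stats.f_oneway(*groups)
    print(f"element {j}: F = {F:.2f}, p = {p:.3f}")
```

    The combined reading of the two outputs mirrors the paper's logic: PCA/CA groups samples by shared contamination patterns, while the ANOVA tests whether factors such as basin or layer explain the observed differences.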

  18. Within-storm and Seasonal Differences in Particulate Organic Material Composition and Sources in White Clay Creek, USA

    NASA Astrophysics Data System (ADS)

    Karwan, D. L.; Aufdenkampe, A. K.; Aalto, R. E.; Newbold, J. D.; Pizzuto, J. E.

    2011-12-01

    The material exported from a watershed reflects its origin and the processes it undergoes during downhill and downstream transport. Due to its nature as a complex mixture of material, the composition of POM integrates the physical, biological, and chemical processes affecting watershed material. In this study, we integrate sediment fingerprint analyses common in geomorphological studies of mineral suspended particulate material (SPM) with biological and ecological characterizations of particulate organic carbon (POC). Through this combination, we produce quantifiable budgets of particulate organic carbon and mineral material, as well as integrate our calculations of carbon and mineral cycling in a complex, human-influenced watershed. More specifically, we quantify the composition and sources of POM in the third-order White Clay Creek Watershed, and examine the differences in composition and source with hydrologic variations produced by storms and seasonality. POM and watershed sources have been analyzed for particle size, mineral surface area, total mineral elemental composition, fallout radioisotope activity for common erosion tracers (7Be, 210Pb, 137Cs), and organic carbon and nitrogen content with stable isotope (13C, 15N) abundance. Results indicate a difference in POM source with season as well as within individual storms. Beryllium-7 activity, an indicator of landscape surface erosion, nearly triples within a single spring storm, from 389 mBq/g on the rising limb to 1190 mBq/g at the storm hydrograph peak. Fall storms have even lower 7Be concentrations, below 100 mBq/g. Furthermore, weight-percent of organic carbon nearly doubles from 4 - 5% during spring storms to over 8% during fall storms, with smaller variation occurring within individual storms. Despite changes in percent organic carbon, organic carbon to mineral surface area ratios and carbon to nitrogen molar ratios remain similar within storms and across seasons.

  19. Synthesis of Amino Acid Precursors with Organic Solids in Planetesimals with Liquid Water

    NASA Technical Reports Server (NTRS)

    Kebukawa, Y; Misawa, S.; Matsukuma, J.; Chan, Q. H. S.; Kobayashi, J.; Tachibana, S.; Zolensky, M. E.

    2017-01-01

    Amino acids are important ingredients of life that would have been delivered to Earth by extraterrestrial sources, e.g., comets and meteorites. Amino acids are found in aqueously altered carbonaceous chondrites in good part in the form of precursors that release amino acids after acid hydrolysis. Meanwhile, most of the organic carbon (greater than 70 weight %) in carbonaceous chondrites exists in the form of solvent insoluble organic matter (IOM) with complex macromolecular structures. Complex macromolecular organic matter can be produced by either photolysis of interstellar ices or aqueous chemistry in planetesimals. We focused on the synthesis of amino acids during aqueous alteration, and demonstrated one-pot synthesis of a complex suite of amino acids simultaneously with IOM via hydrothermal experiments simulating the aqueous processing in planetesimals.

  20. Nature and Nurture of Human Pain

    PubMed Central

    2013-01-01

    Humans are very different when it comes to pain. Some get painful piercings and tattoos; others cannot stand even a flu shot. Interindividual variability is one of the main characteristics of human pain at every level, including the processing of nociceptive impulses at the periphery, modification of the pain signal in the central nervous system, perception of pain, and response to analgesic strategies. As for many other complex behaviors, the sources of this variability come from both nurture (environment) and nature (genes). Here, I will discuss how these factors contribute to human pain separately and via interplay and how epigenetic mechanisms add to the complexity of their effects. PMID:24278778

  1. a Prestellar Core 3MM Line Survey: Molecular Complexity in L183

    NASA Astrophysics Data System (ADS)

    Lattanzi, Valerio; Bizzocchi, Luca; Caselli, Paola

    2017-06-01

    Cold dark clouds represent a unique environment in which to test our knowledge of the chemical and physical evolution of the structures that ultimately led to life. Starless cores, such as L183, are indeed the first phase of the star formation process and the nursery of chemical complexity. In this work we present the detection of several large astronomical molecules in the prestellar core L183, as a result of a 3mm single-pointing survey performed with the IRAM 30m antenna. The abundances of the observed species will then be compared to those found in similar environments, highlighting the correspondences and unique features of the different sources.

  2. Switches from pi- to sigma-bonding complexes controlled by gate voltages.

    PubMed

    Matsui, Eriko; Harnack, Oliver; Matsuzawa, Nobuyuki N; Yasuda, Akio

    2005-10-01

    A conjugated polymer/metal ion/liquid-crystal molecular system was set between source and drain electrodes with a 100 nm gap. When gate voltage (Vg) increases, the current between source and drain electrodes increases. Infrared spectra show this system to be composed of pi and sigma complexes. At Vg = 0, the pi complex dominates the sigma complex, whereas the sigma complex becomes dominant when Vg is switched on. Calculations found that the pi complex has lower conductivity than the sigma complex.

  3. Development of morphogen gradient: The role of dimension and discreteness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teimouri, Hamid; Kolomeisky, Anatoly B.

    2014-02-28

    The fundamental processes of biological development are governed by multiple signaling molecules that create non-uniform concentration profiles known as morphogen gradients. It is widely believed that the establishment of morphogen gradients is a result of complex processes that involve diffusion and degradation of locally produced signaling molecules. We developed a multi-dimensional discrete-state stochastic approach for investigating the corresponding reaction-diffusion models. It provided a full analytical description for stationary profiles and for important dynamic properties such as local accumulation times, variances, and mean first-passage times. The role of discreteness in the development of morphogen gradients is analyzed by comparison with available continuum descriptions. It is found that the continuum models' prediction of multiple time scales near the source region in two-dimensional and three-dimensional systems is not supported by our analysis. Using ideas that view the degradation process as an effective potential, the effect of dimensionality on the establishment of morphogen gradients is also discussed. In addition, we investigated how these reaction-diffusion processes are modified when the size of the source region is changed.
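
    For readers unfamiliar with the synthesis-diffusion-degradation picture behind morphogen gradients, here is a minimal 1-D continuum finite-difference sketch (not the authors' discrete-state stochastic formalism). The parameter values are arbitrary illustrative choices; the steady profile decays exponentially with length λ = sqrt(D/k).

```python
# 1-D synthesis-diffusion-degradation model: flux injection at x = 0,
# diffusion with constant D, linear degradation with rate k.
import numpy as np

D, k, J = 1.0, 0.5, 1.0          # diffusion const., degradation rate, source flux
L, nx = 20.0, 200
dx = L / nx
dt = 0.2 * dx**2 / D             # stable explicit time step
c = np.zeros(nx)

for _ in range(50_000):          # integrate to (effectively) steady state
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = (c[1] - c[0]) / dx**2 + J / (D * dx)   # flux injection at x = 0
    lap[-1] = (c[-2] - c[-1]) / dx**2               # reflecting far boundary
    c = c + dt * (D * lap - k * c)

x = (np.arange(nx) + 0.5) * dx
lam = np.sqrt(D / k)                                # theoretical decay length
fitted = -1.0 / np.polyfit(x[:80], np.log(c[:80]), 1)[0]
print("fitted decay length:", round(fitted, 3), " theoretical:", round(lam, 3))
```

    The paper's point is that a discrete-state treatment of the same process changes some conclusions drawn from such continuum models, notably the predicted time scales near the source in two and three dimensions.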

  4. An efficient ASIC implementation of 16-channel on-line recursive ICA processor for real-time EEG system.

    PubMed

    Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    We propose an efficient very-large-scale integration (VLSI) design: a 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented with TSMC 40 nm CMOS technology. ORICA is well suited to separating artifacts in real-time EEG systems because of its efficiency and real-time processing features. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed ORICA processor has enhanced effectiveness and reduced hardware complexity, achieved by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen-channel random signals containing 8 super-Gaussian and 8 sub-Gaussian components are used to analyze the dependence of the source components; the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the proposed ORICA processor ASIC is implemented with TSMC 40 nm CMOS technology and consumes 15.72 mW at a 100 MHz operating frequency.

  5. Overview of the NASA Wallops Flight Facility Mobile Range Control System

    NASA Technical Reports Server (NTRS)

    Davis, Rodney A.; Semancik, Susan K.; Smith, Donna C.; Stancil, Robert K.

    1999-01-01

    The NASA GSFC's Wallops Flight Facility (WFF) Mobile Range Control System (MRCS) is based on the functionality of the WFF Range Control Center at Wallops Island, Virginia. The MRCS provides real time instantaneous impact predictions, real time flight performance data, and other critical information needed by mission and range safety personnel in support of range operations at remote launch sites. The MRCS integrates a PC telemetry processing system (TELPro), a PC radar processing system (PCDQS), multiple Silicon Graphics display workstations (IRIS), and communication links within a mobile van for worldwide support of orbital, suborbital, and aircraft missions. This paper describes the MRCS configuration; the TELPro's capability to provide single/dual telemetry tracking and vehicle state data processing; the PCDQS' capability to provide real time positional data and instantaneous impact prediction for up to 8 data sources; and the IRIS' user interface for setup/display options. With portability, PC-based data processing, high resolution graphics, and flexible multiple source support, the MRCS system is proving to be responsive to the ever-changing needs of a variety of increasingly complex missions.

  6. Marine Emissions and Atmospheric Processing Influence Aerosol Mixing States in the Bering Strait and Chukchi Sea

    NASA Astrophysics Data System (ADS)

    Kirpes, R.; Rodriguez, B.; Kim, S.; Park, K.; China, S.; Laskin, A.; Pratt, K.

    2017-12-01

    The Arctic region is rapidly changing due to sea ice loss and increasing oil/gas development and shipping activity. These changes influence aerosol sources and composition, resulting in complex aerosol-cloud-climate feedbacks. Atmospheric particles were collected aboard the R/V Araon in July-August 2016 in the Alaskan Arctic along the Bering Strait and Chukchi Sea. Offline analysis of individual particles by microscopic and spectroscopic techniques provided information on particle size, morphology, and chemical composition. Sea spray aerosol (SSA) and organic aerosol (OA) particles were the most commonly observed particle types, and sulfate was internally mixed with both SSA and OA. Evidence of multiphase sea spray aerosol reactions was observed, with varying degrees of chlorine depletion observed along the cruise. Notably, atmospherically processed SSA, completely depleted in chlorine, and internally mixed organic and sulfate particles, were observed in samples influenced by the central Arctic Ocean. Changes in particle composition due to fog processing were also investigated. Due to the changing aerosol sources and atmospheric processes in the Arctic region, it is crucial to understand aerosol composition in order to predict climate impacts.

  7. Numerical modelling of biomass combustion: Solid conversion processes in a fixed bed furnace

    NASA Astrophysics Data System (ADS)

    Karim, Md. Rezwanul; Naser, Jamal

    2017-06-01

    Increasing demand for energy and rising concerns over global warming have encouraged the use of renewable energy sources for sustainable development. Biomass is a renewable energy source that has become an important fuel for producing thermal energy or electricity. It is an eco-friendly source of energy as it reduces carbon dioxide emissions. Combustion of solid biomass is a complex phenomenon due to the large variety of fuels and physical structures. Among various systems, fixed bed combustion is the most commonly used technique for thermal conversion of solid biomass, but inadequate knowledge of the complex solid conversion processes has limited the development of such combustion systems. Numerical modelling of this combustion system has some advantages over experimental analysis: many important system parameters (e.g. temperature, density, solid fraction) can be estimated throughout the entire domain under different working conditions. In this work, a complete numerical model is used for the solid conversion processes of biomass combustion in a fixed bed furnace. The combustion system is divided into solid and gas phases. The model includes several sub-models to characterize the solid phase of the combustion with several variables. User-defined subroutines are used to introduce the solid-phase variables into a commercial CFD code, while the gas phase of combustion is resolved using built-in modules of the CFD code. The heat transfer model is modified to predict the temperatures of the solid and gas phases, with a special radiation heat transfer solution accounting for the high absorptivity of the medium. Considering all solid conversion processes, the solid-phase variables are evaluated. Results are discussed with reference to data from an experimental burner.

  8. Integrating technology into complex intervention trial processes: a case study.

    PubMed

    Drew, Cheney J G; Poile, Vincent; Trubey, Rob; Watson, Gareth; Kelson, Mark; Townson, Julia; Rosser, Anne; Hood, Kerenza; Quinn, Lori; Busse, Monica

    2016-11-17

    Trials of complex interventions are associated with high costs and burdens in terms of paperwork, management, data collection, validation, and intervention fidelity assessment occurring across multiple sites. Traditional data collection methods rely on paper-based forms, where processing can be time-consuming and error rates high. Electronic source data collection can potentially address many of these inefficiencies, but has not routinely been used in complex intervention trials. Here we present the use of an on-line system for managing all aspects of data handling and for the monitoring of trial processes in a multicentre trial of a complex intervention. We custom built a web-accessible software application for the delivery of ENGAGE-HD, a multicentre trial of a complex physical therapy intervention. The software incorporated functionality for participant randomisation, data collection and assessment of intervention fidelity. It was accessible to multiple users with differing levels of access depending on required usage or to maintain blinding. Each site was supplied with a 4G-enabled iPad for accessing the system. The impact of this system was quantified through review of data quality and collation of feedback from site coordinators and assessors through structured process interviews. The custom-built system was an efficient tool for collecting data and managing trial processes. Although the set-up time required was significant, using the system resulted in an overall data completion rate of 98.5% with a data query rate of 0.1%, the majority of which were resolved in under a week. Feedback from research staff indicated that the system was highly acceptable for use in a research environment. This was a reflection of the portability and accessibility of the system when using the iPad and its usefulness in aiding accurate data collection, intervention fidelity and general administration. A combination of commercially available hardware and a bespoke online database designed to support data collection, intervention fidelity and trial progress provides a viable option for streamlining trial processes in a multicentre complex intervention trial. There is scope to further extend the system to cater for larger trials and add further functionality such as automatic reporting facilities and participant management support. ISRCTN65378754 , registered on 13 March 2014.

  9. Childbirth in aristocratic households of Heian Japan.

    PubMed

    Andreeva, Anna

    2014-01-01

    This paper focuses on childbirth in Japan's aristocratic households during the Heian period (794-1185). Drawing on various sources, including court diaries, visual sources, literary records, and Japan's first medical collection, with its assortment of gynaecological and obstetric prescriptions, as well as Buddhist and other ritual texts, this short excursion into the cultural history of childbirth offers an insight into how childbirth was experienced and managed in Heian Japan. In particular, it addresses the variety of ideas, knowledge systems and professionals involved in framing and supporting the process of childbirth in elite households. In so doing, it casts light on the complex background of early Japanese medicine and healthcare for women.

  10. The Role of Bioreactors in Ligament and Tendon Tissue Engineering.

    PubMed

    Mace, James; Wheelton, Andy; Khan, Wasim S; Anand, Sanj

    2016-01-01

    Bioreactors are pivotal to the emerging field of tissue engineering. The formation of neotissue from pluripotent cell lineages potentially offers a source of tissue for clinical use without the significant donor site morbidity associated with many contemporary surgical reconstructive procedures. Modern bioreactor design is becoming increasingly complex in order to provide both an expandable source of readily available pluripotent cells and to facilitate their controlled differentiation into a clinically applicable ligament- or tendon-like neotissue. This review presents the need for such a method, the challenges in engineering neotissue, and the current designs and results of modern bioreactors in the pursuit of engineered tendon and ligament.

  11. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  12. Interactive, open source, travel time scenario modelling: tools to facilitate participation in health service access analysis.

    PubMed

    Fisher, Rohan; Lassa, Jonatan

    2017-04-18

    Modelling travel time to services has become a common public health tool for planning service provision but the usefulness of these analyses is constrained by the availability of accurate input data and limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world where access to basic data is limited and travel is often complex and multi-modal. Improving the accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more wide-spread use of GIS analysis of service access and allow for complex spatial and temporal variations in service availability. These applications are an open source GIS tool-kit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process for conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modeled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding with the goal of producing better planning outcomes. The accessible, flexible, interactive and responsive nature of the applications described has the potential to allow complex environmental social and political considerations to be incorporated and visualised. Through supporting evidence-based planning the innovative modelling practices described have the potential to help local health and emergency response planning in the developing world.
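
    The core computation behind the travel-time tools described above is a cost-distance calculation over a raster. The sketch below is a generic version of that idea (Dijkstra's algorithm over a "friction" surface from a set of facility cells), not the authors' toolkit; the raster values and facility locations are synthetic.

```python
# Cost-distance (travel time) from the nearest facility over a friction raster.
import heapq
import numpy as np

def travel_time(friction, facilities):
    """friction: (rows, cols) minutes to cross each cell; facilities: list of (r, c)."""
    rows, cols = friction.shape
    tt = np.full((rows, cols), np.inf)
    pq = []
    for r, c in facilities:
        tt[r, c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        t, r, c = heapq.heappop(pq)
        if t > tt[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # cost of moving between cells: mean of the two cell frictions
                nt = t + 0.5 * (friction[r, c] + friction[nr, nc])
                if nt < tt[nr, nc]:
                    tt[nr, nc] = nt
                    heapq.heappush(pq, (nt, nr, nc))
    return tt

friction = np.random.uniform(1.0, 10.0, size=(100, 100))   # minutes per cell
tt = travel_time(friction, facilities=[(10, 10), (80, 70)])
print("share of cells within 60 minutes:", float((tt <= 60).mean()))
```

    Making this kind of calculation interactive, so that local users can redraw the friction surface (roads, seasonal flooding, transport modes) and re-run scenarios quickly, is essentially what the applications described above aim to support.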

  13. Discrete Cu(i) complexes for azide–alkyne annulations of small molecules inside mammalian cells

    PubMed Central

    Miguel-Ávila, Joan; Tomás-Gamasa, María; Olmos, Andrea

    2018-01-01

    The archetype reaction of “click” chemistry, namely, the copper-promoted azide–alkyne cycloaddition (CuAAC), has found an impressive number of applications in biological chemistry. However, methods for promoting intermolecular annulations of exogenous, small azides and alkynes in the complex interior of mammalian cells, are essentially unknown. Herein we demonstrate that isolated, well-defined copper(i)–tris(triazolyl) complexes featuring designed ligands can readily enter mammalian cells and promote intracellular CuAAC annulations of small, freely diffusible molecules. In addition to simplifying protocols and avoiding the addition of “non-innocent” reductants, the use of these premade copper complexes leads to more efficient processes than with the alternative, in situ made copper species prepared from Cu(ii) sources, tris(triazole) ligands and sodium ascorbate. Under the reaction conditions, the well-defined copper complexes exhibit very good cell penetration properties, and do not present significant toxicities. PMID:29675241

  14. Fingerprinting selection for agroenvironmental catchment studies: EDXRF analysis for solving complex artificial mixtures

    NASA Astrophysics Data System (ADS)

    Torres Astorga, Romina; Velasco, Hugo; Dercon, Gerd; Mabit, Lionel

    2017-04-01

    Soil erosion and the associated sediment transport and deposition processes are key environmental problems in Central Argentinian watersheds. Several land use practices - such as intensive grazing and crop cultivation - are considered likely to significantly increase land degradation and soil/sediment erosion processes. Characterized by highly erodible soils, the sub-catchment Estancia Grande (12.3 km2), located 23 km northeast of San Luis, has been investigated using sediment source fingerprinting techniques to identify critical hot spots of land degradation. The authors created 4 artificial mixtures using known quantities of the most representative sediment sources of the studied catchment. The first mixture was made using four rotation crop soil sources. The second and the third mixture were created using different proportions of 4 different soil sources including soils from a feedlot, a rotation crop, a walnut forest and a grazing soil. The last tested mixture contained the same sources as the third mixture but with the addition of a fifth soil source (i.e. a native bank soil). The Energy Dispersive X-Ray Fluorescence (EDXRF) analytical technique has been used to reconstruct the source sediment proportions of the original mixtures. Besides using traditional methods of fingerprint selection such as the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA), the authors used the actual source proportions in the mixtures and selected, from the subset of tracers that passed the statistical tests, specific elemental tracers that were in agreement with the expected mixture contents. The selection process ended by testing, in a mixing model, all possible combinations of the reduced set of tracers. Alkaline earth metals, especially strontium (Sr) and barium (Ba), were identified as the most effective fingerprints and provided a low Mean Absolute Error (MAE) of approximately 2% when reconstructing the 4 artificial mixtures. This study demonstrates that the EDXRF fingerprinting approach performed very well in reconstructing our original mixtures, especially in identifying and quantifying the contribution of the 4 rotation crop soil sources in the first mixture.
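
    The un-mixing step referred to above can be written as a small constrained least-squares problem: find non-negative source proportions, summing to one, that best reproduce the tracer concentrations measured in the mixture. The sketch below uses synthetic tracer values and a generic optimizer; the study's actual tracer selection and mixing model differ.

```python
# Sediment un-mixing by constrained least squares over source proportions.
import numpy as np
from scipy.optimize import minimize

# rows = candidate sources, columns = tracer concentrations (e.g. Sr, Ba, ...)
sources = np.array([
    [120.0, 450.0, 15.0],    # rotation crop soil
    [300.0, 200.0, 40.0],    # feedlot
    [ 80.0, 600.0,  8.0],    # grazing soil
])
true_p = np.array([0.5, 0.3, 0.2])
mixture = true_p @ sources                     # tracers of an "artificial mixture"

def misfit(p):
    # squared difference between observed and modelled mixture tracers
    return np.sum((mixture - p @ sources) ** 2)

n = sources.shape[0]
res = minimize(
    misfit,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    method="SLSQP",
)
print("estimated proportions:", res.x.round(3))   # should recover ~[0.5, 0.3, 0.2]
```

    Evaluating such a model against mixtures of known composition, as done in the study, gives a direct measure of how well a given set of tracers can resolve the sources before the method is applied to field sediments.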

  15. Characterization and quantification of suspended sediment sources to the Manawatu River, New Zealand.

    PubMed

    Vale, S S; Fuller, I C; Procter, J N; Basher, L R; Smith, I E

    2016-02-01

    Knowledge of sediment movement throughout a catchment environment is essential because of its influence on the character and form of the landscape, with implications for agricultural productivity and ecological health. Sediment fingerprinting is a well-used tool for evaluating sediment sources within a fluvial catchment but still faces areas of uncertainty for applications to large catchments that have a complex arrangement of sources. Sediment fingerprinting was applied to the Manawatu River Catchment to differentiate 8 geological and geomorphological sources. The source categories were Mudstone, Hill Subsurface, Hill Surface, Channel Bank, Mountain Range, Gravel Terrace, Loess and Limestone. Geochemical analysis was conducted using XRF and LA-ICP-MS. Geochemical concentrations were analysed using Discriminant Function Analysis and sediment un-mixing models. Two mixing models were used in conjunction with GRG non-linear and Evolutionary optimization methods for comparison. Discriminant Function Analysis required 16 variables to correctly classify 92.6% of sediment sources. Geological explanations were achieved for some of the variables selected, although mineralogical information is needed to confirm the causes of the geochemical signatures. Consistent source estimates were achieved between models, with the optimization techniques providing globally optimal solutions for sediment quantification. Sediment was attributed primarily to Mudstone (≈38-46%), followed by the Mountain Range (≈15-18%), Hill Surface (≈12-16%), Hill Subsurface (≈9-11%), Loess (≈9-15%), Gravel Terrace (≈0-4%), Channel Bank (≈0-5%), and Limestone (≈0%). The sediment source apportionment fits the conceptual understanding of the catchment, in which soft sedimentary mudstone is recognized as highly susceptible to erosion. Inference of the processes responsible for sediment generation can be made where there is a clear relationship with the geomorphology, but is problematic for processes that occur across multiple terrains. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. An Open-Source Sandbox for Increasing the Accessibility of Functional Programming to the Bioinformatics and Scientific Communities

    PubMed Central

    Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R.; Ellis, Heidi JC; Hinman, M. Lee; Vyas, Jay; Gryk, Michael R.

    2012-01-01

    Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adoption of functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, and this is partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming. We present a multi-language source-code repository for software integration and algorithm development, which generally focuses on the fields of machine learning, data processing, and bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org). PMID:25328913
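
    The functional style the repository promotes can be illustrated, even outside LISP or Haskell, with pure functions and higher-order composition. The short Python sketch below is a generic illustration of that style; it is not code taken from the CONNJUR-Sandbox repository.

        # A small, generic illustration of the functional style such sandboxes promote:
        # pure functions, higher-order functions, and composition.
        from functools import reduce

        def compose(*funcs):
            """Compose single-argument functions, applied left to right."""
            return reduce(lambda f, g: lambda x: g(f(x)), funcs)

        def normalize(values):
            """Pure function: scale values so they sum to 1 (no mutation of the input)."""
            total = sum(values)
            return [v / total for v in values]

        def threshold(cutoff):
            """Return a new function that keeps only values above the cutoff."""
            return lambda values: [v for v in values if v > cutoff]

        pipeline = compose(normalize, threshold(0.2))
        print(pipeline([1.0, 3.0, 6.0]))   # -> [0.3, 0.6]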

  17. pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library.

    PubMed

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2014-01-01

    pyOpenMS is an open-source, Python-based interface to the C++ OpenMS library, providing facile access to a feature-rich, open-source algorithm library for MS-based proteomics analysis. It contains Python bindings that allow raw access to the data structures and algorithms implemented in OpenMS, specifically those for file access (mzXML, mzML, TraML, mzIdentML among others), basic signal processing (smoothing, filtering, de-isotoping, and peak-picking) and complex data analysis (including label-free, SILAC, iTRAQ, and SWATH analysis tools). pyOpenMS thus allows fast prototyping and efficient workflow development in a fully interactive manner (using the interactive Python interpreter) and is also ideally suited for researchers not proficient in C++. In addition, our code to wrap a complex C++ library is completely open-source, allowing other projects to create similar bindings with ease. The pyOpenMS framework is freely available at https://pypi.python.org/pypi/pyopenms while the autowrap tool to create Cython code automatically is available at https://pypi.python.org/pypi/autowrap (both released under the 3-clause BSD licence). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
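
    A minimal usage sketch of the bindings described above: load an mzML file and iterate over its spectra. The file name is a placeholder, and exact class and method names may vary between pyOpenMS releases.

        # Minimal pyOpenMS sketch: read raw data from an mzML file and loop over spectra.
        # "example.mzML" is a placeholder path.
        import pyopenms

        experiment = pyopenms.MSExperiment()               # container for the raw data
        pyopenms.MzMLFile().load("example.mzML", experiment)

        for spectrum in experiment:
            mz, intensity = spectrum.get_peaks()           # NumPy arrays of peak data
            print(spectrum.getMSLevel(), len(mz))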

  18. An Open-Source Sandbox for Increasing the Accessibility of Functional Programming to the Bioinformatics and Scientific Communities.

    PubMed

    Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R; Ellis, Heidi Jc; Hinman, M Lee; Vyas, Jay; Gryk, Michael R

    2012-01-01

    Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adoption of functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, and this is partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming. We present a multi-language source-code repository for software integration and algorithm development, which generally focuses on the fields of machine learning, data processing, and bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org).

  19. Configurations of high-frequency ultrasonics complex vibration systems for packaging in microelectronics.

    PubMed

    Tsujino, Jiromaru; Harada, Yoshiki; Ihara, Shigeru; Kasahara, Kohei; Shimizu, Masanori; Ueoka, Tetsugi

    2004-04-01

    Ultrasonic high-frequency complex vibrations are effective for various ultrasonic high-power applications. Three types of ultrasonic complex vibration systems, each with a welding tip vibrating in an elliptical-to-circular locus for packaging in microelectronics, were studied. The complex vibration sources use (1) a longitudinal-torsional vibration converter with diagonal slits that is driven only by a longitudinal vibration source, (2) a complex transverse vibration rod with several stepped parts that is driven by two longitudinal vibration sources crossed at a right angle and (3) a longitudinal vibration circular disk and three longitudinal transducers that are installed at the circumference of the disk.

  20. STAR FORMATION ACROSS THE W3 COMPLEX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Román-Zúñiga, Carlos G.; Ybarra, Jason E.; Tapia, Mauricio

    We present a multi-wavelength analysis of the history of star formation in the W3 complex. Using deep, near-infrared ground-based images combined with images obtained with Spitzer and Chandra observatories, we identified and classified young embedded sources. We identified the principal clusters in the complex and determined their structure and extension. We constructed extinction-limited samples for five principal clusters and constructed K-band luminosity functions that we compare with those of artificial clusters with varying ages. This analysis provided mean ages and possible age spreads for the clusters. We found that IC 1795, the centermost cluster of the complex, still hosts a large fraction of young sources with circumstellar disks. This indicates that star formation was active in IC 1795 as recently as 2 Myr ago, simultaneous to the star-forming activity in the flanking embedded clusters, W3-Main and W3(OH). A comparison with carbon monoxide emission maps indicates strong velocity gradients in the gas clumps hosting W3-Main and W3(OH) and shows small receding clumps of gas at IC 1795, suggestive of rapid gas removal (faster than the T Tauri timescale) in the cluster-forming regions. We discuss one possible scenario for the progression of cluster formation in the W3 complex. We propose that early processes of gas collapse in the main structure of the complex could have defined the progression of cluster formation across the complex with relatively small age differences from one group to another. However, triggering effects could act as catalysts for enhanced efficiency of formation at a local level, in agreement with previous studies.

  1. A dynamic processes study of PM retention by trees under different wind conditions.

    PubMed

    Xie, Changkun; Kan, Liyan; Guo, Jiankang; Jin, Sijia; Li, Zhigang; Chen, Dan; Li, Xin; Che, Shengquan

    2018-02-01

    Particulate matter (PM) is one of the most serious environmental problems, exacerbating respiratory and vascular illnesses. Plants have the ability to reduce non-point source PM pollution through retention on leaves and branches. Studies of the dynamic processes of PM retention by plants and the mechanisms influencing this process will help to improve the efficiency of urban greening for PM reduction. We examined dynamic processes of PM retention and the major factors influencing PM retention by six trees with different branch structure characteristics in wind tunnel experiments at three different wind speeds. The results showed that the changes in the number of PM particles retained by plant leaves over time were complex dynamic processes for which maximum values could exceed minimum values by over 10 times. The average value of PM measured in multiple periods and situations can be considered a reliable indicator of the ability of the plant to retain PM. The dynamic processes were similar for PM10 and PM2.5. They could be clustered into three groups simulated by continually-rising, inverse U-shaped, and U-shaped polynomial functions, respectively. The processes were the synthetic effect of characteristics such as species, wind speed, period of exposure and their interactions. Continually-rising functions always explained PM retention in species with extremely complex branch structure. Inverse U-shaped processes explained PM retention in species with relatively simple branch structure and gentle wind. The U-shaped processes mainly explained PM retention at high wind speeds and in species with a relatively simple crown. These results indicate that using plants with complex crowns in urban greening and decreasing wind speed in plant communities increases the chance of continually-rising or inverse U-shaped relationships, which have a positive effect in reducing PM pollution. Copyright © 2017 Elsevier Ltd. All rights reserved.
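
    The three dynamic patterns are described as polynomial fits of retained PM against exposure time. The sketch below shows such a fit on made-up data; a negative leading coefficient corresponds to the inverse U-shaped pattern. It is an illustration only, not the study's analysis code.

        # Minimal sketch of fitting a second-order polynomial to a retained-PM time
        # series, as in the "inverse U-shaped" pattern described above. Data are made up.
        import numpy as np

        hours = np.array([0, 2, 4, 6, 8, 10], dtype=float)
        pm_retained = np.array([0.0, 3.1, 4.8, 5.0, 4.1, 2.5])   # hypothetical PM counts

        coeffs = np.polyfit(hours, pm_retained, deg=2)            # a*t^2 + b*t + c
        fitted = np.polyval(coeffs, hours)
        r2 = 1 - np.sum((pm_retained - fitted) ** 2) / np.sum((pm_retained - pm_retained.mean()) ** 2)
        print("coefficients:", coeffs.round(3), "R^2:", round(float(r2), 3))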

  2. Cognitive Complexity of Mathematics Instructional Tasks in a Taiwanese Classroom: An Examination of Task Sources

    ERIC Educational Resources Information Center

    Hsu, Hui-Yu; Silver, Edward A.

    2014-01-01

    We examined geometric calculation with number tasks used within a unit of geometry instruction in a Taiwanese classroom, identifying the source of each task used in classroom instruction and analyzing the cognitive complexity of each task with respect to 2 distinct features: diagram complexity and problem-solving complexity. We found that…

  3. Techniques and instrumentation for the measurement of transient sound energy flux

    NASA Astrophysics Data System (ADS)

    Watkinson, P. S.; Fahy, F. J.

    1983-12-01

    The evaluation of sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production line machinery, furnaces, earth moving machinery and various types of process plants was studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install such machines in quiet, anechoic environments. The potential for sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.

  4. OpenFLUID: an open-source software environment for modelling fluxes in landscapes

    NASA Astrophysics Data System (ADS)

    Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc

    2013-04-01

    Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these processes are mainly physical processes, including hydrological processes, biological processes and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines and developed by different teams. In order to support collaborative work involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture that allows users to i) couple models developed de novo or from existing source code, which are dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multi-scale, spatial heterogeneities and landscape objects connectivity, iii) run and explore simulations in many ways: using the OpenFLUID software interfaces for users (command line interface, graphical user interface), or using external applications such as GNU R through the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, which is based on an open source toolchain, including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers. OpenFLUID has been involved in many research applications, such as modelling of hydrological network transfer, diagnosis and prediction of water quality taking into account human activities, study of the effect of spatial organization on hydrological fluxes, modelling of surface-subsurface water exchanges, … At LISAH research unit, OpenFLUID is the supporting development platform of the MHYDAS model, which is a distributed model for agrosystems (Moussa et al., 2002, Hydrological Processes, 16, 393-412). OpenFLUID web site: http://www.openfluid-project.org

  5. Multiscale analysis of potential fields by a ridge consistency criterion: the reconstruction of the Bishop basement

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Cascone, L.

    2012-01-01

    We use a multiscale approach as a semi-automated interpreting tool of potential fields. The depth to the source and the structural index are estimated in two steps: first the depth to the source, as the intersection of the field ridges (lines built joining the extrema of the field at various altitudes), and secondly the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', in this strategy. The criterion is based on the principle that the structural index estimations on all the ridges converging towards the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields, to obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined to the low-pass filtering properties of the upward continuation, and is therefore a stable process. Before applying our criterion, we carefully studied the errors on upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case, known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to the depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.

  6. Mr-Moose: An advanced SED-fitting tool for heterogeneous multi-wavelength datasets

    NASA Astrophysics Data System (ADS)

    Drouart, G.; Falkendal, T.

    2018-04-01

    We present the public release of Mr-Moose, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous dataset (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, Mr-Moose handles upper-limits during the fitting process in a continuous way, allowing models to become gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength datasets with fully customisable filter/model databases. The complete control of the user is one advantage, which avoids the traditional problems related to the "black box" effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of Python and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially-generated datasets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA and VLA data) in the context of extragalactic SED fitting makes Mr-Moose a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.

  7. MR-MOOSE: an advanced SED-fitting tool for heterogeneous multi-wavelength data sets

    NASA Astrophysics Data System (ADS)

    Drouart, G.; Falkendal, T.

    2018-07-01

    We present the public release of MR-MOOSE, a fitting procedure that is able to perform multi-wavelength and multi-object spectral energy distribution (SED) fitting in a Bayesian framework. This procedure is able to handle a large variety of cases, from an isolated source to blended multi-component sources from a heterogeneous data set (i.e. a range of observation sensitivities and spectral/spatial resolutions). Furthermore, MR-MOOSE handles upper limits during the fitting process in a continuous way allowing models to be gradually less probable as upper limits are approached. The aim is to propose a simple-to-use, yet highly versatile fitting tool for handling increasing source complexity when combining multi-wavelength data sets with fully customisable filter/model data bases. The complete control of the user is one advantage, which avoids the traditional problems related to the `black box' effect, where parameter or model tunings are impossible and can lead to overfitting and/or over-interpretation of the results. Also, while a basic knowledge of PYTHON and statistics is required, the code aims to be sufficiently user-friendly for non-experts. We demonstrate the procedure on three cases: two artificially generated data sets and a previous result from the literature. In particular, the most complex case (inspired by a real source, combining Herschel, ALMA, and VLA data) in the context of extragalactic SED fitting makes MR-MOOSE a particularly attractive SED fitting tool when dealing with partially blended sources, without the need for data deconvolution.
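
    One common way to implement the continuous treatment of upper limits described above is to let detections contribute ordinary Gaussian (chi-square) terms to the log-likelihood while non-detections contribute the integral of the Gaussian up to the quoted limit, so models become gradually less probable as they approach the limit. The sketch below shows that generic technique with hypothetical photometry; it is not the MR-MOOSE implementation.

        # Generic Gaussian log-likelihood with a continuous treatment of upper limits:
        # detections give chi-square terms, non-detections give an error-function term
        # (the probability that the true flux lies below the quoted limit).
        import numpy as np
        from scipy.special import erf

        def log_likelihood(model_flux, obs_flux, obs_err, is_upper_limit):
            logl = 0.0
            for m, f, s, ul in zip(model_flux, obs_flux, obs_err, is_upper_limit):
                if ul:
                    logl += np.log(0.5 * (1.0 + erf((f - m) / (np.sqrt(2.0) * s))) + 1e-300)
                else:
                    logl += -0.5 * ((f - m) / s) ** 2 - np.log(np.sqrt(2.0 * np.pi) * s)
            return logl

        # Hypothetical photometry: two detections and one upper limit.
        print(log_likelihood(model_flux=[1.0, 2.0, 0.5],
                             obs_flux=[1.1, 1.8, 0.9],
                             obs_err=[0.2, 0.3, 0.3],
                             is_upper_limit=[False, False, True]))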

  8. Safe Operations of Unmanned Systems for Reconnaissance in Complex Environments Army Technology Objective (SOURCE ATO)

    DTIC Science & Technology

    2011-04-25

    must adapt its planning to vehicle size, shape, wheelbase, wheel and axle configuration, the specific obstacle-crossing capabilities of the vehicle...scalability of the ANS is a consequence of making each sensing modality capable of performing reasonable perception tasks while allowing a wider...autonomous system design achieves flexibility by exploiting redundant sensing modalities where possible, and by a decision-making process that

  9. Facing climate change in forests and fields: U.S. Forest Service taps into science-management partnerships

    Treesearch

    Amy Daniels; Nancy Shaw; Dave Peterson; Keith Nislow; Monica Tomosy; Mary Rowland

    2014-01-01

    As a growing body of science shows, climate change impacts on wildlife are already profound—from shifting species’ ranges and altering the synchronicity of food sources to changing the availability of water. Such impacts are only expected to increase in the coming decades. As climate change shapes complex, interwoven ecological processes, novel conditions and...

  10. The "I" of the Beholder: A Guided Journey to the Essence of a Child

    ERIC Educational Resources Information Center

    Roeper, Annemarie

    2007-01-01

    In this book, the author describes the complexity of the Self as the source of all human behavior. She will try to outline the structure of the Self, its normal growth and development, and the role of interaction with other living things in this process. Ms. Roeper sees the Self as a unit within us, which includes input from the brain and all…

  11. Trace detection of organic compounds in complex sample matrixes by single photon ionization ion trap mass spectrometry: real-time detection of security-relevant compounds and online analysis of the coffee-roasting process.

    PubMed

    Schramm, Elisabeth; Kürten, Andreas; Hölzer, Jasper; Mitschke, Stefan; Mühlberger, Fabian; Sklorz, Martin; Wieser, Jochen; Ulrich, Andreas; Pütz, Michael; Schulte-Ladbeck, Rasmus; Schultze, Rainer; Curtius, Joachim; Borrmann, Stephan; Zimmermann, Ralf

    2009-06-01

    An in-house-built ion trap mass spectrometer combined with a soft ionization source has been set up and tested. As ionization source, an electron beam pumped vacuum UV (VUV) excimer lamp (EBEL) was used for single-photon ionization. It was shown that soft ionization allows the reduction of fragmentation of the target analytes and the suppression of most matrix components. Therefore, the combination of photon ionization with the tandem mass spectrometry (MS/MS) capability of an ion trap yields a powerful tool for molecular ion peak detection and identification of organic trace compounds in complex matrixes. This setup was successfully tested for two different applications. The first one is the detection of security-relevant substances like explosives, narcotics, and chemical warfare agents. One test substance from each of these groups was chosen and detected successfully with single photon ionization ion trap mass spectrometry (SPI-ITMS) MS/MS measurements. Additionally, first tests were performed, demonstrating that this method is not influenced by matrix compounds. The second field of application is the detection of process gases. Here, exhaust gas from coffee roasting was analyzed in real time, and some of its compounds were identified using MS/MS studies.

  12. Lead concentration and isotope chronology in two coastal environments in Western and South East Asia

    NASA Astrophysics Data System (ADS)

    Carrasco, G. G.; Chen, M.; Boyle, E. A.; Zhao, N.; Nurhati, I. S.; Gevao, B.; al Ghadban, A.; Switzer, A.; Lee, J. M.

    2014-12-01

    Lead is a trace metal that is closely related to anthropogenic activity, mainly via leaded gasoline and coal combustion. The study of lead concentrations and isotopes in seawater, sediments, corals and aerosols allows for a systematic look at its sources and their time evolution in a natural environment. We will discuss results from two projects in Western and South East Asia, regions that have seen dramatic socio-economical changes over the past half-century that may have left environmental signals. These results highlight the usefulness of the method, indicate the degree of complexity of these systems, and point to the need for continuous monitoring of anthropogenic trace metals at the small-to-medium coastal scale to be able to assess the larger scale effects of human activity. On the one hand, coastal Kuwait is heavily influenced by the Shat al-Arab river and shows a clear anthropogenic signature from Kuwait city. A mix of two sources can be tracked through the coral and sediment chronological records, with Pb206/Pb207 ratios (1.202 and 1.151) that approach the suspected source values (1.21 and 1.12) and eliminate the possibility of other sources. Through a wide sediment geographic distribution, the strength of the anthropogenic signature is modulated. On the other hand, Singapore offers a more complex system, where an apparent mix of two sources (extreme isotope ratios 1.215 and ~1.14) also occurs, but where either an unresolved, potentially important third source (isotope ratio ~1.18) or an isotope exchange process must be invoked. The sediment and coral records allow us to track the changes through time; however, there seems to be incongruence with the aerosol isotope record. Further potential sources are currently being explored and will be discussed.

  13. Nestly--a framework for running software with nested parameter choices and aggregating results.

    PubMed

    McCoy, Connor O; Gallagher, Aaron; Hoffman, Noah G; Matsen, Frederick A

    2013-02-01

    The execution of a software application or pipeline using various combinations of parameters and inputs is a common task in bioinformatics. In the absence of a specialized tool to organize, streamline and formalize this process, scientists must frequently write complex scripts to perform these tasks. We present nestly, a Python package to facilitate running tools with nested combinations of parameters and inputs. nestly provides three components. First, a module to build nested directory structures corresponding to choices of parameters. Second, the nestrun script to run a given command using each set of parameter choices. Third, the nestagg script to aggregate results of the individual runs into a CSV file, as well as support for more complex aggregation. We also include a module for easily specifying nested dependencies for the SCons build tool, enabling incremental builds. Source, documentation and tutorial examples are available at http://github.com/fhcrc/nestly. nestly can be installed from the Python Package Index via pip; it is open source (MIT license).
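
    The pattern behind nestly's first component, one directory (with a control file) per combination of nested parameter choices, can be sketched with the standard library alone, as below. The parameter names are hypothetical, and this illustrates the idea rather than the nestly API itself.

        # Generic sketch of building one directory per combination of parameter choices,
        # each containing a JSON control file that downstream scripts can read.
        import itertools, json, os

        parameters = {
            "alignment": ["muscle", "mafft"],     # hypothetical parameter choices
            "seed": [1, 2, 3],
        }

        names = sorted(parameters)
        for values in itertools.product(*(parameters[n] for n in names)):
            choice = dict(zip(names, values))
            run_dir = os.path.join("runs", *(f"{n}-{v}" for n, v in choice.items()))
            os.makedirs(run_dir, exist_ok=True)
            with open(os.path.join(run_dir, "control.json"), "w") as fh:
                json.dump(choice, fh, indent=2)   # each run reads its own control file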

  14. The US regulatory and pharmacopeia response to the global heparin contamination crisis.

    PubMed

    Szajek, Anita Y; Chess, Edward; Johansen, Kristian; Gratzl, Gyöngyi; Gray, Elaine; Keire, David; Linhardt, Robert J; Liu, Jian; Morris, Tina; Mulloy, Barbara; Nasr, Moheb; Shriver, Zachary; Torralba, Pearle; Viskov, Christian; Williams, Roger; Woodcock, Janet; Workman, Wesley; Al-Hakim, Ali

    2016-06-09

    The contamination of the widely used lifesaving anticoagulant drug heparin in 2007 has drawn renewed attention to the challenges that are associated with the characterization, quality control and standardization of complex biological medicines from natural sources. Heparin is a linear, highly sulfated polysaccharide consisting of alternating glucosamine and uronic acid monosaccharide residues. Heparin has been used successfully as an injectable antithrombotic medicine since the 1930s, and its isolation from animal sources (primarily porcine intestine) as well as its manufacturing processes have not changed substantially since its introduction. The 2007 heparin contamination crisis resulted in several deaths in the United States and hundreds of adverse reactions worldwide, revealing the vulnerability of a complex global supply chain to sophisticated adulteration. This Perspective discusses how the US Food and Drug Administration (FDA), the United States Pharmacopeial Convention (USP) and international stakeholders collaborated to redefine quality expectations for heparin, thus making an important natural product better controlled and less susceptible to economically motivated adulteration.

  15. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.

    PubMed

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-21

    To address the computationally intensive and technically complex problem of nonpoint source pollution control, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
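
    The abstract does not spell out the auto-adaptive scheme, but a common way to make a genetic algorithm auto-adaptive is to tie the mutation (or crossover) probability to the fitness spread of the population, so no manual calibration is needed. The sketch below shows one such rule on a toy bit-string population; it is illustrative and not necessarily the scheme used in the study.

        # Generic auto-adaptive mutation rate: the probability rises when the population
        # has converged (small fitness spread) and falls when it is still diverse.
        import random

        def adaptive_mutation_rate(fitnesses, p_min=0.01, p_max=0.2):
            f_max, f_avg = max(fitnesses), sum(fitnesses) / len(fitnesses)
            spread = (f_max - f_avg) / (abs(f_max) + 1e-12)   # ~0 when converged
            return p_min + (p_max - p_min) * (1.0 - min(spread, 1.0))

        def mutate(individual, rate):
            return [gene if random.random() > rate else 1 - gene for gene in individual]

        population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
        fitnesses = [sum(ind) for ind in population]          # toy fitness: count of ones
        rate = adaptive_mutation_rate(fitnesses)
        population = [mutate(ind, rate) for ind in population]
        print("adaptive mutation rate:", round(rate, 3))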

  16. Petrogenesis and tectonics of the Acasta Gneiss Complex derived from integrated petrology and 142Nd and 182W extinct nuclide-geochemistry

    NASA Astrophysics Data System (ADS)

    Reimink, Jesse R.; Chacko, Thomas; Carlson, Richard W.; Shirey, Steven B.; Liu, Jingao; Stern, Richard A.; Bauer, Ann M.; Pearson, D. Graham; Heaman, Larry M.

    2018-07-01

    The timing and mechanisms of continental crust formation represent major outstanding questions in the Earth sciences. Extinct-nuclide radioactive systems offer the potential to evaluate the temporal relations of a variety of differentiation processes on the early Earth, including crust formation. Here, we investigate the whole-rock 182W/184W and 142Nd/144Nd ratios and zircon Δ17O values of a suite of well-studied and lithologically-homogeneous meta-igneous rocks from the Acasta Gneiss Complex, Northwest Territories, Canada, including the oldest-known zircon-bearing rocks on Earth. In the context of previously published geochemical data and petrogenetic models, the new 142Nd/144Nd data indicate that formation of the Hadean-Eoarchean Acasta crust was ultimately derived from variable sources, both in age and composition. Although 4.02 Ga crust was extracted from a nearly bulk-Earth source, heterogeneous μ142Nd signatures indicate that Eoarchean rocks of the Acasta Gneiss Complex were formed by partial melting of hydrated, Hadean-age mafic crust at depths shallower than the garnet stability field. By ∼3.6 Ga, granodioritic-granitic rocks were formed by partial melting of Archean hydrated mafic crust that was melted at greater depth, well into the garnet stability field. Our 182W results indicate that the sources to the Acasta Gneiss Complex had homogeneous, high-μ182W on the order of +10 ppm, a signature ubiquitous in other Eoarchean terranes. No significant deviation from the terrestrial mass fractionation line was found in the triple oxygen isotope (16O-17O-18O) compositions of Acasta zircons, confirming homogeneous oxygen isotope compositions in Earth's mantle by 4.02 Ga.

  17. High-resolution Single Particle Analysis from Electron Cryo-microscopy Images Using SPHIRE

    PubMed Central

    Moriya, Toshio; Saur, Michael; Stabrin, Markus; Merino, Felipe; Voicu, Horatiu; Huang, Zhong; Penczek, Pawel A.; Raunser, Stefan; Gatsogiannis, Christos

    2017-01-01

    SPHIRE (SPARX for High-Resolution Electron Microscopy) is a novel open-source, user-friendly software suite for the semi-automated processing of single particle electron cryo-microscopy (cryo-EM) data. The protocol presented here describes in detail how to obtain a near-atomic resolution structure starting from cryo-EM micrograph movies by guiding users through all steps of the single particle structure determination pipeline. These steps are controlled from the new SPHIRE graphical user interface and require minimum user intervention. Using this protocol, a 3.5 Å structure of TcdA1, a Tc toxin complex from Photorhabdus luminescens, was derived from only 9500 single particles. This streamlined approach will help novice users without extensive processing experience and a priori structural information, to obtain noise-free and unbiased atomic models of their purified macromolecular complexes in their native state. PMID:28570515

  18. Three dimensions of the amyloid hypothesis: time, space and 'wingmen'.

    PubMed

    Musiek, Erik S; Holtzman, David M

    2015-06-01

    The amyloid hypothesis, which has been the predominant framework for research in Alzheimer's disease (AD), has been the source of considerable controversy. The amyloid hypothesis postulates that amyloid-β peptide (Aβ) is the causative agent in AD. It is strongly supported by data from rare autosomal dominant forms of AD. However, the evidence that Aβ causes or contributes to age-associated sporadic AD is more complex and less clear, prompting criticism of the hypothesis. We provide an overview of the major arguments for and against the amyloid hypothesis. We conclude that Aβ likely is the key initiator of a complex pathogenic cascade that causes AD. However, we argue that Aβ acts primarily as a trigger of other downstream processes, particularly tau aggregation, which mediate neurodegeneration. Aβ appears to be necessary, but not sufficient, to cause AD. Its major pathogenic effects may occur very early in the disease process.

  19. Traceability of On-Machine Tool Measurement: A Review.

    PubMed

    Mutilba, Unai; Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor; Yagüe-Fabra, Jose A

    2017-07-11

    Nowadays, errors during the manufacturing process of high value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities or wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding the manufacturing process from being interrupted to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not ensured yet and measurement data is still not fully reliable enough for process control or product validation. The scientific objective is to determine the uncertainty on a machine tool measurement and, therefore, convert it into a machine integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tools, components under measurement and the interactions between both of them. This paper reviews all those uncertainty sources, being mainly focused on those related to the machine tool, either on the process of geometric error assessment of the machine or on the technology employed to probe the measurand.

  20. Barista: A Framework for Concurrent Speech Processing by USC-SAIL

    PubMed Central

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.

    2016-01-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0. PMID:27610047

  1. Barista: A Framework for Concurrent Speech Processing by USC-SAIL.

    PubMed

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G; Narayanan, Shrikanth S

    2014-05-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0.

  2. MEG evidence that the central auditory system simultaneously encodes multiple temporal cues.

    PubMed

    Simpson, Michael I G; Barnes, Gareth R; Johnson, Sam R; Hillebrand, Arjan; Singh, Krish D; Green, Gary G R

    2009-09-01

    Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of simple sinusoidal amplitude modulations. In this study we used magnetoencephalography (MEG) to generate source space current estimates of the steady-state responses to simple one-component amplitude modulations and to a two-component amplitude modulation. A two-component modulation introduces the simplest form of modulation complexity into the waveform; the summation of the two modulation rates introduces a beat-like modulation at the difference frequency between the two modulation rates. We compared the cortical representations of responses to the one-component and two-component modulations. In particular, we show that the temporal complexity in the two-component amplitude modulation stimuli was preserved at the cortical level. The method of stimulus normalization that we used also allows us to interpret these results as evidence that the important feature in sound modulations is the relative depth of one modulation rate with respect to another, rather than the absolute carrier-to-sideband modulation depth. More generally, this may be interpreted as evidence that modulation detection accurately preserves a representation of the modulation envelope. This is an important observation with respect to models of modulation processing, as it suggests that models may need a dynamic processing step to effectively model non-stationary stimuli. We suggest that the classic modulation filterbank model needs to be modified to take these findings into account.
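
    The beat-like modulation at the difference frequency can be verified numerically: summing two modulation rates produces an envelope whose amplitude waxes and wanes at their difference. The sketch below uses illustrative rates (37 and 43 Hz), not the study's stimulus parameters.

        # Numeric illustration: the sum of two amplitude-modulation rates fluctuates
        # at their difference frequency (here 43 - 37 = 6 Hz).
        import numpy as np
        from scipy.signal import hilbert

        fs = 2000.0                          # sample rate (Hz)
        t = np.arange(0, 4.0, 1 / fs)
        f1, f2 = 37.0, 43.0                  # two modulation rates; difference is 6 Hz

        modulator = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
        beat = np.abs(hilbert(modulator))    # slow fluctuation of the combined modulation

        spectrum = np.abs(np.fft.rfft(beat - beat.mean()))
        freqs = np.fft.rfftfreq(len(beat), 1 / fs)
        print("dominant beat frequency (Hz):", freqs[np.argmax(spectrum)])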

  3. Reconceptualizing children's complex discharge with health systems theory: novel integrative review with embedded expert consultation and theory development.

    PubMed

    Noyes, Jane; Brenner, Maria; Fox, Patricia; Guerin, Ashleigh

    2014-05-01

    To report a novel review to develop a health systems model of successful transition of children with complex healthcare needs from hospital to home. Children with complex healthcare needs commonly experience an expensive, ineffectual and prolonged nurse-led discharge process. Children gain no benefit from prolonged hospitalization and are exposed to significant harm. Research to enable intervention development and process evaluation across the entire health system is lacking. Novel mixed-method integrative review informed by health systems theory. Data sources: CINAHL, PsychInfo, EMBASE, PubMed, citation searching, personal contact. Review methods: informed by consultation with experts; English-language studies, opinion/discussion papers reporting research, best practice and experiences of children, parents and healthcare professionals, and purposively selected policies/guidelines from 2002-December 2012 were abstracted using Framework synthesis, followed by iterative theory development. Seven critical factors derived from thirty-four sources across five health system levels explained successful discharge (new programme theory). All seven factors are required in an integrated care pathway, with a dynamic communication loop to facilitate effective discharge (new programme logic). Current health system responses were frequently static and critical success factors were commonly absent, thereby explaining ineffectual discharge. The novel evidence-based model, which reconceptualizes 'discharge' as a highly complex longitudinal health system intervention, makes a significant contribution to global knowledge to drive practice development. Research is required to develop process and outcome measures at different time points in the discharge process and future trials are needed to determine the effectiveness of integrated health system discharge models. © 2013 John Wiley & Sons Ltd.

  4. Combination of electrospray ionization, atmospheric pressure photoionization and laser desorption ionization Fourier transform ion cyclotronic resonance mass spectrometry for the investigation of complex mixtures - Application to the petroleomic analysis of bio-oils.

    PubMed

    Hertzog, Jasmine; Carré, Vincent; Le Brech, Yann; Mackay, Colin Logan; Dufour, Anthony; Mašek, Ondřej; Aubriet, Frédéric

    2017-05-29

    The comprehensive description of complex mixtures such as bio-oils is required to understand and improve the different processes involved during biological, environmental or industrial operation. In this context, we have to consider how different ionization sources can improve a non-targeted approach. Thus, Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) has been coupled to electrospray ionization (ESI), laser desorption ionization (LDI) and atmospheric pressure photoionization (APPI) to characterize an oak pyrolysis bio-oil. Close to 90% of all 4500 compound formulae have been attributed to CxHyOz, with similar oxygen class compound distributions. Nevertheless, their relative abundance with respect to their double bond equivalent (DBE) value has shown significant differences depending on the ion source used. ESI has allowed compounds with low DBE but more oxygen atoms to be ionized. APPI has demonstrated the efficient ionization of less polar compounds (high DBE values and fewer oxygen atoms). The LDI behavior of bio-oils has been considered intermediate in terms of DBE and oxygen amounts, but it has also been demonstrated that a significant part of the features are specifically detected by this ionization method. Thus, the complementarity of three different ionization sources has been successfully demonstrated for the exhaustive characterization, by a petroleomic approach, of a complex mixture. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Production of capsular polysaccharide from Escherichia coli K4 for biotechnological applications.

    PubMed

    Cimini, Donatella; Restaino, Odile Francesca; Catapano, Angela; De Rosa, Mario; Schiraldi, Chiara

    2010-02-01

    The production of industrially relevant microbial polysaccharides has recently gained much interest. The capsular polysaccharide of Escherichia coli K4 is almost identical to chondroitin, a commercially valuable biopolymer that is so far obtained from animal tissues entailing complex and expensive extraction procedures. In the present study, the production of capsular polysaccharide by E. coli K4 was investigated taking into consideration a potential industrial application. Strain physiology was first characterized in shake flask experiments to determine the optimal culture conditions for the growth of the microorganism and correlate it to polysaccharide production. Results show that the concentration of carbon source greatly affects polysaccharide production, while the complex nitrogen source is mainly responsible for the build up of biomass. Small-scale batch processes were performed to further evaluate the effect of the initial carbon source concentration and of growth temperatures on polysaccharide production, finally leading to the establishment of the medium to use in following fermentation experiments on a bigger scale. The fed-batch strategy next developed on a 2-L reactor resulted in a maximum cell density of 56 g(cww)/L and a titre of capsular polysaccharide equal to 1.4 g/L, approximately ten- and fivefold higher than results obtained in shake flask and 2-L batch experiments, respectively. The release kinetics of K4 polysaccharide into the medium were also explored to gain insight into the mechanisms underlying a complex aspect of the strain physiology.

  6. Time-reversal in geophysics: the key for imaging a seismic source, generating a virtual source or imaging with no source (Invited)

    NASA Astrophysics Data System (ADS)

    Tourin, A.; Fink, M.

    2010-12-01

    The concept of time-reversal (TR) focusing was introduced in acoustics by Mathias Fink in the early nineties: a pulsed wave is sent from a source, propagates in an unknown medium and is captured at a transducer array termed a “Time Reversal Mirror (TRM)”. Then the waveforms received at each transducer are flipped in time and sent back, resulting in a wave converging at the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by observations that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. We show that the time-reversal concept is also at the heart of very active research fields in seismology and applied geophysics: imaging of seismic sources, passive imaging based on noise correlations, seismic interferometry, monitoring of CO2 storage using the virtual source method. All these methods can indeed be viewed in a unified framework as an application of the so-called time-reversal cavity approach. That approach uses the fact that a wave field can be predicted at any location inside a volume (without source) from the knowledge of both the field and its normal derivative on the surrounding surface S, which for acoustic scalar waves is mathematically expressed in the Helmholtz-Kirchhoff (HK) integral. Thus in the first step of an ideal TR process, the field coming from a point-like source as well as its normal derivative should be measured on S. In a second step, the initial source is removed and monopole and dipole sources reemit the time reversal of the components measured in the first step. Instead of directly computing the resulting HK integral along S, physical arguments can be used to straightforwardly predict that the time-reversed field in the cavity can be written as the difference of advanced and retarded Green’s functions centred on the initial source position. This result is somewhat disappointing because it means that reversing a field using a closed TRM is not enough to realize a perfect time-reversal experiment. In practical applications, the converging wave is always followed by a diverging one (see figure). However, we will show that this result is of great importance since it furnishes the basis for imaging methods in media with no active source. We will focus in particular on the virtual source method, showing that it can be used for implementing the DORT method (Decomposition of the time reversal operator) in a passive way. The passive DORT method could be interesting for monitoring changes in a complex scattering medium, for example in the context of CO2 storage. (Figure: time-reversal imaging applied to the giant Sumatra earthquake.)
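
    For reference, the two results summarized above can be written compactly for the acoustic scalar case. The notation below is ours and follows the standard textbook forms (up to sign and normal-orientation conventions); it is not taken from the authors' manuscript.

        % Helmholtz-Kirchhoff integral: the field anywhere inside the source-free volume
        % bounded by S is determined by the field and its normal derivative on S.
        \[
          p(\mathbf{r}) = \oint_{S} \left[ G(\mathbf{r},\mathbf{r}_s)\,
              \frac{\partial p(\mathbf{r}_s)}{\partial n}
              - p(\mathbf{r}_s)\, \frac{\partial G(\mathbf{r},\mathbf{r}_s)}{\partial n}
            \right] \mathrm{d}S .
        \]
        % Ideal time-reversal cavity: re-emitting the time-reversed field on S produces,
        % inside the cavity, the difference of a converging (advanced) and a diverging
        % (retarded) wave centred on the original source position r_0.
        \[
          p_{\mathrm{TR}}(\mathbf{r},t) \propto
            G(\mathbf{r},\mathbf{r}_0;-t) - G(\mathbf{r},\mathbf{r}_0;t) .
        \]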

  7. Software to Facilitate Remote Sensing Data Access for Disease Early Warning Systems

    PubMed Central

    Liu, Yi; Hu, Jiameng; Snell-Feikema, Isaiah; VanBemmel, Michael S.; Lamsal, Aashis; Wimberly, Michael C.

    2015-01-01

    Satellite remote sensing produces an abundance of environmental data that can be used in the study of human health. To support the development of early warning systems for mosquito-borne diseases, we developed an open-source, client-based software application to enable the Epidemiological Applications of Spatial Technologies (EASTWeb). Two major design decisions were full automation of the discovery, retrieval and processing of remote sensing data from multiple sources, and making the system easily modifiable in response to changes in data availability and user needs. Key innovations that helped to achieve these goals were the implementation of a software framework for data downloading and the design of a scheduler that tracks the complex dependencies among multiple data processing tasks and makes the system resilient to external errors. EASTWeb has been successfully applied to support forecasting of West Nile virus outbreaks in the United States and malaria epidemics in the Ethiopian highlands. PMID:26644779
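
    The scheduling pattern highlighted above (run tasks in dependency order and retry those that fail for external reasons, such as a download timing out) can be sketched generically as below. The task names are hypothetical and this is not EASTWeb's implementation, which the abstract describes only at the design level.

        # Generic sketch: topological ordering of dependent tasks plus retries for
        # external failures. Requires Python 3.9+ for graphlib.
        from graphlib import TopologicalSorter

        def run_with_retries(task, func, max_retries=3):
            for attempt in range(1, max_retries + 1):
                try:
                    func()
                    return True
                except Exception as exc:          # external error: log and retry
                    print(f"{task}: attempt {attempt} failed ({exc})")
            return False

        # Hypothetical pipeline: download -> reproject -> summarize
        tasks = {"reproject": {"download"}, "summarize": {"reproject"}, "download": set()}
        actions = {name: (lambda n=name: print(f"running {n}")) for name in tasks}

        for task in TopologicalSorter(tasks).static_order():
            if not run_with_retries(task, actions[task]):
                print(f"{task} failed permanently; dependent tasks skipped")
                break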

  8. Remote Sensing and Modeling of Landslides: Detection, Monitoring and Risk Evaluation

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Fukuoka, Hiroshi

    2012-01-01

    Landslides are one of the most pervasive hazards in the world, resulting in more fatalities and economic damage than is generally recognized. Occurring over an extensive range of lithologies, morphologies, hydrologies, and climates, mass movements can be triggered by intense or prolonged rainfall, seismicity, freeze/thaw processes, and anthropogenic activities, among other factors. The location, size, and timing of these processes are characteristically difficult to predict and assess because of their localized spatial scales, distribution, and complex interactions between rainfall infiltration, hydromechanical properties of the soil, and the underlying surface composition. However, the increased availability, accessibility, and resolution of remote sensing data offer a new opportunity to explore issues of landslide susceptibility, hazard, and risk over a variety of spatial scales. This special issue presents a series of papers that investigate the sources, behavior, and impacts of different mass movement types using a diverse set of data sources and evaluation methodologies.

  9. The Rhythm of Fairall 9. I. Observing the Spectral Variability With XMM-Newton and NuSTAR

    NASA Technical Reports Server (NTRS)

    Lohfink, A. M.; Reynolds, S. C.; Pinto, C.; Alston, W.; Boggs, S. E.; Christensen, F. E.; Craig, W. W.; Fabian, A.C; Hailey, C. J.; Harrison, F. A.

    2016-01-01

    We present a multi-epoch X-ray spectral analysis of the Seyfert 1 galaxy Fairall 9. Our analysis shows that Fairall 9 displays unique spectral variability in that its ratio residuals to a simple absorbed power law in the 0.5-10 keV band remain constant with time in spite of large variations in flux. This behavior implies an unchanging source geometry and the same emission processes continuously at work at the timescale probed. With the constraints from NuSTAR on the broad-band spectral shape, it is clear that the soft excess in this source is a superposition of two different processes, one being blurred ionized reflection in the innermost parts of the accretion disk, and the other a continuum component such as a spatially distinct Comptonizing region. Alternatively, a more complex primary Comptonization component together with blurred ionized reflection could be responsible.

  10. The biogeochemistry of anchialine caves: Progress and possibilities

    USGS Publications Warehouse

    Pohlman, John W.

    2011-01-01

    Recent investigations of anchialine caves and sinkholes have identified complex food webs dependent on detrital and, in some cases, chemosynthetically produced organic matter. Chemosynthetic microbes in anchialine systems obtain energy from reduced compounds produced during organic matter degradation (e.g., sulfide, ammonium, and methane), similar to what occurs in deep ocean cold seeps and mud volcanoes, but distinct from dominant processes operating at hydrothermal vents and sulfurous mineral caves where the primary energy source is mantle derived. This review includes case studies from both anchialine and non-anchialine habitats, where evidence for in situ chemosynthetic production of organic matter and its subsequent transfer to higher trophic level metazoans is documented. The energy sources and pathways identified are synthesized to develop conceptual models for elemental cycles and energy cascades that occur within oligotrophic and eutrophic anchialine caves. Strategies and techniques for testing the hypothesis of chemosynthesis as an active process in anchialine caves are also suggested.

  11. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
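
    The direct lighting term discussed above is commonly estimated by Monte Carlo sampling of points on the emitting sources. The sketch below gives a minimal area-sampling estimator for one diffuse surface point and a single quad light, ignoring occlusion; the scene values are a toy setup, and the code does not reproduce the sample generation method developed in the work above.

        # Monte Carlo area-sampling estimate of direct lighting at a diffuse point
        # from one quad light, with no visibility (occlusion) test.
        import math, random

        def sub(a, b): return [a[i] - b[i] for i in range(3)]
        def dot(a, b): return sum(a[i] * b[i] for i in range(3))
        def cross(a, b): return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]
        def norm(a):
            l = math.sqrt(dot(a, a)); return [x / l for x in a]

        def estimate_direct(point, normal, corner, edge_u, edge_v, radiance, albedo, samples=256):
            c = cross(edge_u, edge_v)
            area = math.sqrt(dot(c, c))
            light_normal = [x / area for x in c]
            total = 0.0
            for _ in range(samples):
                u, v = random.random(), random.random()        # uniform sample on the light
                sample = [corner[i] + u * edge_u[i] + v * edge_v[i] for i in range(3)]
                to_light = sub(sample, point)
                dist2 = dot(to_light, to_light)
                w = norm(to_light)                             # unit direction to the light
                cos_surf = max(dot(normal, w), 0.0)
                cos_light = abs(dot(light_normal, w))          # two-sided emitter, for simplicity
                total += radiance * cos_surf * cos_light / dist2
            return (albedo / math.pi) * area * total / samples # pdf of area sampling is 1/area

        # Toy scene: a 1x1 light one unit above the shaded point.
        print(estimate_direct(point=[0, 0, 0], normal=[0, 0, 1],
                              corner=[-0.5, -0.5, 1.0], edge_u=[1, 0, 0], edge_v=[0, 1, 0],
                              radiance=5.0, albedo=0.7))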

  12. Hydrogen Isotopes in Amino Acids and Soils Offer New Potential to Study Complex Processes

    NASA Astrophysics Data System (ADS)

    Fogel, M. L.; Newsome, S. D.; Williams, E. K.; Bradley, C. J.; Griffin, P.; Nakamoto, B. J.

    2016-12-01

    Hydrogen isotopes have been analyzed extensively in the earth and biogeosciences to trace water through various environmental systems. The majority of the measurements have been made on water in rocks and minerals (inorganic) or non-exchangeable H in lipids (organic), important biomarkers that represent a small fraction of the organic molecules synthesized by living organisms. Our lab has been investigating hydrogen isotopes in amino acids and complex soil organic matter, which have traditionally been thought to be too complex to interpret owing to complications from potentially exchangeable hydrogen. For the amino acids, we show how hydrogen in amino acids originates from two sources, food and water, and demonstrate that hydrogen isotopes can be routed directly between organisms. Amino acid hydrogen isotopes may unravel cycling in extremophiles in order to discover novel biochemical pathways central to the organism. For soil organic matter, recent approaches to understanding the origin of soil organic matter are pointing towards root exudates along with microbial biomass as the source, rather than aboveground leaf litter. Having an isotope tracer in very complex, potentially exchangeable organic matter can be handled with careful experimentation. Although no new instrumentation is being used per se, extension of classes of organic matter to isotope measurements has potential to open up new doors for understanding organic matter cycling on earth and in planetary materials.

  13. Use of collagen hydrolysate as a complex nitrogen source for the synthesis of penicillin by Penicillium chrysogenum.

    PubMed

    Leonhartsberger, S; Lafferty, R M; Korneti, L

    1993-09-01

    Optimal conditions for both biomass formation and penicillin synthesis by a strain of Penicillium chrysogenum were determined when using a collagen-derived nitrogen source. Preliminary investigations were carried out in shaken-flask cultures employing a planned experimental program termed the Graeco-Latin square technique (Auden et al., 1967). It was initially determined that up to 30% of a conventional complex nitrogen source such as cottonseed meal could be replaced by the collagen-derived nitrogen source without decreasing productivity with respect to the penicillin yield. In pilot-scale experiments using a 30 l stirred-tank bioreactor, higher penicillin yields were obtained when 70% of the conventional complex nitrogen source in the form of cottonseed meal was replaced by the collagen hydrolysate. Furthermore, the maximum rate of penicillin synthesis continued over a longer period when using collagen hydrolysate as a complex nitrogen source. Penicillin synthesis rates were determined using linear regression.

  14. The ALFALFA Extragalactic Catalog and Data Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Haynes, Martha P.; Giovanelli, Riccardo; ALFALFA Team

    2018-06-01

    The Arecibo Legacy Fast ALFA 21cm HI Survey has reached completion. The observations and data are used by team members and the astronomical community in a variety of scientific initiatives involving gas-rich galaxies, cluster environments, and studies of low-redshift cosmology. The survey covers nearly 7000 square degrees of high galactic latitude sky visible from Arecibo, Puerto Rico, and comprises ~4400 hours of observations from 2005 to 2011. We present the extragalactic HI source catalog of ~31,000 detections, their measured properties, and associated derived parameters. The observations were carefully reduced using a custom-made data reduction pipeline and interface. Team members interacted with this pipeline through observation planning, calibration, imaging, source extraction, and cataloging. We describe this processing workflow as it pertains to the complexities of single-dish multi-feed data reduction, as well as known caveats of the source catalog and spectra, for use in future astronomical studies and analysis. The ALFALFA team at Cornell has been supported by NSF grants AST-0607007, AST-1107390 and AST-1714828 and by grants from the Brinson Foundation.
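
    The abstract does not spell out the extraction algorithm, so the sketch below only illustrates the flavor of one pipeline stage on a single 1-D spectrum: baseline removal followed by thresholded peak finding. The function name, thresholds, and synthetic spectrum are assumptions for illustration, not the ALFALFA code.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def extract_candidates(velocity, flux, poly_order=3, min_snr=5.0):
        """Illustrative single-spectrum step: fit and subtract a low-order
        polynomial baseline, then flag peaks above an SNR threshold."""
        x = (velocity - velocity.mean()) / velocity.std()   # rescale for a stable fit
        baseline = np.polyval(np.polyfit(x, flux, poly_order), x)
        residual = flux - baseline
        noise = 1.4826 * np.median(np.abs(residual - np.median(residual)))  # robust sigma
        peaks, _ = find_peaks(residual, height=min_snr * noise, distance=25)
        return [(float(velocity[i]), float(residual[i] / noise)) for i in peaks]

    # Usage with a synthetic spectrum: sloped baseline, noise, one Gaussian "source".
    v = np.linspace(0.0, 18000.0, 4096)                     # heliocentric velocity, km/s
    rng = np.random.default_rng(0)
    flux = 0.5 * v / 18000.0 + rng.normal(0.0, 1.0, v.size)
    flux += 8.0 * np.exp(-0.5 * ((v - 5000.0) / 80.0) ** 2)
    print(extract_candidates(v, flux))
    ```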

  15. Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease

    PubMed Central

    Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.

    1998-01-01

    The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this markup process is time-consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.

  16. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    NASA Astrophysics Data System (ADS)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect laboratory for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of the various geophysical monitoring networks, and particularly the rapid geodynamics, which clearly express several seismotectonic processes. We present here the model components and the procedures adopted for defining the seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specific to this volcanic area, which has been implemented into a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, combined with 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps. These maps can be relevant for the retrofitting of the existing building stock and for driving risk-reduction interventions. These analyses do not account for regional M > 6 seismogenic sources, which dominate the hazard over long return times (≥ 500 years).
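
    As a worked illustration of the frequency-magnitude analysis mentioned above (a generic calculation, not the FiSH tool itself), the sketch below fits a Gutenberg-Richter b-value with the Aki maximum-likelihood estimator and converts the fitted relation into a mean recurrence time for a target magnitude; the catalogue, completeness magnitude, and target magnitude are hypothetical.

    ```python
    import numpy as np

    def gutenberg_richter_recurrence(mags, years, m_min, m_target):
        """Fit log10 N(>=M) = a - b*M to a catalogue complete above m_min
        (Aki maximum-likelihood b-value), then return the implied mean
        recurrence time of events with magnitude >= m_target."""
        m = np.asarray(mags, float)
        m = m[m >= m_min]
        b = np.log10(np.e) / (m.mean() - m_min)        # Aki (1965) estimator
        rate_min = m.size / years                      # events/yr with M >= m_min
        a = np.log10(rate_min) + b * m_min
        rate_target = 10.0 ** (a - b * m_target)       # events/yr with M >= m_target
        return b, 1.0 / rate_target

    # Usage with a synthetic catalogue: 400 events in 300 yr, b ~ 1 by construction.
    rng = np.random.default_rng(1)
    catalogue = 3.0 + rng.exponential(np.log10(np.e), size=400)
    b_value, t_rec = gutenberg_richter_recurrence(catalogue, years=300,
                                                  m_min=3.0, m_target=5.0)
    print(f"b = {b_value:.2f}, mean recurrence of M >= 5 events: {t_rec:.0f} yr")
    ```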

  17. Mission informed needed information: discoverable, available sensing sources (MINI-DASS): the operators and process flows the magic rabbits must negotiate

    NASA Astrophysics Data System (ADS)

    Kolodny, Michael A.

    2017-05-01

    Today's battlefield space is extremely complex, dealing with an enemy that is neither well-defined nor well-understood. Adversaries are comprised of widely distributed, loosely networked groups engaging in nefarious activities. Situational understanding is needed by decision makers; understanding of adversarial capabilities and intent is essential. The information needed at any time depends on the mission/task at hand. Information sources potentially providing mission-relevant information are disparate and numerous; they include sensors, social networks, fusion engines, the internet, etc. Management of these multi-dimensional informational sources is critical. This paper presents a new approach being undertaken to answer the challenge of enhancing battlefield understanding by optimally matching available informational sources (means) to required missions/tasks, as well as determining the "goodness" of the information acquired in meeting the capabilities needed. Requirements are usually expressed in terms of a presumed technology solution (e.g., imagery). A metaphor of the "magic rabbits" was conceived to remove presumed technology solutions from requirements by declaring the "required" technology obsolete. Instead, intelligent "magic rabbits" are used to provide the needed information. The question then becomes: "WHAT INFORMATION DO YOU NEED THE RABBITS TO PROVIDE YOU?" This paper describes the resulting approach, called Mission-Informed Needed Information - Discoverable, Available Sensing Sources (MINI-DASS), which designs a process that builds information acquisition missions and determines what the "magic rabbits" need to provide in a manner that is machine understandable. Also described are the Missions and Means Framework (MMF) model used, the process flow utilized, the approach to developing an ontology of information source means, and the approach for determining the value of the information acquired.

  18. Consistent Simulation Framework for Efficient Mass Discharge and Source Depletion Time Predictions of DNAPL Contaminants in Heterogeneous Aquifers Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Koch, J.

    2014-12-01

    Predicting DNAPL fate and transport in heterogeneous aquifers is challenging and subject to an uncertainty that needs to be quantified. Models for this task need to be equipped with an accurate source zone description, i.e., the distribution of mass of all partitioning phases (DNAPL, water, and soil) in all possible states ((im)mobile, dissolved, and sorbed), mass-transfer algorithms, and the simulation of transport processes in the groundwater. Such detailed models tend to be computationally cumbersome when used for uncertainty quantification. Therefore, a selective choice of the relevant model states, processes, and scales is both sensitive and indispensable. We investigate two questions: what is a meaningful level of model complexity, and how can an efficient model framework be obtained that is still physically and statistically consistent? In our proposed model, aquifer parameters and the contaminant source architecture are conceptualized jointly as random space functions. The governing processes are simulated in a three-dimensional, highly resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. We apply a stochastic percolation approach as an emulator to simulate contaminant source formation, a random walk particle tracking method to simulate DNAPL dissolution and solute transport within the aqueous phase, and a quasi-steady-state approach to solve for DNAPL depletion times. Using this novel model framework, we test whether and to what degree the desired model predictions are sensitive to simplifications often found in the literature. In doing so, we identify aquifer heterogeneity, groundwater flow irregularity, uncertain and physically based contaminant source zones, and their mutual interlinkages as indispensable components of a sound model framework.
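
    To make the random walk particle tracking component concrete, here is a deliberately simplified 1-D sketch (advection plus dispersion only, with none of the DNAPL dissolution, sorption, or source-architecture coupling of the actual framework); the parameter values and function name are illustrative assumptions.

    ```python
    import numpy as np

    def random_walk_breakthrough(n_particles=5000, v=0.5, dispersivity=0.05,
                                 x_source=0.0, x_control=10.0, dt=0.1, t_max=200.0):
        """Minimal random walk particle tracking: advect dissolved-phase particles
        at velocity v, add a dispersive step with D = dispersivity * v, and record
        arrival times at a downstream control plane."""
        rng = np.random.default_rng(42)
        x = np.full(n_particles, x_source)
        arrival = np.full(n_particles, np.nan)
        t = 0.0
        while t < t_max and np.isnan(arrival).any():
            active = np.isnan(arrival)
            step = v * dt + np.sqrt(2.0 * dispersivity * v * dt) * \
                   rng.standard_normal(active.sum())
            x[active] += step
            t += dt
            arrived = active & (x >= x_control)
            arrival[arrived] = t
        return arrival

    # Usage: the median arrival time approximates the advective travel time L/v = 20.
    times = random_walk_breakthrough()
    print(np.nanmedian(times))
    ```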

  19. Supercritical synthesis of biodiesel.

    PubMed

    Bernal, Juana M; Lozano, Pedro; García-Verdugo, Eduardo; Burguete, M Isabel; Sánchez-Gómez, Gregorio; López-López, Gregorio; Pucheault, Mathieu; Vaultier, Michel; Luis, Santiago V

    2012-07-23

    The synthesis of biodiesel fuel from lipids (vegetable oils and animal fats) has gained in importance as a possible source of renewable non-fossil energy in an attempt to reduce our dependence on petroleum-based fuels. The catalytic processes commonly used for the production of biodiesel fuel present a series of limitations and drawbacks, among them the high energy consumption required for complex purification operations and undesirable side reactions. Supercritical fluid (SCF) technologies offer an interesting alternative to conventional processes for preparing biodiesel. This review highlights the advances, advantages, drawbacks and new tendencies involved in the use of supercritical fluids (SCFs) for biodiesel synthesis.

  20. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
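
    The point that calibration can be viewed as estimation can be shown with a toy example: below, a structural model's parameters are chosen to minimize the squared discrepancy between model outputs and data (ordinary least squares via a generic optimizer). The exponential model, parameter values, and synthetic data are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy structural model: exponential decline y(t) = y0 * exp(-k * t).
    def model(params, t):
        y0, k = params
        return y0 * np.exp(-k * t)

    def calibrate(t_obs, y_obs, start=(1.0, 0.1)):
        """Calibration treated as estimation: choose parameters that minimize the
        squared discrepancy between model outputs and observed data."""
        objective = lambda p: np.sum((model(p, t_obs) - y_obs) ** 2)
        fit = minimize(objective, x0=np.asarray(start), method="Nelder-Mead")
        return fit.x, fit.fun

    # Usage: noisy synthetic observations from known parameters (y0=10, k=0.3).
    rng = np.random.default_rng(7)
    t = np.linspace(0, 10, 25)
    y = 10.0 * np.exp(-0.3 * t) + rng.normal(0, 0.2, t.size)
    params, sse = calibrate(t, y, start=(5.0, 0.5))
    print(params)   # should recover roughly (10, 0.3)
    ```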

  1. Seismic Sources for the Territory of Georgia

    NASA Astrophysics Data System (ADS)

    Tsereteli, N. S.; Varazanashvili, O.

    2011-12-01

    The southern Caucasus is an earthquake-prone region where devastating earthquakes have repeatedly caused significant loss of lives, infrastructure and buildings. The high geodynamic activity of the region, expressed in both seismic and aseismic deformations, is conditioned by the still-ongoing convergence of lithospheric plates and the northward propagation of the Afro-Arabian continental block at a rate of several cm/year. The geometry of tectonic deformations in the region is largely determined by the wedge-shaped rigid Arabian block intensively indented into the relatively mobile Middle East-Caucasian region. Georgia is a partner in the ongoing regional project EMME, whose main objective is the uniform calculation of earthquake hazard to high standards. One approach used in the project is probabilistic seismic hazard assessment (PSHA). In this approach the first required parameter is the definition of seismic source zones. Seismic sources can be either faults or area sources. Seismoactive structures of Georgia are identified mainly on the basis of the correlation between neotectonic structures of the region and earthquakes. The requirements of modern PSHA software regarding fault geometry are very demanding; as our knowledge of active fault geometry is not sufficient, area sources were used. Seismic sources are defined as zones characterized by more or less uniform seismicity. Knowledge of the processes occurring deep within the Earth is poor because of the difficulty of direct measurement. From this point of view, the reliable data obtained from earthquake fault plane solutions are uniquely valuable for understanding the current tectonic life of the investigated area. There are two methods for the identification of seismic sources. The first is the seismotectonic approach, based on the identification of extensive homogeneous seismic sources (SS) with the definition of the probability of occurrence of a maximum earthquake Mmax. In the second method, seismic sources are identified on the basis of structural geology, seismicity parameters and seismotectonics. We used this latter approach. To achieve this it was necessary to solve the following problems: to calculate the parameters of seismotectonic deformation; to reveal regularities in the character of earthquake fault plane solutions; and to use the obtained regularities to develop principles for establishing borders between various hierarchical and scale levels of seismic deformation fields and to give them a geological interpretation. Three-dimensional matching of active faults, with their real geometrical dimensions, against earthquake sources has been investigated. Finally, each zone has been defined by its geometry, magnitude-frequency parameters, maximum magnitude, and depth distribution, as well as the modern dynamical characteristics widely used for complex processes.

  2. Lithological Influences on Occurrence of High-Fluoride Waters in The Central Kenya Rift

    NASA Astrophysics Data System (ADS)

    Olaka, L. A.; Musolff, A.; Mulch, A.; Olago, D.; Odada, E. O.

    2013-12-01

    Within the East African rift, groundwater recharge results from the complex interplay of geology, land cover, geomorphology, climate and ongoing volcano-tectonic processes across a broad range of spatial and temporal scales. The interrelationships between these factors create complex patterns of water availability, reliability and quality. The hydrochemical evolution of the waters is further complicated by the different climatic regimes and geothermal processes operating in this area. High-fluoride waters within the rift have been reported by only a few studies, yet dental fluorosis is widespread among the inhabitants of the rift. The natural sources of fluoride in waters can be the weathering of fluorine-bearing minerals in rocks, or volcanic and fumarolic activity. Fluoride concentration in water depends on a number of factors including pH, temperature, duration of water-rock contact and geochemical processes. Knowledge of the sources and dispersion of fluoride in both surface water and groundwater within the central Kenya rift, and of seasonal variations between wet and dry seasons, is still poor. The central Kenya rift is marked by active tectonics, volcanic activity and fumarolic activity; the rocks are mainly volcanic: rhyolites, tuffs, basalts, phonolites, ashes and agglomerates, some of which are highly fractured. Major NW-SE faults bound the rift escarpment, while the rift floor is marked by N-S striking faults. We combine petrographic, hydrochemical and structural information to determine the sources and enrichment pathways of high-fluoride waters within the Naivasha catchment. A total of 120 water samples, covering both the dry season (January-February 2012) and the post-wet season (June-July 2013), were collected from springs, rivers, lakes, hand-dug wells, fumaroles and boreholes within the Naivasha catchment and analysed for fluoride, physicochemical parameters and stable isotopes (δ2H, δ18O) in order to determine the origin and evolution of the waters. Additionally, 30 soil and rock samples were collected and analysed for fluoride, and the rock samples were subjected to petrographic investigation and X-ray diffraction. The fluoride levels in surface and groundwater for the dry season range from 0.019 to 50.14 mg/L, on average above the WHO permissible limit. High fluoride occurs both in the lake and in groundwater. Preliminary petrographic studies show considerable fluoride in micas. The study is ongoing and will present the relative abundances of fluoride in the lithological sources and the fluoride enrichment pathways of groundwater within the central Kenya rift.

  3. ODI - Portal, Pipeline, and Archive (ODI-PPA): a web-based astronomical compute archive, visualization, and analysis service

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Harbeck, Daniel R.; Boroson, Todd; Liu, Wilson; Kotulla, Ralf; Shaw, Richard; Henschel, Robert; Rajagopal, Jayadev; Stobie, Elizabeth; Knezek, Patricia; Martin, R. Pierre; Archbold, Kevin

    2014-07-01

    The One Degree Imager-Portal, Pipeline, and Archive (ODI-PPA) is a web science gateway that provides astronomers a modern web interface that acts as a single point of access to their data, and rich computational and visualization capabilities. Its goal is to support scientists in handling complex data sets, and to enhance WIYN Observatory's scientific productivity beyond data acquisition on its 3.5m telescope. ODI-PPA is designed, with periodic user feedback, to be a compute archive that has built-in frameworks including: (1) Collections that allow an astronomer to create logical collations of data products intended for publication, further research, instructional purposes, or to execute data processing tasks (2) Image Explorer and Source Explorer, which together enable real-time interactive visual analysis of massive astronomical data products within an HTML5 capable web browser, and overlaid standard catalog and Source Extractor-generated source markers (3) Workflow framework which enables rapid integration of data processing pipelines on an associated compute cluster and users to request such pipelines to be executed on their data via custom user interfaces. ODI-PPA is made up of several light-weight services connected by a message bus; the web portal built using Twitter/Bootstrap, AngularJS and jQuery JavaScript libraries, and backend services written in PHP (using the Zend framework) and Python; it leverages supercomputing and storage resources at Indiana University. ODI-PPA is designed to be reconfigurable for use in other science domains with large and complex datasets, including an ongoing offshoot project for electron microscopy data.

  4. Imaging ultrafast excited state pathways in transition metal complexes by X-ray transient absorption and scattering using X-ray free electron laser source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Lin X.; Shelby, Megan L.; Lestrange, Patrick J.

    2016-01-01

    This report will describe our recent studies of transition metal complex structural dynamics on the fs and ps time scales using an X-ray free electron laser source, the Linac Coherent Light Source (LCLS). Ultrafast XANES spectra at the Ni K-edge of nickel(II) tetramesitylporphyrin (NiTMP) were successfully measured for the optically excited state at timescales from 100 fs to 50 ps, providing insight into its sub-ps electronic and structural relaxation processes. Importantly, a transient reduced state Ni(I) (π, 3dx2-y2) electronic state is captured through the interpretation of a short-lived excited state absorption on the low-energy shoulder of the edge, which is aided by the computation of X-ray transitions for postulated excited electronic states. The observed and computed inner shell to valence orbital transition energies demonstrate and quantify the influence of electronic configuration on specific metal orbital energies. A strong influence of the valence orbital occupation on the inner shell orbital energies indicates that one should not use the transition energy from 1s to other orbitals to draw conclusions about the d-orbital energies. For photocatalysis, a transient electronic configuration could influence d-orbital energies by up to a few eV, and any attempt to steer the reaction pathway should account for this to ensure that external energies can be used optimally in driving desirable processes. NiTMP structural evolution and the influence of the porphyrin macrocycle conformation on relaxation kinetics can likewise be inferred from this study.

  5. Processing lunar soils for oxygen and other materials

    NASA Technical Reports Server (NTRS)

    Knudsen, Christian W.; Gibson, Michael A.

    1992-01-01

    Two types of lunar materials are excellent candidates for lunar oxygen production: ilmenite and silicates such as anorthite. Both can be mined at the lunar surface, occurring in soils, breccias, and basalts. Because silicates are considerably more abundant than ilmenite, they may be preferred as source materials. Depending on the processing method chosen for oxygen production and the feedstock material, various useful metals and bulk materials can be produced as byproducts. Available processing techniques include hydrogen reduction of ilmenite and electrochemical and chemical reduction of silicates. Processes in these categories are generally in preliminary development stages and need significant research and development support to carry them to practical deployment, particularly as a lunar-based operation. The goal of beginning lunar processing operations by 2010 requires that planning and research and development emphasize the simplest processing schemes. However, more complex schemes that now appear to present difficult technical challenges may offer more valuable metal byproducts later. While they require more time and effort to perfect, the more complex or difficult schemes may provide important processing and product improvements with which to extend and elaborate the initial lunar processing facilities. A balanced R&D program should take this into account. The following topics are discussed: (1) ilmenite--semi-continuous process; (2) ilmenite--continuous fluid-bed reduction; (3) utilization of spent ilmenite to produce bulk materials; (4) silicates--electrochemical reduction; and (5) silicates--chemical reduction.

  6. [The heuristics of reaching a diagnosis].

    PubMed

    Wainstein, Eduardo

    2009-12-01

    Making a diagnosis in medicine is a complex process in which many cognitive and psychological issues are involved. After the first encounter with the patient, an unconscious process ensues to suspect the presence of a particular disease. Usually, complementary tests are requested to confirm the clinical suspicion. The interpretation of the requested tests can be biased by the clinical diagnosis that was considered in the first encounter with the patient. Awareness of these sources of error is essential in the interpretation of the findings that will eventually lead to a final diagnosis. This article discusses some aspects of the heuristics involved in the assignment of prior probabilities and provides a brief review of current concepts of the reasoning process.

  7. Eliciting design patterns for e-learning systems

    NASA Astrophysics Data System (ADS)

    Retalis, Symeon; Georgiakakis, Petros; Dimitriadis, Yannis

    2006-06-01

    Design pattern creation, especially in the e-learning domain, is a highly complex process that has not been sufficiently studied and formalized. In this paper, we propose a systematic pattern development cycle, whose most important aspects focus on reverse engineering of existing systems in order to elicit features that are cross-validated through the use of appropriate, authentic scenarios. Moreover, an iterative pattern-mining process is proposed that takes advantage of multiple data sources, thus emphasizing a holistic view of the teaching and learning processes. The proposed schema of pattern mining has been extensively validated for Asynchronous Network Supported Collaborative Learning (ANSCL) systems, as well as for other types of tools in a variety of scenarios, with promising results.

  8. An adaptable architecture for patient cohort identification from diverse data sources

    PubMed Central

    Bache, Richard; Miles, Simon; Taweel, Adel

    2013-01-01

    Objective We define and validate an architecture for systems that identify patient cohorts for clinical trials from multiple heterogeneous data sources. This architecture has an explicit query model capable of supporting temporal reasoning and expressing eligibility criteria independently of the representation of the data used to evaluate them. Method The architecture has the key feature that queries defined according to the query model are both pre- and post-processed, and this is used to address both structural and semantic heterogeneity. The process of extracting the relevant clinical facts is separated from the process of reasoning about them. A specific instance of the query model is then defined and implemented. Results We show that the specific instance of the query model has wide applicability. We then describe how it is used to access three diverse data warehouses to determine patient counts. Discussion Although the proposed architecture requires greater effort to implement the query model than would be the case for using just SQL and accessing a database management system directly, this effort is justified because it supports both temporal reasoning and heterogeneous data sources. The query model only needs to be implemented once, no matter how many data sources are accessed; each additional source requires only the implementation of a lightweight adaptor. Conclusions The architecture has been used to implement a specific query model that can express complex eligibility criteria and access three diverse data warehouses, thus demonstrating the feasibility of this approach in dealing with temporal reasoning and data heterogeneity. PMID:24064442
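
    A minimal sketch of the kind of architecture described, assuming a source-independent query model with one lightweight adaptor per data warehouse, follows; the class names, the date-window criterion, and the in-memory demo adaptor are hypothetical, not the authors' implementation.

    ```python
    from dataclasses import dataclass
    from datetime import date
    from typing import Iterable, Protocol

    @dataclass
    class Criterion:
        """One eligibility criterion in a source-independent query model:
        a coded concept plus an observation window (supports temporal reasoning)."""
        concept: str
        after: date
        before: date

    class SourceAdaptor(Protocol):
        """Each data warehouse implements only this lightweight adaptor."""
        def facts(self, concept: str) -> Iterable[tuple[str, date]]: ...  # (patient_id, date)

    def eligible_patients(criteria: list[Criterion], source: SourceAdaptor) -> set[str]:
        """Evaluate all criteria against one source: extract clinical facts via the
        adaptor, then reason about them (here, a simple date-window intersection)."""
        matching = None
        for c in criteria:
            hits = {pid for pid, d in source.facts(c.concept) if c.after <= d <= c.before}
            matching = hits if matching is None else matching & hits
        return matching or set()

    # Usage with a toy in-memory adaptor standing in for one data warehouse.
    class DemoAdaptor:
        data = {"diabetes": [("p1", date(2012, 5, 1)), ("p2", date(2009, 1, 1))],
                "metformin": [("p1", date(2012, 6, 1))]}
        def facts(self, concept):
            return self.data.get(concept, [])

    crit = [Criterion("diabetes", date(2010, 1, 1), date(2015, 1, 1)),
            Criterion("metformin", date(2010, 1, 1), date(2015, 1, 1))]
    print(eligible_patients(crit, DemoAdaptor()))    # {'p1'}
    ```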

  9. A two-channel, spectrally degenerate polarization entangled source on chip

    NASA Astrophysics Data System (ADS)

    Sansoni, Linda; Luo, Kai Hong; Eigner, Christof; Ricken, Raimund; Quiring, Viktor; Herrmann, Harald; Silberhorn, Christine

    2017-12-01

    Integrated optics provides the platform for the experimental implementation of highly complex and compact circuits for quantum information applications. In this context integrated waveguide sources represent a powerful resource for the generation of quantum states of light due to their high brightness and stability. However, the confinement of the light in a single spatial mode limits the realization of multi-channel sources. Due to this challenge one of the most adopted sources in quantum information processes, i.e. a source which generates spectrally indistinguishable polarization entangled photons in two different spatial modes, has not yet been realized in a fully integrated platform. Here we overcome this limitation by suitably engineering two periodically poled waveguides and an integrated polarization splitter in lithium niobate. This source produces polarization entangled states with fidelity of F = 0.973 ±0.003 and a test of Bell's inequality results in a violation larger than 14 standard deviations. It can work both in pulsed and continuous wave regime. This device represents a new step toward the implementation of fully integrated circuits for quantum information applications.

  10. Subsurface Hydrology: Data Integration for Properties and Processes

    NASA Astrophysics Data System (ADS)

    Hyndman, David W.; Day-Lewis, Frederick D.; Singha, Kamini

    Groundwater is a critical resource and the principal source of drinking water for over 1.5 billion people. In 2001, the National Research Council cited as a "grand challenge" our need to understand the processes that control water movement in the subsurface. This volume faces that challenge in terms of data integration between complex, multi-scale hydrologic processes, and their links to other physical, chemical, and biological processes at multiple scales. Subsurface Hydrology: Data Integration for Properties and Processes presents the current state of the science in four aspects: • Approaches to hydrologic data integration • Data integration for characterization of hydrologic properties • Data integration for understanding hydrologic processes • Meta-analysis of current interpretations Scientists and researchers in the field, the laboratory, and the classroom will find this work an important resource in advancing our understanding of subsurface water movement.

  11. Coupled disease-behavior dynamics on complex networks: A review

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Andrews, Michael A.; Wu, Zhi-Xi; Wang, Lin; Bauch, Chris T.

    2015-12-01

    It is increasingly recognized that a key component of successful infection control efforts is understanding the complex, two-way interaction between disease dynamics and human behavioral and social dynamics. Human behavior such as contact precautions and social distancing clearly influences disease prevalence, but disease prevalence can in turn alter human behavior, forming a coupled, nonlinear system. Moreover, in many cases, the spatial structure of the population cannot be ignored, such that social and behavioral processes and/or transmission of infection must be represented with complex networks. Research studying coupled disease-behavior dynamics on complex networks in particular is growing rapidly, and frequently makes use of analysis methods and concepts from statistical physics. Here, we review some of the growing literature in this area. We contrast network-based approaches with homogeneous-mixing approaches, point out how their predictions differ, describe the rich and often surprising behavior of disease-behavior dynamics on complex networks, and compare them to processes in statistical physics. We discuss how these models can capture the dynamics that characterize many real-world scenarios, thereby suggesting ways that policy makers can better design effective prevention strategies. We also describe the growing sources of digital data that are facilitating research in this area. Finally, we suggest pitfalls which might be faced by researchers in the field, and we suggest several ways in which the field could move forward in the coming years.
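
    A toy example of the coupled dynamics the review describes, a network SIR process in which susceptible nodes scale down the per-contact transmission probability as prevalence rises, is sketched below; the network model, parameter values, and feedback form are illustrative assumptions rather than any specific model from the literature surveyed.

    ```python
    import random
    import networkx as nx

    def coupled_sir(n=2000, k_mean=8, beta=0.06, gamma=0.1,
                    fear=20.0, steps=200, seed=1):
        """Toy coupled disease-behavior model on a random network: the effective
        transmission probability falls as prevalence rises (precautionary
        behavior), so behavior and disease dynamics feed back on each other."""
        random.seed(seed)
        g = nx.gnp_random_graph(n, k_mean / (n - 1), seed=seed)
        state = {v: "S" for v in g}
        for v in random.sample(list(g), 10):
            state[v] = "I"
        prevalence = []
        for _ in range(steps):
            frac_i = sum(s == "I" for s in state.values()) / n
            prevalence.append(frac_i)
            eff_beta = beta / (1.0 + fear * frac_i)      # behavioral response
            new_state = dict(state)
            for v, s in state.items():
                if s == "I":
                    if random.random() < gamma:
                        new_state[v] = "R"
                    for u in g.neighbors(v):
                        if state[u] == "S" and random.random() < eff_beta:
                            new_state[u] = "I"
            state = new_state
        return prevalence

    peak = max(coupled_sir())
    print(f"peak prevalence with behavioral feedback: {peak:.3f}")
    ```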

  12. A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding

    PubMed Central

    Zeng, Jinle; Chang, Baohua; Du, Dong; Hong, Yuxiang; Chang, Shuhe; Zou, Yirong

    2016-01-01

    During complex-path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to obtain high signal-to-noise-ratio images despite the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. The two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; the position and pose between the torch and the groove can then be obtained nearly simultaneously. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates our method can be applied to real-time narrow butt detection during high-speed welding processes. PMID:27649173

  13. A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding.

    PubMed

    Zeng, Jinle; Chang, Baohua; Du, Dong; Hong, Yuxiang; Chang, Shuhe; Zou, Yirong

    2016-09-13

    During complex-path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to obtain high signal-to-noise-ratio images despite the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. The two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; the position and pose between the torch and the groove can then be obtained nearly simultaneously. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates our method can be applied to real-time narrow butt detection during high-speed welding processes.

  14. DBCC Software as Database for Collisional Cross-Sections

    NASA Astrophysics Data System (ADS)

    Moroz, Daniel; Moroz, Paul

    2014-10-01

    Interactions of species such as atoms, radicals, molecules, electrons, and photons in plasmas used for materials processing can be very complex, and many of them can be described in terms of collisional cross-sections. Researchers involved in plasma simulations must select reasonable cross-sections for collisional processes and implement them in their simulation codes to be able to simulate plasmas correctly. However, collisional cross-section data are difficult to obtain, and for some collisional processes the cross-sections are still not known. Data on collisional cross-sections can be obtained from numerous sources, including numerical calculations, experiments, journal articles, conference proceedings, scientific reports, various universities' websites, national labs, and centers specifically devoted to collecting cross-section data. The cross-section data received from different sources can be partial, cover limited energy ranges, or even be in disagreement. The DBCC software package was designed to help researchers collect, compare, and select cross-sections, some of which can be constructed from others or chosen as defaults. This is important because different researchers may place trust in different cross-sections or in different sources. We will discuss the details of DBCC and demonstrate how it works and why it is beneficial to researchers working on plasma simulations.

  15. A simple distributed sediment delivery approach for rural catchments

    NASA Astrophysics Data System (ADS)

    Reid, Lucas; Scherer, Ulrike

    2014-05-01

    The transfer of sediments from source areas to surface waters is a complex process. In process-based erosion models, sediment input is thus quantified by representing all relevant sub-processes such as detachment, transport and deposition of sediment particles along the flow path to the river. A successful application of these models requires, however, a large amount of spatially highly resolved data on physical catchment characteristics, which is only available for a few well-examined small catchments. For lack of appropriate models, the empirical Universal Soil Loss Equation (USLE) is widely applied to quantify sediment production in meso- to large-scale basins. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). In these models, the SDR is related to data on morphological characteristics of the catchment such as average local relief, drainage density, proportion of depressions or soil texture. Some approaches include the relative distance between sediment source areas and the river channels. However, several studies have shown that spatially lumped parameters describing the morphological characteristics are of only limited value in representing the factors that influence sediment transport at the catchment scale. Sediment delivery is controlled by the location of the sediment source areas in the catchment and the morphology along the flow path to the surface water bodies. This complex interaction of spatially varied physiographic characteristics cannot be adequately represented by lumped morphological parameters. The objective of this study is to develop a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in a catchment. We selected a small catchment located in an intensively cultivated loess region in southwest Germany as the study area for the development of the SDR approach. The flow pathways were extracted in a geographic information system. Then the sediment delivery ratio for each source area was determined using an empirical approach considering the slope, morphology and land use properties along the flow path. As a benchmark for the calibration of the model parameters we used results of a detailed process-based erosion model available for the study area. Afterwards the approach was tested in larger catchments located in the same loess region.
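
    The following sketch illustrates the general idea of a flow-path-based sediment delivery ratio, decay of delivery with slope- and roughness-weighted travel length, but it is a generic illustration with an assumed functional form and constants, not the calibrated approach developed in the study.

    ```python
    import math

    def flow_path_sdr(segments, k=0.15):
        """Generic flow-path sediment delivery ratio: each segment contributes
        roughness * length / sqrt(slope) to an effective travel length, and the
        SDR decays exponentially with that sum (k is a calibration constant).

        segments: list of (length_m, slope_m_per_m, roughness) tuples along the
        path from the source cell to the stream."""
        travel = 0.0
        for length, slope, roughness in segments:
            slope = max(slope, 0.005)                 # avoid division by ~zero on flats
            travel += roughness * length / math.sqrt(slope)
        return math.exp(-k * travel / 1000.0)

    # Usage: a 300 m path crossing cropland, a grass strip, and a gentle footslope.
    path = [(200.0, 0.08, 1.0),     # cropland, moderately steep
            (20.0, 0.05, 2.5),      # grassed buffer strip (higher retention)
            (80.0, 0.01, 1.2)]      # nearly flat footslope
    print(round(flow_path_sdr(path), 3))
    ```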

  16. Characterizing hyporheic exchange processes using high-frequency electrical conductivity-discharge relationships on subhourly to interannual timescales

    NASA Astrophysics Data System (ADS)

    Singley, Joel G.; Wlostowski, Adam N.; Bergstrom, Anna J.; Sokol, Eric R.; Torrens, Christa L.; Jaros, Chris; Wilson, Colleen E.; Hendrickson, Patrick J.; Gooseff, Michael N.

    2017-05-01

    Concentration-discharge (C-Q) relationships are often used to quantify source water contributions and biogeochemical processes occurring within catchments, especially during discrete hydrological events. Yet, the interpretation of C-Q hysteresis is often confounded by complexity of the critical zone, such as numerous source waters and hydrochemical nonstationarity. Consequently, researchers must often ignore important runoff pathways and geochemical sources/sinks, especially the hyporheic zone because it lacks a distinct hydrochemical signature. Such simplifications limit efforts to identify processes responsible for the transience of C-Q hysteresis over time. To address these limitations, we leverage the hydrologic simplicity and long-term, high-frequency Q and electrical conductivity (EC) data from streams in the McMurdo Dry Valleys, Antarctica. In this two end-member system, EC can serve as a proxy for the concentration of solutes derived from the hyporheic zone. We utilize a novel approach to decompose loops into subhysteretic EC-Q dynamics to identify individual mechanisms governing hysteresis across a wide range of timescales. We find that hydrologic and hydraulic processes govern EC response to diel and seasonal Q variability and that the effects of hyporheic mixing processes on C-Q transience differ in short and long streams. We also observe that variable hyporheic turnover rates govern EC-Q patterns at daily to interannual timescales. Last, subhysteretic analysis reveals a period of interannual freshening of glacial meltwater streams related to the effects of unsteady flow on hyporheic exchange. The subhysteretic analysis framework we introduce may be applied more broadly to constrain the processes controlling C-Q transience and advance understanding of catchment evolution.
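
    One common way to quantify C-Q (here EC-Q) hysteresis, comparing normalized concentration on the rising and falling limbs at matched normalized discharge, is sketched below; this is a generic index for illustration and not necessarily the subhysteretic decomposition used by the authors.

    ```python
    import numpy as np

    def hysteresis_index(q, c, levels=np.linspace(0.1, 0.9, 9)):
        """Event-scale hysteresis index: normalize Q and C to [0, 1], then average
        the rising-limb minus falling-limb concentration difference at several
        matched discharge levels (positive values indicate a clockwise loop)."""
        q = np.asarray(q, float); c = np.asarray(c, float)
        qn = (q - q.min()) / (q.max() - q.min())
        cn = (c - c.min()) / (c.max() - c.min())
        i_peak = int(np.argmax(q))
        rise_q, rise_c = qn[:i_peak + 1], cn[:i_peak + 1]
        fall_q, fall_c = qn[i_peak:][::-1], cn[i_peak:][::-1]   # reverse so Q increases
        hi = [np.interp(l, rise_q, rise_c) - np.interp(l, fall_q, fall_c) for l in levels]
        return float(np.mean(hi))

    # Usage: synthetic diel event in which the EC minimum lags the discharge peak.
    t = np.linspace(0, 1, 101)
    q = np.exp(-((t - 0.4) / 0.18) ** 2)               # single discharge pulse
    ec = 1.0 - 0.8 * np.exp(-((t - 0.5) / 0.2) ** 2)   # EC dilution lags the peak
    print(hysteresis_index(q, ec))
    ```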

  17. Nonevaporable getter coating chambers for extreme high vacuum

    DOE PAGES

    Stutzman, Marcy L.; Adderley, Philip A.; Mamun, Md Abdullah Al; ...

    2018-03-01

    Techniques for NEG coating a large-diameter chamber are presented along with vacuum measurements in the chamber using several pumping configurations, with base pressure as low as 1.56x10^-12 Torr (N2 equivalent) achieved with only a NEG coating and a small ion pump. We then describe modifications to the NEG coating process to coat complex-geometry chambers for ultra-cold atom trap experiments. Surface analysis of NEG-coated samples is used to measure the composition and morphology of the thin films. Finally, pressure measurements are compared for two NEG-coated polarized electron source chambers: the 130 kV polarized electron source at Jefferson Lab and the upgraded 350 kV polarized electron source, both of which are approaching or within the extreme high vacuum (XHV) range, defined as P < 7.5x10^-13 Torr.

  18. Nonevaporable getter coating chambers for extreme high vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stutzman, Marcy L.; Adderley, Philip A.; Mamun, Md Abdullah Al

    Techniques for NEG coating a large-diameter chamber are presented along with vacuum measurements in the chamber using several pumping configurations, with base pressure as low as 1.56x10^-12 Torr (N2 equivalent) achieved with only a NEG coating and a small ion pump. We then describe modifications to the NEG coating process to coat complex-geometry chambers for ultra-cold atom trap experiments. Surface analysis of NEG-coated samples is used to measure the composition and morphology of the thin films. Finally, pressure measurements are compared for two NEG-coated polarized electron source chambers: the 130 kV polarized electron source at Jefferson Lab and the upgraded 350 kV polarized electron source, both of which are approaching or within the extreme high vacuum (XHV) range, defined as P < 7.5x10^-13 Torr.

  19. Contracting for Weapon System Software: The Pricing Arrangement.

    DTIC Science & Technology

    1985-04-01

    ... considerations that impact the suitability of a particular contract type. The procurement process: highly specialized and complex contract law regulates government contracting ... Government Contract Law, and professional journals that have published analyses of the federal procurement system and its application. Sources for the acquisition ... Documents. 14. U.S. Air Force Systems Command. Government Contract Law for Engineers (3rd Edition). Los Angeles AFS, California: Space and Missile ...

  20. Electromagnetic disturbance of electric drive system signal is extracted based on PLS

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Wang, Chuanqi; Yang, Weidong; Zhang, Xu; Jiang, Li; Hou, Shuai; Chen, Xichen

    2018-05-01

    At present, the electromagnetic immunity requirements specified by ISO 11452 and GB/T 33014 address narrowband electromagnetic radiation, but the electromagnetic radiation we are exposed to in everyday life is not only narrowband: it includes broadband radiation and sometimes even more complex electromagnetic environments. In electric vehicles, the electric drive system is a complex source of electromagnetic disturbance, producing not only narrowband signals but also many broadband signals. This paper proposes a PLS (partial least squares) data-processing method to analyze the electromagnetic disturbance of the electric drive system; the data extracted with this method can provide reliable support for future standards.
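
    The abstract names PLS but gives no implementation detail, so the sketch below only shows a generic partial least squares regression on synthetic "broadband spectrum" data using scikit-learn; the data layout, dimensions, and number of components are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Synthetic stand-in for the measurement problem: X holds broadband emission
    # spectra of the drive system (many correlated frequency bins), y holds the
    # disturbance quantity we want to extract/predict from those spectra.
    rng = np.random.default_rng(0)
    n_sweeps, n_bins = 200, 400
    latent = rng.normal(size=(n_sweeps, 3))                  # a few underlying sources
    X = latent @ rng.normal(size=(3, n_bins)) + 0.1 * rng.normal(size=(n_sweeps, n_bins))
    y = latent @ np.array([1.0, -0.5, 0.2]) + 0.05 * rng.normal(size=n_sweeps)

    pls = PLSRegression(n_components=3)       # project onto a few latent components
    pls.fit(X, y)
    print("R^2 on training sweeps:", round(pls.score(X, y), 3))
    ```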

  1. The Cadiz margin study off Spain: An introduction

    USGS Publications Warehouse

    Nelson, C.H.; Maldonado, A.

    1999-01-01

    The Cadiz continental margin of the northeastern Gulf of Cadiz off Spain was selected for a multidisciplinary project because of the interplay of complex tectonic history between the Iberian and African plates, sediment supply from multiple sources, and unique Mediterranean Gateway inflow and outflow currents. The nature of this complex margin, particularly during the last 5 million years, was investigated with emphasis on tectonic history, stratigraphic sequences, marine circulation, contourite depositional facies, geotechnical properties, geologic hazards, and human influences such as dispersal of river contaminants. This study provides an integrated view of the tectonic, sediment supply and oceanographic factors that control depositional processes and growth patterns of the Cadiz and similar modem and ancient continental margins.

  2. RFI Detection and Mitigation using Independent Component Analysis as a Pre-Processor

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Gholian, Armen; Bradley, Damon C.; Wong, Mark; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.

    2016-01-01

    Radio-frequency interference (RFI) has negatively impacted scientific measurements of passive remote sensing satellites. This has been observed in the L-band radiometers Soil Moisture and Ocean Salinity (SMOS), Aquarius and more recently, Soil Moisture Active Passive (SMAP). RFI has also been observed at higher frequencies such as K band. Improvements in technology have allowed wider bandwidth digital back ends for passive microwave radiometry. A complex signal kurtosis radio frequency interference detector was developed to help identify corrupted measurements. This work explores the use of Independent Component Analysis (ICA) as a blind source separation (BSS) technique to pre-process radiometric signals for use with the previously developed real and complex signal kurtosis detectors.
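
    A minimal sketch of ICA used as a blind source separation pre-processor, followed by a kurtosis check on the separated components, is given below; the two-channel synthetic data, mixing gains, and thresholding logic are illustrative assumptions, not the actual SMAP ground or flight processing.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Two "receiver channels": a noise-like radiometric signal plus a shared
    # pulsed RFI term mixed into both channels with different gains.
    rng = np.random.default_rng(3)
    n = 20000
    natural = rng.normal(size=n)                                   # thermal-noise-like
    rfi = (np.sin(2 * np.pi * 0.01 * np.arange(n)) > 0.95) * 5.0   # sparse pulsed RFI
    mix = np.c_[0.9 * natural + 0.4 * rfi, 0.7 * natural + 1.0 * rfi]

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(mix)     # columns approximate the separated sources

    def excess_kurtosis(x):
        x = x - x.mean()
        return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

    # The RFI-dominated component shows strongly non-Gaussian (high-kurtosis)
    # statistics, flagging it for removal ahead of the kurtosis RFI detector.
    print([round(excess_kurtosis(sources[:, i]), 2) for i in range(2)])
    ```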

  3. [The concept of the development of the state of chemical-analytical environmental monitoring].

    PubMed

    Rakhmanin, Iu A; Malysheva, A G

    2013-01-01

    Chemical and analytical monitoring of the quality of environment is based on the accounting of the trace amount of substances. Considering the multicomponent composition of the environment and running processes of transformation of substances in it, in determination of the danger of the exposure to the chemical pollution of environment on population health there is necessary evaluation based on the simultaneous account of complex of substances really contained in the environment and supplying from different sources. Therefore, in the analytical monitoring of the quality and safety of the environment there is a necessary conversion from the orientation, based on the investigation of specific target substances, to estimation of real complex of compounds.

  4. Contaminant source identification using semi-supervised machine learning

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir V.; Alexandrov, Boian S.; O'Malley, Daniel

    2018-05-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
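
    A stripped-down illustration of the core decomposition step is sketched below using scikit-learn's plain NMF on a synthetic three-source mixing problem; the full NMFk methodology additionally estimates the number of sources via semi-supervised clustering over many factorization runs, which is not reproduced here, and all data and dimensions are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Synthetic mixing problem: 3 hidden groundwater "types" with fixed geochemical
    # signatures (columns = constituents), observed at 15 wells as unknown mixtures.
    rng = np.random.default_rng(5)
    signatures = np.array([[1.0, 0.1, 0.5, 0.0],
                           [0.2, 1.2, 0.1, 0.3],
                           [0.0, 0.2, 0.9, 1.1]])
    mixing = rng.dirichlet(np.ones(3), size=15)              # non-negative mixing ratios
    observed = mixing @ signatures + 0.01 * rng.random((15, 4))

    # Plain NMF decomposition, with the number of sources fixed to 3 for illustration.
    model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
    W = model.fit_transform(observed)     # estimated mixing ratios per well
    H = model.components_                 # estimated source signatures
    print(np.round(H, 2))
    ```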

  5. Contaminant source identification using semi-supervised machine learning

    DOE PAGES

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

    2017-11-08

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  6. Contaminant source identification using semi-supervised machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  7. Microanalysis characterization of bioactive protein-bound polysaccharides produced by Amanita ponderosa cultures.

    PubMed

    Salvador, Cátia; Martins, M Rosário; Caldeira, A Teresa

    2015-02-01

    Different compounds of edible mushrooms are responsible for their bioactivity. The ability to synthesize polysaccharides, namely protein-polysaccharide (PPS) complexes, is related to the antioxidant capacity of these compounds and is of great interest for preventing a number of diseases, including cancer, cardiovascular and auto-immune diseases, and accelerated aging. Amanita ponderosa are wild edible mushrooms that grow in Mediterranean "montado" areas [the Portuguese name given to cork oak (Quercus suber) and holm oak (Quercus ilex) forests]. The aim of this study was to evaluate the production of PPS complexes obtained from A. ponderosa cultures using a new microanalytical approach to quickly and easily monitor the production process. Microanalysis of PPS samples using Fourier-transform infrared spectroscopy with attenuated total reflection and Raman spectroscopy showed spectra compatible with the identification of this type of compound in culture extracts. PPS separated by size-exclusion chromatography showed seven main complexes. The molecular weights of the main PPS complexes isolated from cultures ranged between 1.5 and 20 kDa, and the complexes did not present toxicity against Artemia salina, demonstrating the potential of A. ponderosa as a source of biologically active compounds with nutraceutical value. This microanalytical approach to monitoring the production of PPS compounds can be successfully applied in biotechnological processes.

  8. The Global Food System as a Transport Pathway for Hazardous Chemicals: The Missing Link between Emissions and Exposure.

    PubMed

    Ng, Carla A; von Goetz, Natalie

    2017-01-01

    Food is a major pathway for human exposure to hazardous chemicals. The modern food system is becoming increasingly complex and globalized, but models for food-borne exposure typically assume locally derived diets or use concentrations directly measured in foods without accounting for food origin. Such approaches may not reflect actual chemical intakes because concentrations depend on food origin, and representative analysis is seldom available. Processing, packaging, storage, and transportation also impart different chemicals to food and are not yet adequately addressed. Thus, the link between environmental emissions and realistic human exposure is effectively broken. We discuss the need for a fully integrated treatment of the modern industrialized food system, and we propose strategies for using existing models and relevant supporting data sources to track chemicals during production, processing, packaging, storage, and transport. Fate and bioaccumulation models describe how chemicals distribute in the environment and accumulate through local food webs. Human exposure models can use concentrations in food to determine body burdens based on individual or population characteristics. New models now include the impacts of processing and packaging but are far from comprehensive. We propose to close the gap between emissions and exposure by utilizing a wider variety of models and data sources, including global food trade data, processing, and packaging models. A comprehensive approach that takes into account the complexity of the modern global food system is essential to enable better prediction of human exposure to chemicals in food, sound risk assessments, and more focused risk abatement strategies. Citation: Ng CA, von Goetz N. 2017. The global food system as a transport pathway for hazardous chemicals: the missing link between emissions and exposure. Environ Health Perspect 125:1-7; http://dx.doi.org/10.1289/EHP168.

  9. Generation of large scale urban environments to support advanced sensor and seeker simulation

    NASA Astrophysics Data System (ADS)

    Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan

    2009-05-01

    One of the key aspects for the design of a next generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representation of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties that are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can then be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date, an exporter for the Irma simulation system, developed and maintained by AFRL/Eglin, has been created, and a second exporter to the Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system for real-time use is currently being developed. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the capabilities for content creation for the simulation of advanced seeker processing algorithms and for sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation into the Irma runtime environment.

  10. Frequency-Dependent Rupture Processes for the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Miyake, H.

    2012-12-01

    The 2011 Tohoku earthquake is characterized by a frequency-dependent rupture process [e.g., Ide et al., 2011; Wang and Mori, 2011; Yao et al., 2011]. For understanding the rupture dynamics of this earthquake, it is extremely important to investigate wave-based source inversions for various frequency bands. The above frequency-dependent characteristics have been derived from teleseismic analyses. This study attempts to infer frequency-dependent rupture processes from strong-motion waveforms of K-NET and KiK-net stations. The observations suggested three or more S-wave phases, and ground velocities at several near-source stations showed different arrivals of their long- and short-period components. We performed complex source spectral inversions with frequency-dependent phase weighting developed by Miyake et al. [2002]. The technique idealizes both the coherent and stochastic summation of waveforms using empirical Green's functions. Due to the limited signal-to-noise ratio of the empirical Green's functions, the analyzed frequency bands were set within 0.05-10 Hz. We assumed a fault plane 480 km in length by 180 km in width with a single time window for rupture, following Koketsu et al. [2011] and Asano and Iwata [2012]. The inversion revealed source ruptures expanding from the hypocenter and generating sharp slip-velocity intensities at the down-dip edge. In addition to testing the effects of empirical/hybrid Green's functions and of rupture-front constraints on the inverted solutions, we will discuss distributions of slip-velocity intensity and the progression of wave generation with increasing frequency.

  11. Evolution of chemical and isotopic composition of inorganic carbon in a complex semi-arid zone environment: Consequences for groundwater dating using radiocarbon

    NASA Astrophysics Data System (ADS)

    Meredith, K. T.; Han, L. F.; Hollins, S. E.; Cendón, D. I.; Jacobsen, G. E.; Baker, A.

    2016-09-01

    Estimating groundwater age is important for any groundwater resource assessment and radiocarbon (14C) dating of dissolved inorganic carbon (DIC) can provide this information. In semi-arid zones (i.e. water-limited environments), there is a multitude of reasons why 14C dating of groundwater and traditional correction models may not be directly transferable. These include: (1) the complex hydrological responses of these systems that lead to a mixture of different ages in the aquifer(s), (2) the varied sources, origins and ages of organic matter in the unsaturated zone and (3) high evaporation rates. These all influence the evolution of DIC and are not easily accounted for in traditional correction models. In this study, we determined carbon isotope data for: DIC in water, carbonate minerals in the sediments, sediment organic matter, soil gas CO2 from the unsaturated zone, and vegetation samples. The samples were collected after an extended drought, and again after a flood event, to capture the evolution of DIC after varying hydrological regimes. A graphical method (Han et al., 2012) was applied for interpretation of the carbon geochemical and isotopic data. Simple forward mass-balance modelling was carried out on key geochemical processes involving carbon and agreed well with observed data. High values of DIC and δ13CDIC, and low 14CDIC could not be explained by a simple carbonate mineral-CO2 gas dissolution process. Instead it is suggested that during extended drought, water-sediment interaction leads to ion exchange processes within the top ∼10-20 m of the aquifer that promote greater calcite dissolution in saline groundwater. This process was found to contribute more than half of the DIC, which is from a mostly 'dead' carbon source. DIC is also influenced by carbon exchange between DIC in water and carbonate minerals found in the top 2 m of the unsaturated zone. This process occurs because of repeated dissolution/precipitation of carbonate that is dependent on the water salinity driven by drought and periodic flooding conditions. This study shows that although 14C cannot be directly applied as a dating tool in some circumstances, carbon geochemical/isotopic data can be useful in hydrological investigations related to identifying groundwater sources, mixing relations, recharge processes, geochemical evolution, and interaction with surface water.
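
    To make the 'dead carbon' dilution effect described above concrete, the sketch below uses the conventional radiocarbon-age relation t = -8033·ln(F), with F the fraction modern of the DIC, and assumed values; it is a simple dilution-factor illustration, not the graphical method of Han et al. (2012) or the mass-balance modelling used in the study.

```python
# Minimal sketch (assumed values) of how 'dead' carbonate-derived DIC biases
# apparent radiocarbon ages.
import math

LIBBY_MEAN_LIFE = 8033.0  # years, conventional radiocarbon age convention

def conventional_age(fraction_modern: float) -> float:
    """Conventional (uncorrected) radiocarbon age in years BP."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Hypothetical scenario: recharge water with modern carbon (F = 1.0) in which
# half of the DIC is later contributed by dissolution of 14C-free calcite,
# as suggested for the drought-affected saline groundwaters described above.
f_recharge = 1.0
dead_fraction = 0.5                      # fraction of DIC from 'dead' carbonate
f_measured = f_recharge * (1.0 - dead_fraction)

print(f"apparent age: {conventional_age(f_measured):.0f} yr BP")          # ~5570 yr
print(f"dilution-corrected age: "
      f"{conventional_age(f_measured / (1.0 - dead_fraction)):.0f} yr BP")  # 0 yr
```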

  12. Far-field DOA estimation and source localization for different scenarios in a distributed sensor network

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz

    Recent developments in integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but they must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms would be an efficient option for the defined problem and, as a result, the necessity of developing a new algorithm becomes undeniable. In this dissertation, we present the theoretical and practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent source DOA estimation in a limited reverberant environment. Then, we provide a least-square (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider the determination of the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. In this dissertation, for each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied in a real-time sensor network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.
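
    For readers unfamiliar with array-based DOA estimation, the following minimal sketch scans candidate angles with a conventional (delay-and-sum) beamformer for a simulated uniform linear array; it only illustrates the grid-search idea and is not the AML algorithm or the testbed implementation described in the dissertation. All array parameters are assumed.

```python
# Narrowband DOA grid search with a conventional beamformer (illustrative only).
import numpy as np

c = 343.0          # speed of sound, m/s (acoustic scenario)
freq = 1000.0      # narrowband source frequency, Hz
d = 0.1            # sensor spacing, m (< half wavelength, so no spatial aliasing)
n_sensors = 8
n_snapshots = 200
true_doa_deg = 40.0

rng = np.random.default_rng(1)
wavelength = c / freq
positions = np.arange(n_sensors) * d

def steering_vector(theta_deg):
    # Far-field plane-wave phase delays across the array.
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * positions * np.sin(theta) / wavelength)

# Simulate snapshots: complex source envelope plus sensor noise.
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
X = np.outer(steering_vector(true_doa_deg), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# Grid search: conventional beamformer power a(theta)^H R a(theta).
R = X @ X.conj().T / n_snapshots
grid = np.arange(-90.0, 90.5, 0.5)
power = [np.real(steering_vector(t).conj() @ R @ steering_vector(t)) for t in grid]
print("estimated DOA:", grid[int(np.argmax(power))], "degrees")
```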

  13. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    NASA Astrophysics Data System (ADS)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the east side and of the past Vulcan volcano eruptions of more evolved materials on the west side. The interconnection and spatial distributions of sources correspond to the petrography of the volcanic products described in the literature and to the dynamics of the single and twin eruptions that characterize the caldera. The ability to image the complex geometry of deformation sources in both space and time can improve our ability to monitor active volcanoes, widen our understanding of the dynamics of active volcanic systems and improve the predictions of eruptions.
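
    A minimal sketch of the regularized linear inversion step that such a scheme relies on, assuming a synthetic Green's function matrix and synthetic surface displacements (zeroth-order Tikhonov damping; not the study's actual FEM-derived matrices or regularization choices):

```python
# Damped least-squares inversion d = G m (illustrative, synthetic data).
import numpy as np

rng = np.random.default_rng(0)

n_data, n_sources = 500, 64           # InSAR pixels x elementary cubic sources
G = rng.standard_normal((n_data, n_sources))   # stand-in Green's function library
m_true = np.zeros(n_sources)
m_true[20:28] = 1.0                   # a compact pressurized region
d = G @ m_true + 0.05 * rng.standard_normal(n_data)

lam = 1.0                             # regularization weight (trade-off parameter)
# Damped normal equations: (G^T G + lam^2 I) m = G^T d
m_hat = np.linalg.solve(G.T @ G + lam**2 * np.eye(n_sources), G.T @ d)

print("indices of largest recovered sources:", np.argsort(m_hat)[-8:])
```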

  14. Capturing domain knowledge from multiple sources: the rare bone disorders use case.

    PubMed

    Groza, Tudor; Tudorache, Tania; Robinson, Peter N; Zankl, Andreas

    2015-01-01

    Lately, ontologies have become a fundamental building block in the process of formalising and storing complex biomedical information. The community-driven ontology curation process, however, ignores the possibility of multiple communities building, in parallel, conceptualisations of the same domain, and thus providing slightly different perspectives on the same knowledge. The individual nature of this effort leads to the need of a mechanism to enable us to create an overarching and comprehensive overview of the different perspectives on the domain knowledge. We introduce an approach that enables the loose integration of knowledge emerging from diverse sources under a single coherent interoperable resource. To accurately track the original knowledge statements, we record the provenance at very granular levels. We exemplify the approach in the rare bone disorders domain by proposing the Rare Bone Disorders Ontology (RBDO). Using RBDO, researchers are able to answer queries, such as: "What phenotypes describe a particular disorder and are common to all sources?" or to understand similarities between disorders based on divergent groupings (classifications) provided by the underlying sources. RBDO is available at http://purl.org/skeletome/rbdo. In order to support lightweight query and integration, the knowledge captured by RBDO has also been made available as a SPARQL Endpoint at http://bio-lark.org/se_skeldys.html.

  15. Best geoscience approach to complex systems in environment

    NASA Astrophysics Data System (ADS)

    Mezemate, Yacine; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    The environment is a social issue that continues to grow in importance. Its complexity, both cross-disciplinary and multi-scale, has given rise to a large number of scientific and technological bottlenecks that complex systems approaches can resolve. Significant challenges must be met to achieve an understanding of complex environmental systems. Their study should proceed in several steps in which the use of data and models is crucial: exploration, observation and basic data acquisition; identification of correlations, patterns, and mechanisms; modelling; model validation, implementation and prediction; and construction of a theory. Since e-learning has become a powerful tool for knowledge and best-practice sharing, we use it to teach environmental complexity and systems. In this presentation we promote an e-learning course aimed at a broad public (undergraduates, graduates, PhD students and young scientists) which gathers and brings into coherence different pedagogical materials on complex systems and environmental studies. The course describes complex processes using numerous illustrations, examples and tests that make for an "easy to enjoy" learning process. For the sake of simplicity, the course is divided into different modules, and at the end of each module a set of exercises and program codes is proposed for best practice. The graphical user interface (GUI), constructed using the open-source Opale Scenari software, offers simple navigation through the different modules. The course treats the complex systems that can be found in the environment and their observables; we particularly highlight the extreme variability of these observables over a wide range of scales. Using the multifractal formalism in different applications (turbulence, precipitation, hydrology), we demonstrate how such extreme variability of geophysical/biological fields can be used to solve everyday (geo-)environmental challenges.

  16. A computational framework for modeling targets as complex adaptive systems

    NASA Astrophysics Data System (ADS)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  17. Collimating lens for light-emitting-diode light source based on non-imaging optics.

    PubMed

    Wang, Guangzhen; Wang, Lili; Li, Fuli; Zhang, Gongjian

    2012-04-10

    A collimating lens for a light-emitting-diode (LED) light source is an essential device widely used in lighting engineering. Lens surfaces are calculated by geometrical optics and nonimaging optics. This design process does not rely on any software optimization or any complex iterative process. The method can be used for any type of light source, not only Lambertian sources. The theoretical model is based on a point source, but a practical LED source has a finite size, so in the simulation an LED chip of 1 mm × 1 mm is used to verify the feasibility of the model. The main results show that the lenses have a very compact structure and good collimating performance. Efficiency is defined as the ratio of the flux on the illuminated plane to the flux from the LED source, without considering the lens material transmission. Considering only the loss at the designed lens surfaces, the two types of lenses have high efficiencies of more than 90% and 99%, respectively. Most lighting-area (containing 80% of the flux) radii are no more than 5 m when the illuminated plane is 200 m away from the light source.

  18. Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle

    PubMed Central

    Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-01-01

    Blind source separation is the computation underlying the cocktail party effect––a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeating training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle. PMID:26690814
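
    As a purely computational illustration of the blind source separation problem itself (independent of the cultured-neuron experiment), the sketch below mixes two synthetic 'talkers' with an unknown matrix and recovers them with FastICA from scikit-learn; signals and mixing matrix are arbitrary.

```python
# FastICA recovering two independent sources from linear mixtures,
# analogous to separating talkers at a cocktail party (illustrative only).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

s1 = np.sign(np.sin(3 * t))           # square-wave "talker"
s2 = np.sin(2 * t + 1.0)              # sinusoidal "talker"
S = np.c_[s1, s2]
S += 0.05 * rng.standard_normal(S.shape)

A = np.array([[1.0, 0.5], [0.4, 1.2]])   # unknown mixing matrix
X = S @ A.T                               # observed mixtures ("microphones")

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)              # recovered sources (up to scale/order)
print("estimated unmixed shape:", S_est.shape)
```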

  19. Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    PubMed Central

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.

    2016-01-01

    Background: A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings: Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions: Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
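
    The sketch below illustrates, on synthetic data, the general shape of the workflow described (cohort rebalancing followed by model-free classification with n-fold cross-validation); it is not the PPMI protocol, and in practice rebalancing should be nested inside each fold to avoid leakage.

```python
# Schematic rebalancing + cross-validated classification on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Imbalanced synthetic cohort standing in for cases vs. controls.
X, y = make_classification(n_samples=600, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

# Naive rebalancing: oversample the minority class with replacement.
# (Done globally here for brevity; a leakage-free pipeline would resample
# within each training fold only.)
rng = np.random.default_rng(0)
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=(y == 0).sum() - minority.size, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X_bal, y_bal, cv=cv, scoring="accuracy")
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```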

  20. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
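
    A minimal Metropolis-Hastings sketch of the kind of Bayesian parameter constraint described (a single power-law process-rate exponent inferred from noisy synthetic observations; not the BOSS scheme or its observation operators):

```python
# Metropolis-Hastings inference of one process-rate exponent (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

a_true, b_true, sigma = 2.0, 1.5, 0.2
q = np.linspace(0.1, 2.0, 40)                       # e.g., a prognostic moment
obs = a_true * q**b_true + sigma * rng.standard_normal(q.size)

def log_posterior(b):
    # Flat prior on b in [0, 4]; Gaussian likelihood with known a and sigma.
    if not 0.0 <= b <= 4.0:
        return -np.inf
    resid = obs - a_true * q**b
    return -0.5 * np.sum((resid / sigma) ** 2)

b, logp = 1.0, log_posterior(1.0)
samples = []
for _ in range(20000):
    b_prop = b + 0.05 * rng.standard_normal()
    logp_prop = log_posterior(b_prop)
    if logp_prop - logp > np.log(rng.uniform()):    # Metropolis accept/reject
        b, logp = b_prop, logp_prop
    samples.append(b)

burned = np.array(samples[5000:])                    # discard burn-in
print("posterior mean b: %.3f +/- %.3f" % (burned.mean(), burned.std()))
```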

  1. BigDataScript: a scripting language for data pipelines.

    PubMed

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.

  2. Simulation of metal additive manufacturing microstructures using kinetic Monte Carlo

    DOE PAGES

    Rodgers, Theron M.; Madison, Jonathan D.; Tikare, Veena

    2017-04-19

    Additive manufacturing (AM) is of tremendous interest given its ability to realize complex, non-traditional geometries in engineered structural materials. But, microstructures generated from AM processes can be equally, if not more, complex than their conventionally processed counterparts. While some microstructural features observed in AM may also occur in more traditional solidification processes, the introduction of spatially and temporally mobile heat sources can result in significant microstructural heterogeneity. While grain size and shape in metal AM structures are understood to be highly dependent on both local and global temperature profiles, the exact form of this relation is not well understood. We implement an idealized molten zone and temperature-dependent grain boundary mobility in a kinetic Monte Carlo model to predict three-dimensional grain structure in additively manufactured metals. In order to demonstrate the flexibility of the model, synthetic microstructures are generated under conditions mimicking relatively diverse experimental results present in the literature. Simulated microstructures are then qualitatively and quantitatively compared to their experimental complements and are shown to be in good agreement.
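
    To give a flavour of the spin-flip kinetics underlying such models, the sketch below runs a heavily simplified two-dimensional Potts-model Monte Carlo grain-growth simulation (no molten zone, no temperature-dependent mobility, Metropolis acceptance rather than rejection-free kinetics); it is not the published model.

```python
# Simplified 2-D Potts-model Monte Carlo grain growth (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, Q, kT, steps = 64, 32, 0.5, 200_000
spins = rng.integers(0, Q, size=(N, N))   # each integer labels a grain

def site_energy(s, i, j, value):
    # Count unlike nearest neighbours (periodic boundaries); grain boundaries
    # cost energy, so boundary length tends to shrink over time.
    nb = [s[(i - 1) % N, j], s[(i + 1) % N, j], s[i, (j - 1) % N], s[i, (j + 1) % N]]
    return sum(1 for n in nb if n != value)

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    neighbours = [spins[(i - 1) % N, j], spins[(i + 1) % N, j],
                  spins[i, (j - 1) % N], spins[i, (j + 1) % N]]
    new = neighbours[rng.integers(0, 4)]   # propose adopting a neighbour's label
    dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
    if dE <= 0 or rng.random() < np.exp(-dE / kT):   # Metropolis acceptance
        spins[i, j] = new

print("grains remaining:", np.unique(spins).size)
```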

  3. BigDataScript: a scripting language for data pipelines

    PubMed Central

    Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu

    2015-01-01

    Motivation: The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. Results: We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. Availability and implementation: BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. Contact: pablo.e.cingolani@gmail.com PMID:25189778

  4. Operation Windshield and the simplification of emergency management.

    PubMed

    Andrews, Michael

    2016-01-01

    Large, complex, multi-stakeholder exercises are the culmination of years of gradual progression through a comprehensive training and exercise programme. Exercises intended to validate training, refine procedures and test processes initially tested in isolation are combined to ensure seamless response and coordination during actual crises. The challenges of integrating timely and accurate situational awareness from an array of sources, including response agencies, municipal departments, partner agencies and the public, on an ever-growing range of media platforms, increase information management complexity in emergencies. Considering that many municipal emergency operations centre roles are filled by staff whose day jobs have little to do with crisis management, there is a need to simplify emergency management and make it more intuitive. North Shore Emergency Management has accepted the challenge of making emergency management less onerous to occasional practitioners through a series of initiatives aimed to build competence and confidence by making processes easier to use as well as by introducing technical tools that can simplify processes and enhance efficiencies. These efforts culminated in the full-scale earthquake exercise, Operation Windshield, which preceded the 2015 Emergency Preparedness and Business Continuity Conference in Vancouver, British Columbia.

  5. Environmental hazard mapping using GIS and AHP - A case study of Dong Trieu District in Quang Ninh Province, Vietnam

    NASA Astrophysics Data System (ADS)

    Anh, N. K.; Phonekeo, V.; My, V. C.; Duong, N. D.; Dat, P. T.

    2014-02-01

    In recent years, the Vietnamese economy has grown rapidly, causing a serious decline in environmental quality, especially in industrial and mining areas. This poses an enormous threat to socially sustainable development and to human health. Environmental quality assessment and protection are complex and dynamic processes, since they involve spatial information from multi-sector, multi-region and multi-field sources and need complicated data processing. Therefore, an effective environmental protection information system is needed, in which the relevant factors hidden in complex relationships become clear and visible. In this paper, the authors present the methodology used to generate environmental hazard maps by integrating the Analytic Hierarchy Process (AHP) with a Geographical Information System (GIS). We present the results obtained for the study area in Dong Trieu District. This research has contributed an overall perspective on environmental quality and identified degraded areas where the administration urgently needs to establish appropriate policies to improve and protect the environment.
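
    As a brief illustration of the AHP step in such a workflow, the sketch below derives criterion weights from a single pairwise comparison matrix and checks its consistency ratio; the criteria and judgments are hypothetical and not those used in the Dong Trieu study.

```python
# AHP priority vector and consistency ratio (hypothetical judgments).
import numpy as np

# Pairwise comparisons (Saaty 1-9 scale) for three illustrative criteria,
# e.g. distance to mines, land cover, population density.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority vector

lam_max = eigvals.real[k]
n = A.shape[0]
ci = (lam_max - n) / (n - 1)                  # consistency index
ri = 0.58                                     # Saaty random index for n = 3
print("weights:", np.round(weights, 3), "consistency ratio:", round(ci / ri, 3))
```

    A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgments are acceptably coherent before the weights are applied to the GIS layers.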

  6. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  7. Characterization of the NTPR and BD1 interacting domains of the human PICH-BEND3 complex.

    PubMed

    Pitchai, Ganesha P; Hickson, Ian D; Streicher, Werner; Montoya, Guillermo; Mesa, Pablo

    2016-08-01

    Chromosome integrity depends on DNA structure-specific processing complexes that resolve DNA entanglement between sister chromatids. If left unresolved, these entanglements can generate either chromatin bridging or ultrafine DNA bridging in the anaphase of mitosis. These bridge structures are defined by the presence of the PICH protein, which interacts with the BEND3 protein in mitosis. To obtain structural insights into PICH-BEND3 complex formation at the atomic level, their respective NTPR and BD1 domains were cloned, overexpressed and crystallized using 1.56 M ammonium sulfate as a precipitant at pH 7.0. The protein complex readily formed large hexagonal crystals belonging to space group P6122, with unit-cell parameters a = b = 47.28, c = 431.58 Å and with one heterodimer in the asymmetric unit. A complete multiwavelength anomalous dispersion (MAD) data set extending to 2.2 Å resolution was collected from a selenomethionine-labelled crystal at the Swiss Light Source.

  8. Rare earth element abundances in rocks and minerals from the Fiskenaesset Complex, West Greenland. [comparison with lunar anorthosites]

    NASA Technical Reports Server (NTRS)

    Henderson, P.; Fishlock, S. J.; Laul, J. C.; Cooper, T. D.; Conard, R. L.; Boynton, W. V.; Schmitt, R. A.

    1976-01-01

    The paper reports activation-analysis determinations of rare-earth-element (REE) and other trace-element concentrations in selected rocks, plagioclase, and mafic separates from the Fiskenaesset Complex. The REE abundances are found to be very low and atypical in comparison with other terrestrial anorthosites. The plagioclases are shown to be characterized by a deficiency in heavy RE elements relative to light ones and a positive Eu anomaly, while the mafic separates are enriched in heavy rare earths and have no Eu anomaly, except in one sample. It is found that the bulk and trace-element abundances of the plagioclases are similar to those observed in some lunar anorthosites, but the degree of Eu anomaly is less in the plagioclases. The data are taken as confirmation of the idea that fractionation processes were involved in the origin of the Complex, and it is concluded that the Complex may have been produced from a magma generated by partial melting of a garnet-bearing source.

  9. TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis

    PubMed Central

    Frazier, Zachary; Xu, Min; Alber, Frank

    2017-01-01

    Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in a close-to-native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample due to variations in the structure and composition of a complex in its in situ form, or because particles are a mixture of different complexes. In this case subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open-source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service, permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576

  10. Ammonia formation by a thiolate-bridged diiron amide complex as a nitrogenase mimic

    NASA Astrophysics Data System (ADS)

    Li, Yang; Li, Ying; Wang, Baomin; Luo, Yi; Yang, Dawei; Tong, Peng; Zhao, Jinfeng; Luo, Lun; Zhou, Yuhan; Chen, Si; Cheng, Fang; Qu, Jingping

    2013-04-01

    Although nitrogenase enzymes routinely convert molecular nitrogen into ammonia under ambient temperature and pressure, this reaction is currently carried out industrially using the Haber-Bosch process, which requires extreme temperatures and pressures to activate dinitrogen. Biological fixation occurs through dinitrogen and reduced NxHy species at multi-iron centres of compounds bearing sulfur ligands, but it is difficult to elucidate the mechanistic details and to obtain stable model intermediate complexes for further investigation. Metal-based synthetic models have been applied to reveal partial details, although most models involve a mononuclear system. Here, we report a diiron complex bridged by a bidentate thiolate ligand that can accommodate HN=NH. Following reductions and protonations, HN=NH is converted to NH3 through pivotal intermediate complexes bridged by N2H3- and NH2- species. Notably, the final ammonia release was effected with water as the proton source. Density functional theory calculations were carried out, and a pathway of biological nitrogen fixation is proposed.

  11. detectIR: a novel program for detecting perfect and imperfect inverted repeats using complex numbers and vector calculation.

    PubMed

    Ye, Congting; Ji, Guoli; Li, Lei; Liang, Chun

    2014-01-01

    Inverted repeats are present in abundance in both prokaryotic and eukaryotic genomes and can form DNA secondary structures--hairpins and cruciforms that are involved in many important biological processes. Bioinformatics tools for efficient and accurate detection of inverted repeats are desirable, because existing tools are often less accurate, time-consuming, and sometimes incapable of dealing with genome-scale input data. Here, we present a MATLAB-based program called detectIR for perfect and imperfect inverted repeat detection that utilizes complex numbers and vector calculation and allows genome-scale data inputs. A novel algorithm is adopted in detectIR to convert the conventional sequence string comparison in inverted repeat detection into vector calculation of complex numbers, allowing non-complementary pairs (mismatches) in the pairing stem and a non-palindromic spacer (loop or gaps) in the middle of inverted repeats. Compared with existing popular tools, our program performs with significantly higher accuracy and efficiency. Using genome sequence data from HIV-1, Arabidopsis thaliana, Homo sapiens and Zea mays for comparison, detectIR can find many inverted repeats missed by existing tools, whose outputs often contain invalid cases. detectIR is open source and its source code is freely available at: https://sourceforge.net/projects/detectir.
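
    The sketch below illustrates the complex-number encoding idea in a few lines of Python (the published detectIR is a MATLAB program and its exact encoding may differ): complementary bases are mapped to values that sum to zero, so a candidate stem can be tested with one vectorized addition and mismatches counted as the non-zero entries.

```python
# Complex-number test for inverted repeats (illustrative encoding, not detectIR's).
import numpy as np

ENC = {"A": 1 + 0j, "T": -1 + 0j, "C": 0 + 1j, "G": 0 - 1j}

def encode(seq):
    return np.array([ENC[b] for b in seq], dtype=complex)

def is_inverted_repeat(seq, stem, spacer, max_mismatch=0):
    """Check whether `seq` starts with a stem that pairs with the reversed
    region following the spacer, allowing up to `max_mismatch` mismatches."""
    x = encode(seq)
    left = x[:stem]
    right = x[stem + spacer: stem + spacer + stem]
    # Complementary pairs (A/T, C/G) sum to zero; anything else is a mismatch.
    mismatches = np.count_nonzero(left + right[::-1])
    return mismatches <= max_mismatch

# Perfect inverted repeat: the palindromic EcoRI site on both sides of a loop.
print(is_inverted_repeat("GAATTC" + "AAA" + "GAATTC", stem=6, spacer=3))  # True
```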

  12. Updates in metabolomics tools and resources: 2014-2015.

    PubMed

    Misra, Biswapriya B; van der Hooft, Justin J J

    2016-01-01

    Data processing and interpretation represent the most challenging and time-consuming steps in high-throughput metabolomic experiments, regardless of the analytical platforms (MS or NMR spectroscopy based) used for data acquisition. Improved machinery in metabolomics generates increasingly complex datasets that create the need for more and better processing and analysis software and in silico approaches to understand the resulting data. However, a comprehensive source of information describing the utility of the most recently developed and released metabolomics resources--in the form of tools, software, and databases--is currently lacking. Thus, here we provide an overview of freely available, open-source tools, algorithms, and frameworks to make both upcoming and established metabolomics researchers aware of recent developments, in an attempt to advance and facilitate data processing workflows in their metabolomics research. The major topics include tools and resources for data processing, data annotation, and data visualization in MS- and NMR-based metabolomics. Most tools described in this review are dedicated to untargeted metabolomics workflows; however, some more specialized tools are described as well. All tools and resources described, including their analytical and computational platform dependencies, are summarized in an overview table. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. The swiss army knife of job submission tools: grid-control

    NASA Astrophysics Data System (ADS)

    Stober, F.; Fischer, M.; Schleper, P.; Stadie, H.; Garbers, C.; Lange, J.; Kovalchuk, N.

    2017-10-01

    grid-control is a lightweight and highly portable open source submission tool that supports all common workflows in high energy physics (HEP). It has been used by a sizeable number of HEP analyses to process tasks that sometimes consist of up to 100k jobs. grid-control is built around a powerful plugin and configuration system, that allows users to easily specify all aspects of the desired workflow. Job submission to a wide range of local or remote batch systems or grid middleware is supported. Tasks can be conveniently specified through the parameter space that will be processed, which can consist of any number of variables and data sources with complex dependencies on each other. Dataset information is processed through a configurable pipeline of dataset filters, partition plugins and partition filters. The partition plugins can take the number of files, size of the work units, metadata or combinations thereof into account. All changes to the input datasets or variables are propagated through the processing pipeline and can transparently trigger adjustments to the parameter space and the job submission. While the core functionality is completely experiment independent, full integration with the CMS computing environment is provided by a small set of plugins.

  14. Aeolian process of the dried-up riverbeds of the Hexi Corridor, China: a wind tunnel experiment.

    PubMed

    Zhang, Caixia; Wang, Xunming; Dong, Zhibao; Hua, Ting

    2017-08-01

    Wind tunnel studies, which remain limited, are an important tool to understand the aeolian processes of dried-up riverbeds. The particle size, chemical composition, and the mineral contents of sediments arising from the dried river beds are poorly understood. Dried-up riverbeds cover a wide area in the Hexi Corridor, China, and comprise a complex synthesis of different land surfaces, including aeolian deposits, pavement surfaces, and Takyr crust. The results of the present wind tunnel experiment suggest that aeolian transport from the dried-up riverbeds of the Hexi Corridor ranges from 0 to 177.04 g/m²/min and that dry riverbeds could be one of the main sources of dust emissions in this region. As soon as the wind velocity reaches 16 m/s and assuming that there are abundant source materials available, aeolian transport intensity increases rapidly. The dried-up riverbed sediment and the associated aeolian transported material were composed mainly of fine and medium sands. However, the transported samples were coarser than the bed samples, because of the sorting effect of the aeolian processes on the sediment. The aeolian processes also led to regional elemental migration and mineral composition variations.

  15. The International Safety Framework for nuclear power source applications in outer space-Useful and substantial guidance

    NASA Astrophysics Data System (ADS)

    Summerer, L.; Wilcox, R. E.; Bechtel, R.; Harbison, S.

    2015-06-01

    In 2009, the International Safety Framework for Nuclear Power Source Applications in Outer Space was adopted, following a multi-year process that involved all major spacefaring nations under the auspices of a partnership between the UN Committee on the Peaceful Uses of Outer Space and the International Atomic Energy Agency. The Safety Framework reflects an international consensus on best practices to achieve safety. Following the 1992 UN Principles Relevant to the Use of Nuclear Power Sources in Outer Space, it is the second attempt by the international community to draft guidance promoting the safety of applications of nuclear power sources in space missions. NPS applications in space have unique safety considerations compared with terrestrial applications. Mission launch and outer space operational requirements impose size, mass and other space environment limitations not present for many terrestrial nuclear facilities. Potential accident conditions could expose nuclear power sources to extreme physical conditions. The Safety Framework is structured to provide guidance for both the programmatic and technical aspects of safety. In addition to sections containing specific guidance for governments and for management, it contains technical guidance pertinent to the design, development and all mission phases of space NPS applications. All sections of the Safety Framework contain elements directly relevant to engineers and space mission designers for missions involving space nuclear power sources. The challenge for organisations and engineers involved in the design and development processes of space nuclear power sources and applications is to implement the guidance provided in the Safety Framework by integrating it into the existing standard space mission infrastructure of design, development and operational requirements, practices and processes. This adds complexity to the standard space mission and launch approval processes. The Safety Framework is deliberately generic so as to remain independent of technological progress, of national organisational setups and of space mission types. Implementing its guidance therefore leaves room for interpretation and adaptation. Relying on reported practices, we analyse the guidance particularly relevant to engineers and space mission designers.

  16. Differential roles of NADPH oxidases in vascular physiology and pathophysiology

    PubMed Central

    Amanso, Angelica M.; Griendling, Kathy K.

    2012-01-01

    Reactive oxygen species (ROS) are produced by all vascular cells and regulate the major physiological functions of the vasculature. Production and removal of ROS are tightly controlled and occur in discrete subcellular locations, allowing for specific, compartmentalized signaling. Among the many sources of ROS in the vessel wall, NADPH oxidases are implicated in physiological functions such as control of vasomotor tone, regulation of extracellular matrix and phenotypic modulation of vascular smooth muscle cells. They are involved in the response to injury, whether as an oxygen sensor during hypoxia, as a regulator of protein processing, as an angiogenic stimulus, or as a mechanism of wound healing. These enzymes have also been linked to processes leading to disease development, including migration, proliferation, hypertrophy, apoptosis and autophagy. As a result, NADPH oxidases participate in atherogenesis, systemic and pulmonary hypertension and diabetic vascular disease. The role of ROS in each of these processes and diseases is complex, and a more full understanding of the sources, targets, cell-specific responses and counterbalancing mechanisms is critical for the rational development of future therapeutics. PMID:22202108

  17. A demonstration of real-time connected element interferometry for spacecraft navigation

    NASA Technical Reports Server (NTRS)

    Edwards, C.; Rogstad, D.; Fort, D.; White, L.; Iijima, B.

    1992-01-01

    Connected element interferometry is a technique of observing a celestial radio source at two spatially separated antennas, and then interfering the received signals to extract the relative phase of the signal at the two antennas. The high precision of the resulting phase delay data type can provide an accurate determination of the angular position of the radio source relative to the baseline vector between the two stations. A connected element interferometer is developed on a 21-km baseline between two antennas at the Deep Space Network's Goldstone, CA, tracking complex. Fiber optic links are used to transmit the data at 112 Mbit/sec to a common site for processing. A real-time correlator is implemented to process these data. The architecture of the system is described, and observational data are presented to characterize the potential performance of such a system. The real-time processing capability offers potential advantages in terms of increased reliability and improved delivery of navigational data for time-critical operations. Angular accuracies of 50-100 nrad are achievable on this baseline.
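
    A back-of-the-envelope sketch of why millimetre-level differential delay precision is needed for the quoted angular accuracies on this baseline (small-angle approximation with a geometry factor of order one; assumed numbers, not the system's actual error budget):

```python
# Relating differential phase-delay precision to angular accuracy on a 21-km baseline.
C = 299_792_458.0     # speed of light, m/s
baseline = 21e3       # m, Goldstone intra-complex baseline

for delta_theta in (50e-9, 100e-9):              # target angular accuracies, rad
    delay_m = baseline * delta_theta             # required delay precision, metres of path
    print(f"{delta_theta*1e9:.0f} nrad -> {delay_m*1e3:.2f} mm "
          f"({delay_m / C * 1e12:.1f} ps) of differential delay")
```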

  18. The goldstone real-time connected element interferometer

    NASA Technical Reports Server (NTRS)

    Edwards, C., Jr.; Rogstad, D.; Fort, D.; White, L.; Iijima, B.

    1992-01-01

    Connected element interferometry (CEI) is a technique of observing a celestial radio source at two spatially separated antennas and then interfering the received signals to extract the relative phase of the signal at the two antennas. The high precision of the resulting phase delay data type can provide an accurate determination of the angular position of the radio source relative to the baseline vector between the two stations. This article describes a recently developed connected element interferometer on a 21-km baseline between two antennas at the Deep Space Network's Goldstone, California, tracking complex. Fiber-optic links are used to transmit the data to a common site for processing. The system incorporates a real-time correlator to process these data in real time. The architecture of the system is described, and observational data are presented to characterize the potential performance of such a system. The real-time processing capability offers potential advantages in terms of increased reliability and improved delivery of navigational data for time-critical operations. Angular accuracies of 50-100 nrad are achievable on this baseline.

  19. Origin of sulfur for elemental sulfur concentration in salt dome cap rocks, Gulf Coast Basin, USA

    NASA Astrophysics Data System (ADS)

    Hill, J. M.; Kyle, R.; Loyd, S. J.

    2017-12-01

    Calcite cap rocks of the Boling and Main Pass salt domes contain large elemental sulfur accumulations. Isotopic and petrographic data indicate complex histories of cap rock paragenesis for both domes. Whereas paragenetic complexity is in part due to the open nature of these hydrodynamic systems, a comprehensive understanding of elemental sulfur sources and concentration mechanisms is lacking. Large ranges in traditional sulfur isotope compositions (δ34S) among oxidized and reduced sulfur-bearing phases have led some to infer that microbial sulfate reduction and/or influx of sulfide-rich formation waters occurred during calcite cap rock formation. Ultimately, traditional sulfur isotope analyses alone cannot distinguish between local microbial and exogenous sulfur sources. Recently, multiple sulfur isotope (32S, 33S, 34S, 36S) studies have revealed small but measurable differences in the mass-dependent behavior of microbial and abiogenic processes. To distinguish between the proposed sulfur sources, multiple-sulfur-isotope analyses have been performed on native sulfur from the Boling and Main Pass cap rocks. Similarities or deviations from equilibrium relationships indicate which pathways were responsible for native sulfur precipitation. Pathway determination provides insight into Gulf Coast cap rock development and potentially highlights the conditions that led to anomalous sulfur enrichment in Boling and Main Pass Domes.

  20. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low-frequency energy and in events with a short-term precursor. We model the source time function of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  1. Carbon isotopes of dissolved inorganic carbon reflect utilization of different carbon sources by microbial communities in two limestone aquifer assemblages

    NASA Astrophysics Data System (ADS)

    Nowak, Martin E.; Schwab, Valérie F.; Lazar, Cassandre S.; Behrendt, Thomas; Kohlhepp, Bernd; Totsche, Kai Uwe; Küsel, Kirsten; Trumbore, Susan E.

    2017-08-01

    Isotopes of dissolved inorganic carbon (DIC) are used to indicate both transit times and biogeochemical evolution of groundwaters. These signals can be complicated in carbonate aquifers, as both abiotic (i.e., carbonate equilibria) and biotic factors influence the δ13C and 14C of DIC. We applied a novel graphical method for tracking changes in the δ13C and 14C of DIC in two distinct aquifer complexes identified in the Hainich Critical Zone Exploratory (CZE), a platform to study how water transport links surface and shallow groundwaters in limestone and marlstone rocks in central Germany. For more quantitative estimates of contributions of different biotic and abiotic carbon sources to the DIC pool, we used the NETPATH geochemical modeling program, which accounts for changes in dissolved ions in addition to C isotopes. Although water residence times in the Hainich CZE aquifers based on hydrogeology are relatively short (years or less), DIC isotopes in the shallow, mostly anoxic, aquifer assemblage (HTU) were depleted in 14C compared to a deeper, oxic, aquifer complex (HTL). Carbon isotopes and chemical changes in the deeper HTL wells could be explained by interaction of recharge waters equilibrated with post-bomb 14C sources with carbonates. However, oxygen depletion and δ13C and 14C values of DIC below those expected from the processes of carbonate equilibrium alone indicate considerably different biogeochemical evolution of waters in the upper aquifer assemblage (HTU wells). Changes in 14C and 13C in the upper aquifer complexes result from a number of biotic and abiotic processes, including oxidation of 14C-depleted OM derived from recycled microbial carbon and sedimentary organic matter as well as water-rock interactions. The microbial pathways inferred from DIC isotope shifts and changes in water chemistry in the HTU wells were supported by comparison with in situ microbial community structure based on 16S rRNA analyses. Our findings demonstrate the large variation in the importance of biotic as well as abiotic controls on 13C and 14C of DIC in closely related aquifer assemblages. Further, they support the importance of subsurface-derived carbon sources like DIC for chemolithoautotrophic microorganisms as well as rock-derived organic matter for supporting heterotrophic groundwater microbial communities and indicate that even shallow aquifers have microbial communities that use a variety of subsurface-derived carbon sources.
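
    The source-apportionment reasoning above ultimately rests on simple isotope mass balance. The sketch below shows only that bookkeeping for a two-end-member case; it is not the NETPATH model used in the study, and the end-member delta-13C values are illustrative assumptions:

        # Two-end-member mass balance for DIC carbon isotopes. This is only the
        # bookkeeping behind source-apportionment arguments; it is not NETPATH,
        # and the end-member values below are illustrative assumptions.
        def mixing_delta(f_a: float, delta_a: float, delta_b: float) -> float:
            """delta-13C of a mixture with carbon fraction f_a from end member A."""
            return f_a * delta_a + (1.0 - f_a) * delta_b

        # e.g. soil-respired CO2 (~ -25 permil) vs. carbonate dissolution (~ 0 permil)
        for f in (0.25, 0.5, 0.75):
            print(f, mixing_delta(f, -25.0, 0.0))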

  2. Two scales of inflation at Lastarria-Cordon del Azufre volcanic complex, central Andes, revealed from ASAR-ENVISAT interferometric data

    NASA Astrophysics Data System (ADS)

    Froger, J.-L.; Remy, D.; Bonvalot, S.; Legrand, D.

    2007-03-01

    ASAR-ENVISAT Interferometric Synthetic Aperture Radar (InSAR) data collected over the Lastarria-Cordon del Azufre complex (Chile-Argentina) between March 2003 and May 2005 show the persistence of the large-wavelength ground inflation revealed by Pritchard and Simons in 2002 from the analysis of ERS InSAR data [Nature 418 (2002) 167-170]. After reducing the tropospheric contribution in the interferograms using a combination of data network adjustment and analysis of MODIS images, we produced an accurate interferometric time series showing the 2-yr-long temporal evolution of the ground displacement patterns. Two distinct inflating signals are detected. The main signal covers an elliptical area with a 45 km NNE-SSW major axis and a 37 km minor axis. It is correlated with a regional topographic dome. We estimated its maximum inflation rate at ~2.5 cm yr^-1. We inverted the InSAR data for a range of source geometries (spherical, prolate ellipsoids, penny-shaped cracks). The inferred source parameters for the 2003-2005 period are consistent with an over-pressured reservoir at shallow to intermediate crustal depths (7-15 km), with an average volumetric rate of inflation of about 14 × 10^6 m^3 yr^-1. In addition to this main signal, a new feature highlighted by the ASAR data is a short-wavelength inflation (6 km wide) at the location of Lastarria volcano on the northern margin of the large-wavelength signal. We explain this short-wavelength signal by a spherical over-pressured source lying 1000 m below the summit of Lastarria volcano. We estimate the average volumetric rate of inflation during the observation period to be ~35 × 10^3 m^3 yr^-1. It is remarkable that the volumetric variations for the large and small inflations exhibit the same evolution during the 2003-2005 period, suggesting that both processes could be related. On the basis of the inversion results and of arguments provided by field evidence and a morpho-structural analysis of the Digital Elevation Model of the area, we propose that the deep source has a magmatic origin while the shallow source is most likely related to hydrothermal fluids. In our interpretation, the ongoing deformation processes observed at the Lastarria-Cordon del Azufre volcanic complex could represent an evolving pre-caldera silicic system. Further field geological and geophysical investigations will be required to confirm these hypotheses and refine the proposed model, which is mostly based on satellite observations.
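
    The inversion itself is beyond the scope of an abstract, but the simplest of the source geometries mentioned above, a spherical point (Mogi) source, has a closed-form surface-displacement expression that is easy to sketch. The Poisson ratio of 0.25 and the example numbers are assumptions; this is not the authors' inversion code:

        # Forward model for surface uplift above a spherical (Mogi) point source.
        # A sketch only: the study inverted for several geometries; this is the
        # simplest of them, and nu = 0.25 plus the example numbers are assumptions.
        import numpy as np

        def mogi_uplift(r_m, depth_m, dV_m3, nu=0.25):
            """Vertical surface displacement (m) at horizontal distance r from a
            point pressure source at the given depth producing volume change dV."""
            return (1.0 - nu) / np.pi * dV_m3 * depth_m / (depth_m**2 + r_m**2) ** 1.5

        # Order-of-magnitude example: ~14e6 m^3/yr at ~10 km depth gives
        # cm-per-year-scale uplift above the source centre.
        r = np.linspace(0.0, 40e3, 5)
        print(mogi_uplift(r, depth_m=10e3, dV_m3=14e6))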

  3. Isolation and characterization of a hydrocarbonoclastic bacterial enrichment from total petroleum hydrocarbon contaminated sediments: potential candidates for bioaugmentation in bio-based processes.

    PubMed

    Di Gregorio, Simona; Siracusa, Giovanna; Becarelli, Simone; Mariotti, Lorenzo; Gentini, Alessandro; Lorenzi, Roberto

    2016-06-01

    Seven new hydrocarbonoclastic bacterial strains were isolated from dredged sediments of a river estuary in Italy. The sediments had been contaminated by shipyard activities for decades, mainly ascribable to the use of diesel oil as the fuel for recreational and commercial watercraft. The bacterial isolates were able to utilize diesel oil as a sole carbon source. Their metabolic capacities were evaluated by GC-MS analysis, with reference to the depletion of the normal and branched alkanes, the nC18 fatty acid methyl ester and the unresolved complex mixture of organic compounds. They were taxonomically identified as different species of Stenotrophomonas and Pseudomonas spp. by the combination of amplified ribosomal DNA restriction analysis (ARDRA) and repetitive sequence-based PCR (REP-PCR) analysis. The metabolic activities of interest were analyzed both for the single bacterial strains and for their combination as a multispecies bacterial system. After 6 days of incubation in mineral medium with diesel oil as the sole carbon source, the Stenotrophomonas sp. M1 strain depleted 43-46 % of the Cn-alkanes from C28 up to C30, 70 % of the nC18 fatty acid methyl ester and 46 % of the unresolved complex mixture of organic compounds. The Pseudomonas sp. NM1 strain depleted 76 % of the nC18 fatty acid methyl ester and 50 % of the unresolved complex mixture of organic compounds. The bacterial multispecies system was able to completely deplete the Cn-alkanes from C28 up to C30 and to deplete 95 % of the unresolved complex mixture of organic compounds. The isolates, both as single strains and as a bacterial multispecies system, were proposed as candidates for bioaugmentation in bio-based processes for the decontamination of dredged sediments.

  4. Synthetic iron complexes as models for natural iron-humic compounds: Synthesis, characterization and algal growth experiments.

    PubMed

    Orlowska, Ewelina; Roller, Alexander; Pignitter, Marc; Jirsa, Franz; Krachler, Regina; Kandioller, Wolfgang; Keppler, Bernhard K

    2017-01-15

    A series of monomeric and dimeric Fe(III) complexes with O,O-, O,N- and O,S-coordination motifs has been prepared and characterized by standard analytical methods in order to elucidate their potential to act as model compounds for aquatic humic acids. Because of the postulated reduction of iron in humic acids and its subsequent uptake by microorganisms, the redox behavior of the models was investigated with cyclic voltammetry. Most of the investigated compounds showed iron reduction potentials accessible to biological reducing agents. Additionally, the observed reduction processes were predominantly irreversible, suggesting that subsequent reactions can take place after reduction of the iron center. The stability of the synthesized complexes in pure water and artificial seawater was also monitored, from 24 h up to 21 days, by means of UV-Vis spectrometry. Several complexes remained stable even after 21 days, showing only partial precipitation, but some showed changes in their UV-Vis spectra already after 24 h, which were connected to protonation/deprotonation processes as well as redox processes and degradation of the complexes. The ability to act as an iron source for primary producers was tested in algal growth experiments with two marine algal species, Chlorella salina and Prymnesium parvum. Some of the compounds showed effects on the algal cultures that are comparable with those of natural humic acids and better than for the samples kept under ideal conditions. These findings help to clarify which functional groups of humic acids could be responsible for the reversible iron binding and transport in aquatic humic substances. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Potential sources of nitrous acid (HONO) and their impacts on ozone: A WRF-Chem study in a polluted subtropical region

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Wang, Tao; Zhang, Qiang; Zheng, Junyu; Xu, Zheng; Lv, Mengyao

    2016-04-01

    Current chemical transport models commonly underpredict the atmospheric concentration of nitrous acid (HONO), which plays an important role in atmospheric chemistry, because some sources are missing or inappropriately represented in the models. In the present study, we parameterized up-to-date HONO sources into a state-of-the-art three-dimensional chemical transport model (Weather Research and Forecasting model coupled with Chemistry: WRF-Chem). These sources included (1) heterogeneous reactions on ground surfaces with the photoenhanced effect on HONO production, (2) photoenhanced reactions on aerosol surfaces, (3) direct vehicle and vessel emissions, (4) potential conversion of NO2 at the ocean surface, and (5) emissions from soil bacteria. The revised WRF-Chem was applied to explore the sources of the high HONO concentrations (0.45-2.71 ppb) observed at a suburban site surrounded by complex land types (artificial land covers, ocean, and forests) in Hong Kong. With the addition of these sources, the revised model substantially reproduced the observed HONO levels. The heterogeneous conversion of NO2 on ground surfaces dominated the HONO sources, contributing about 42% of the observed HONO mixing ratios, with emissions from soil bacteria contributing around 29%, followed by the oceanic source (~9%), photochemical formation via NO and OH (~6%), conversion on aerosol surfaces (~3%), and traffic emissions (~2%). The results suggest that HONO sources in suburban areas can be more complex and diverse than those in urban or rural areas and that bacterial and/or ocean processes need to be considered in HONO production in forested and/or coastal areas. Sensitivity tests showed that the simulated HONO was sensitive to the uptake coefficient of NO2 on the surfaces. Incorporation of the aforementioned HONO sources significantly improved the simulations of ozone, increasing ground-level ozone concentrations by 6-12% over urban areas in Hong Kong and the Pearl River Delta region. This result highlights the importance of accurately representing HONO sources in simulations of secondary pollutants over polluted regions.
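
    As an illustration of how such a heterogeneous source enters a model, the sketch below uses the standard first-order uptake parameterization k = gamma * v_mean * (S/V) / 4. It is not the exact WRF-Chem implementation from the study, and the uptake coefficient, surface-to-volume ratio and 50% HONO yield are illustrative assumptions:

        # Sketch of a first-order heterogeneous NO2 -> HONO conversion rate.
        # Not the WRF-Chem implementation; gamma, S/V and the HONO yield are
        # illustrative assumptions.
        import math

        def mean_molecular_speed(T_K: float, molar_mass_kg: float) -> float:
            """Mean thermal speed (m/s) of a gas molecule."""
            R = 8.314  # J mol^-1 K^-1
            return math.sqrt(8.0 * R * T_K / (math.pi * molar_mass_kg))

        def hono_production_rate(gamma, s_to_v, no2_ppb, T_K=298.0, yield_hono=0.5):
            """HONO production rate (ppb/h): k = gamma * v_mean * (S/V) / 4,
            scaled by the assumed HONO yield of the surface reaction."""
            v = mean_molecular_speed(T_K, 0.046)      # NO2 molar mass ~46 g/mol
            k = gamma * v * s_to_v / 4.0              # s^-1
            return yield_hono * k * no2_ppb * 3600.0  # ppb per hour

        # e.g. gamma ~ 1e-5 on ground surfaces, S/V ~ 0.003 m^-1, 20 ppb NO2
        print(hono_production_rate(gamma=1e-5, s_to_v=3e-3, no2_ppb=20.0))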

  6. Interstellar Ice Chemistry: From Water to Complex Organics

    NASA Astrophysics Data System (ADS)

    Oberg, Karin I.; Fayolle, E.; Linnartz, H.; van Dishoeck, E.; Fillion, J.; Bertin, M.

    2013-06-01

    Molecular cloud cores, protostellar envelopes and protoplanetary disk midplanes are all characterized by freeze-out of atoms and molecules (other than H and H2) onto interstellar dust grains. On the grain surface, atom addition reactions, especially hydrogenation, are efficient, and H2O forms readily from O, CH3OH from CO, and so on. The result is an icy mantle typically dominated by H2O, but also rich in CO2, CO, NH3, CH3OH and CH4. These ices are further processed through interactions with radiation, electrons and energetic particles. Because of the efficiency of the freeze-out process, and the complex chemistry that succeeds it, these icy grain mantles constitute a major reservoir of volatiles during star formation and are also the source of much of the chemical evolution observed in star forming regions. Laboratory experiments allow us to explore how molecules and radicals desorb, dissociate, diffuse and react in ices when exposed to different sources of energy. Changes in ice composition and structure are constrained using infrared spectroscopy and mass spectrometry. By comparing ice desorption, segregation, and chemistry efficiencies under different experimental conditions, we can characterize the basic ice processes, e.g. diffusion of different species, that underpin the observable changes in ice composition and structure. This information can then be used to predict the interstellar ice chemical evolution. I will review some of the key laboratory discoveries on ice chemistry during the past few years and how they have been used to predict and interpret astronomical observations of ice bands and gas-phase molecules associated with ice evaporation. These include measurements of thermal diffusion in and evaporation from ice mixtures, non-thermal diffusion efficiencies (including the recent results on frequency-resolved UV photodesorption), and the expected temperature dependencies of the complex ice chemistry regulated by radical formation and diffusion. Based on these examples, I will argue that the combination of laboratory experiments and observations is crucial to formulate and test hypotheses on key processes that regulate interstellar ice chemistry.

  7. Balancing macronutrient stoichiometry to alleviate eutrophication.

    PubMed

    Stutter, M I; Graeber, D; Evans, C D; Wade, A J; Withers, P J A

    2018-09-01

    Reactive nitrogen (N) and phosphorus (P) inputs to surface waters modify aquatic environments and affect public health and recreation. Source controls dominate eutrophication management, whilst biological regulation of nutrients is largely neglected, although aquatic microbial organisms have huge potential to process nutrients. The stoichiometric ratio of organic carbon (OC) to N to P atoms should modulate heterotrophic pathways of aquatic nutrient processing, as high OC availability favours aquatic microbial processing. Heterotrophic microbial processing removes N by denitrification and captures N and P as organically complexed, less eutrophying forms. With a global data synthesis, we show that the atomic ratios of bioavailable dissolved OC to either N or P in rivers with urban and agricultural land use are often distant from a "microbial optimum". This OC deficiency relative to the high availabilities of N and P likely overwhelms within-river heterotrophic processing. We propose that the capability of streams and rivers to retain N and P may be improved by active stoichiometric rebalancing. Although autotrophic OC production contributes to heterotrophic rates, substantial control of nutrient processing by allochthonous OC is documented for N and is an emerging field of study for P. Hence, rebalancing should be done by reconnecting appropriate OC sources, such as wetlands and riparian forests, that have become disconnected from rivers concurrently with agriculture and urbanisation. However, key knowledge gaps require research prior to the safe implementation of this approach in management: (i) to evaluate system responses to catchment inputs of dissolved OC forms and amounts relative to internal production of autotrophic dissolved OC and aquatic and terrestrial particulate OC, and (ii) to evaluate risk factors in anoxia-mediated P desorption under elevated OC scenarios. Still, we find that stoichiometric rebalancing through reconnection of beneficial landscape OC sources has considerable potential for river management to alleviate eutrophication and improve water quality and aquatic ecosystem health, when it augments nutrient source control. Copyright © 2018 Elsevier B.V. All rights reserved.
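
    The stoichiometric argument boils down to converting measured concentrations into atomic ratios and comparing them with a reference ratio. The sketch below shows only that conversion; the concentrations are illustrative assumptions, and the "microbial optimum" ratio itself is defined in the study, not here:

        # Convert measured concentrations to an OC:N:P atomic ratio.
        # The concentrations are illustrative assumptions; the "microbial
        # optimum" reference ratio is not reproduced here.
        ATOMIC_MASS = {"C": 12.011, "N": 14.007, "P": 30.974}

        def molar_ratio(doc_mg_l: float, n_mg_l: float, p_mg_l: float):
            """Return the OC:N:P atomic ratio normalised to P = 1."""
            c = doc_mg_l / ATOMIC_MASS["C"]
            n = n_mg_l / ATOMIC_MASS["N"]
            p = p_mg_l / ATOMIC_MASS["P"]
            return c / p, n / p, 1.0

        # e.g. an OC-poor agricultural stream: 2 mg/L DOC, 5 mg/L N, 0.1 mg/L P
        print(molar_ratio(2.0, 5.0, 0.1))   # heavily N- and P-rich relative to OC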

  8. A power-efficient communication system between brain-implantable devices and external computers.

    PubMed

    Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui

    2007-01-01

    In this paper, we propose a power-efficient communication system for linking a brain-implantable device to an external system. For battery-powered implantable devices, the processor and transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system because of their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding: the raw data, which are highly correlated, are transmitted directly. At the receiver, a Markov chain source correlation model is used to approximate and capture the correlation of the raw data. A turbo-iterative receiver algorithm is designed that connects the Markov chain source model to the LDGM decoder. Simulation results show that the proposed system can save 1 to 2.5 dB of transmission power.
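
    A toy sketch of the transmit-side idea follows: a correlated (first-order Markov) binary source is sent without explicit source coding, protected only by a sparse low-density generator matrix code. The matrix size, row weight and source transition probability are illustrative assumptions, not the parameters used by the authors:

        # Toy transmit side: correlated raw bits, systematic LDGM encoding.
        # Matrix size, row weight and transition probability are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def markov_source(n, p_stay=0.95):
            """Correlated binary sequence: each bit repeats the previous one
            with probability p_stay."""
            flips = (rng.random(n) > p_stay).astype(np.uint8)
            bits = np.empty(n, dtype=np.uint8)
            bits[0] = rng.integers(0, 2)
            for i in range(1, n):
                bits[i] = bits[i - 1] ^ flips[i]
            return bits

        def ldgm_parity_matrix(k, m, ones_per_row=3):
            """Sparse k x m parity part of a systematic generator G = [I | P]."""
            P = np.zeros((k, m), dtype=np.uint8)
            for row in P:
                row[rng.choice(m, size=ones_per_row, replace=False)] = 1
            return P

        def encode(info, P):
            """Systematic LDGM encoding over GF(2): codeword = [info | info.P mod 2]."""
            parity = (info.astype(int) @ P.astype(int)) % 2
            return np.concatenate([info, parity.astype(np.uint8)])

        info = markov_source(1000)
        P = ldgm_parity_matrix(k=1000, m=500)
        print(len(encode(info, P)), "coded bits for", len(info), "raw source bits")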

  9. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategies. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method that makes it possible to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes only about twice the time of a point-source evaluation, which argues for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.
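
    For orientation, the sketch below illustrates what the quadrupole approximation is for the simplest case of a single point lens: the uniform-disk-averaged magnification is approximately A(0) + (rho^2/8) * Laplacian(A). It deliberately uses several point-source evaluations (finite differences and a brute-force disk average), so it does not reproduce the single-evaluation method of the paper; the lens model and numbers are assumptions:

        # Quadrupole (finite-source) approximation for a single point lens,
        # checked against a brute-force disk average. A naive illustration only.
        import numpy as np

        def point_source_mag(x, y):
            """Point-lens point-source magnification at source position (x, y)."""
            u = np.hypot(x, y)
            return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

        def quadrupole_mag(x0, y0, rho, h=1e-4):
            """A0 + (rho^2/8) * Laplacian(A), Laplacian by central differences."""
            a0 = point_source_mag(x0, y0)
            lap = (point_source_mag(x0 + h, y0) + point_source_mag(x0 - h, y0) +
                   point_source_mag(x0, y0 + h) + point_source_mag(x0, y0 - h) -
                   4.0 * a0) / h**2
            return a0 + rho**2 / 8.0 * lap

        def disk_average_mag(x0, y0, rho, n_r=200, n_t=200):
            """Brute-force average over a uniform-brightness source disk."""
            r = np.sqrt(np.linspace(0.0, 1.0, n_r)) * rho   # uniform in area
            t = np.linspace(0.0, 2.0 * np.pi, n_t, endpoint=False)
            R, T = np.meshgrid(r, t)
            return point_source_mag(x0 + R * np.cos(T), y0 + R * np.sin(T)).mean()

        x0, y0, rho = 0.3, 0.0, 0.01
        print(quadrupole_mag(x0, y0, rho), disk_average_mag(x0, y0, rho))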

  10. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of an ordinary auditory-filtering-based broadband MUSIC method, and then propose a new broadband MUSIC algorithm that uses gammatone auditory filtering with frequency-component selection control and detection of the ascending segment of the direct sound component. The proposed algorithm restricts the frequency components to the band of interest in the multichannel bandpass filtering stage. Detection of the direct sound component of the source is also proposed to suppress room reverberation interference; its merits are fast calculation and the avoidance of more complex de-reverberation processing. In addition, the pseudospectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. In both simulations and experiments in a real reverberant room, the proposed method performs well. Dynamic multiple sound source localization experiments indicate that the average absolute azimuth error of the proposed algorithm is smaller and that its histogram result has higher angular resolution.
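
    As a reference point for the broadband, gammatone-filtered variant described above, the sketch below implements plain narrowband MUSIC for a uniform linear array. The gammatone filterbank, direct-sound-segment detection and per-channel amplitude weighting of the proposed method are not reproduced, and the array geometry and frequency are illustrative assumptions:

        # Plain narrowband MUSIC pseudospectrum for a uniform linear array;
        # geometry, frequency and the synthetic scene are assumptions.
        import numpy as np

        def steering_vector(theta_rad, n_mics, d_m, f_hz, c=343.0):
            """Far-field steering vector for a uniform linear array."""
            k = 2.0 * np.pi * f_hz / c
            return np.exp(-1j * k * d_m * np.arange(n_mics) * np.sin(theta_rad))

        def music_spectrum(snapshots, n_sources, d_m, f_hz, thetas_rad):
            """snapshots: (n_mics, n_frames) complex STFT bins at frequency f_hz."""
            R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # covariance
            _, eigvecs = np.linalg.eigh(R)                            # ascending
            En = eigvecs[:, : snapshots.shape[0] - n_sources]         # noise subspace
            p = []
            for th in thetas_rad:
                a = steering_vector(th, snapshots.shape[0], d_m, f_hz)
                p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
            return np.array(p)

        # Synthetic example: one source at +20 degrees, 8 mics spaced 4 cm, 2 kHz.
        rng = np.random.default_rng(1)
        a_true = steering_vector(np.deg2rad(20.0), 8, 0.04, 2000.0)
        s = rng.standard_normal(400) + 1j * rng.standard_normal(400)
        X = np.outer(a_true, s) + 0.1 * (rng.standard_normal((8, 400)) +
                                         1j * rng.standard_normal((8, 400)))
        grid = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
        est = grid[np.argmax(music_spectrum(X, 1, 0.04, 2000.0, grid))]
        print(np.rad2deg(est))   # should peak near 20 degrees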

  11. Boninite-like intraplate magmas from Manihiki Plateau require ultra-depleted and enriched source components

    PubMed Central

    Golowin, Roman; Portnyagin, Maxim; Hoernle, Kaj; Hauff, Folkmar; Gurenko, Andrey; Garbe-Schönberg, Dieter; Werner, Reinhard; Turner, Simon

    2017-01-01

    The Ontong Java and Manihiki oceanic plateaus are believed to have formed through high-degree melting of a mantle plume head. Boninite-like, low-Ti basement rocks at Manihiki, however, imply a more complex magma genesis compared with Ontong Java basement lavas that can be generated by ∼30% melting of a primitive mantle source. Here we show that the trace element and isotope compositions of low-Ti Manihiki rocks can best be explained by re-melting of an ultra-depleted source (possibly a common mantle component in the Ontong Java and Manihiki plume sources) re-enriched by ≤1% of an ocean-island-basalt-like melt component. Unlike boninites formed via hydrous flux melting of refractory mantle at subduction zones, these boninite-like intraplate rocks formed through adiabatic decompression melting of refractory plume material that has been metasomatized by ocean-island-basalt-like melts. Our results suggest that caution is required before assuming all Archaean boninites were formed in association with subduction processes. PMID:28181497

  12. Surface production dominating Cs-free H- ion source for high intensity and high energy proton accelerators

    NASA Astrophysics Data System (ADS)

    Ueno, Akira; Ikegami, Kiyoshi; Kondo, Yasuhiro

    2004-05-01

    A Cs-free negative hydrogen (H-) ion source driven by a pulsed arc plasma with a LaB6 filament is being operated for the beam tests of the Japan Proton Accelerator Research Complex (J-PARC) linac. A peak H- current of 38 mA, which exceeds the requirement of the J-PARC first stage, is stably extracted from the ion source with a beam duty factor of 0.9% (360 μs × 25 Hz), principally by optimizing the surface condition and shape of the plasma electrode. The sufficiently small emittance of the beam was confirmed by the high transmission efficiency (around 90%) through the following 324 MHz, 3 MeV J-PARC radio frequency quadrupole linac (M. Ikegami et al., Proc. 2003 Part. Accel. Conf., 2003, p. 1509). The optimization process is presented; it confirms the hypothesis that H- ions in this source are produced predominantly by surface reactions on the Mo plasma electrode.

  13. Psychophysical evidence for auditory motion parallax.

    PubMed

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
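
    The geometry behind the distance cue is simple: for the same sideways head displacement, a nearer source sweeps through a larger change in azimuth than a farther one. The sketch below only illustrates this geometry with assumed numbers; it is not part of the study's psychophysical analysis:

        # Azimuth change of a source straight ahead after a sideways head step.
        # Distances and step size are illustrative assumptions.
        import math

        def azimuth_deg(source_x_m, source_dist_m, head_x_m):
            """Azimuth of a source at (source_x, source_dist) after the head has
            translated sideways by head_x (positive to the right)."""
            return math.degrees(math.atan2(source_x_m - head_x_m, source_dist_m))

        for dist in (0.5, 1.0, 2.0):           # metres
            shift = azimuth_deg(0.0, dist, 0.1) - azimuth_deg(0.0, dist, 0.0)
            print(f"source at {dist} m: azimuth change {shift:+.1f} deg for a 10 cm step")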

  14. Indistinguishable and efficient single photons from a quantum dot in a planar nanobeam waveguide

    NASA Astrophysics Data System (ADS)

    Kiršanskė, Gabija; Thyrrestrup, Henri; Daveau, Raphaël S.; Dreeßen, Chris L.; Pregnolato, Tommaso; Midolo, Leonardo; Tighineanu, Petru; Javadi, Alisa; Stobbe, Søren; Schott, Rüdiger; Ludwig, Arne; Wieck, Andreas D.; Park, Suk In; Song, Jin D.; Kuhlmann, Andreas V.; Söllner, Immo; Löbl, Matthias C.; Warburton, Richard J.; Lodahl, Peter

    2017-10-01

    We demonstrate a high-purity source of indistinguishable single photons using a quantum dot embedded in a nanophotonic waveguide. The source features a near-unity internal coupling efficiency, and the collected photons are efficiently coupled off chip by implementing a taper that adiabatically couples the photons to an optical fiber. By quasiresonant excitation of the quantum dot, we measure a single-photon purity larger than 99.4% and a photon indistinguishability of up to (94 ± 1)% by using p-shell excitation combined with spectral filtering to reduce photon jitter. A temperature-dependent study allows pinpointing the residual decoherence processes, notably the effect of phonon broadening. Strict resonant excitation is implemented as well, as another means of suppressing photon jitter, and the additional complexity of suppressing the excitation laser source is addressed. The paper opens a clear pathway towards the long-standing goal of a fully deterministic source of indistinguishable photons, integrated on a planar photonic chip.

  15. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    PubMed

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach to be used in clinical practice. In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy, using 10 CT lower-jaw data sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Differences between the assessment parameters were not statistically significant at the p<0.05 level, and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Completely functional, stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Because of its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a greater amount of data are areas of future work.
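
    The two agreement metrics named above have standard definitions that are easy to sketch for binary masks. This is not the study's evaluation pipeline, and the toy masks are illustrative assumptions:

        # Dice score and symmetric Hausdorff distance between two binary masks.
        # Standard definitions only; the toy masks are illustrative.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice_score(a: np.ndarray, b: np.ndarray) -> float:
            """Dice similarity coefficient between two boolean masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
            """Symmetric Hausdorff distance (in voxels) between the sets of
            foreground voxels of two masks."""
            pa, pb = np.argwhere(a), np.argwhere(b)
            return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

        # Toy example: two overlapping squares in a 2-D "slice".
        gt = np.zeros((64, 64), dtype=bool); gt[10:40, 10:40] = True
        seg = np.zeros_like(gt);             seg[12:42, 12:42] = True
        print(dice_score(seg, gt), hausdorff_distance(seg, gt))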

  16. Traceability of On-Machine Tool Measurement: A Review

    PubMed Central

    Gomez-Acedo, Eneko; Kortaberria, Gorka; Olarra, Aitor

    2017-01-01

    Nowadays, errors during the manufacturing process of high-value components are not acceptable in driving industries such as energy and transportation. Sectors such as aerospace, automotive, shipbuilding, nuclear power, large science facilities and wind power need complex and accurate components that demand close measurements and fast feedback into their manufacturing processes. New measuring technologies are already available in machine tools, including integrated touch probes and fast interface capabilities. They provide the possibility to measure the workpiece in-machine during or after its manufacture, maintaining the original setup of the workpiece and avoiding interruption of the manufacturing process to transport the workpiece to a measuring position. However, the traceability of the measurement process on a machine tool is not yet ensured, and measurement data are still not reliable enough for process control or product validation. The scientific objective is to determine the uncertainty of a machine tool measurement and, thereby, convert it into a machine-integrated traceable measuring process. For that purpose, an error budget should consider error sources such as the machine tool, the component under measurement and the interactions between the two. This paper reviews all of these uncertainty sources, focusing mainly on those related to the machine tool, whether in the process of geometric error assessment of the machine or in the technology employed to probe the measurand. PMID:28696358
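
    A minimal sketch of how such an error budget is usually combined, in the GUM manner: independent standard uncertainties add in quadrature, and an expanded uncertainty applies a coverage factor k (typically k = 2 for roughly 95% coverage). The individual contributions listed below are illustrative assumptions, not values from the review:

        # Combine an on-machine measurement error budget in quadrature.
        # The contributions are illustrative assumptions.
        import math

        budget_um = {
            "machine geometric errors": 4.0,   # micrometres, standard uncertainty
            "probing": 1.5,
            "thermal drift": 3.0,
            "workpiece form / clamping": 2.0,
        }

        u_combined = math.sqrt(sum(u**2 for u in budget_um.values()))
        U_expanded = 2.0 * u_combined          # coverage factor k = 2
        print(f"u_c = {u_combined:.2f} um, U (k=2) = {U_expanded:.2f} um")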

  17. The decisions regarding ADHD management (DRAMa) study: uncertainties and complexities in assessment, diagnosis and treatment, from the clinician's point of view.

    PubMed

    Kovshoff, Hanna; Williams, Sarah; Vrijens, May; Danckaerts, Marina; Thompson, Margaret; Yardley, Lucy; Hodgkins, Paul; Sonuga-Barke, Edmund J S

    2012-02-01

    Clinical decision making is influenced by a range of factors and constitutes an inherently complex task. Here we present results from the decisions regarding ADHD management (DRAMa) study, in which we undertook a thematic analysis of clinicians' experiences of and attitudes to the assessment, diagnosis and treatment of ADHD. Fifty prescribing child psychiatrists and paediatricians from Belgium and the UK took part in semi-structured interviews about their decisions regarding the assessment, diagnosis and treatment of ADHD. Interviews were transcribed and processed using thematic analysis and the principles of grounded theory. Clinicians described the assessment and diagnostic process as inherently complicated, requiring time and experience to piece together accounts of the child provided by multiple sources and gathered through varying information-gathering techniques. Treatment decisions were viewed as a shared process between families, children, and the clinician. Published guidelines were viewed as vague, and few clinicians spoke about the use of symptom thresholds or specific impairment criteria. Furthermore, systematic or operationalised criteria to assess treatment outcomes were rarely used. Decision making in ADHD is regarded as a complicated, time-consuming process that requires extensive use of clinical impression and involves a partnership with parents. Clinicians want to separate biological from environmental causal factors to understand the level of impairment and the subsequent need for a diagnosis of ADHD. Clinical guidelines would benefit from revisions that take into account the real-world complexities of clinical decision making for ADHD.

  18. Spectral convergence in tapping and physiological fluctuations: coupling and independence of 1/f noise in the central and autonomic nervous systems

    PubMed Central

    Rigoli, Lillian M.; Holman, Daniel; Spivey, Michael J.; Kello, Christopher T.

    2014-01-01

    When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those for heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as timing functions vs. autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior. PMID:25309389
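
    For concreteness, a 1/f-type spectrum for series such as inter-tap intervals or heartbeat intervals is typically quantified by estimating the power spectrum and fitting a line in log-log coordinates; the negative of the slope is the spectral exponent. The sketch below is a generic version of that procedure with assumed settings, not the study's analysis pipeline:

        # Estimate the 1/f spectral exponent of a time series via Welch's method
        # and a log-log linear fit; the synthetic series and settings are assumed.
        import numpy as np
        from scipy.signal import welch

        def spectral_exponent(x: np.ndarray, fs: float = 1.0) -> float:
            """Return alpha such that the PSD of x scales roughly as 1/f**alpha."""
            f, pxx = welch(x, fs=fs, nperseg=min(len(x) // 4, 1024))
            keep = f > 0
            slope, _ = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)
            return -slope

        # Quick check on white noise (alpha ~ 0) and a random walk (alpha ~ 2).
        rng = np.random.default_rng(2)
        white = rng.standard_normal(4096)
        walk = np.cumsum(white)
        print(spectral_exponent(white), spectral_exponent(walk))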

  19. The emission characteristics and the related malodor intensities of gaseous reduced sulfur compounds (RSC) in a large industrial complex

    NASA Astrophysics Data System (ADS)

    Kim, Ki-Hyun; Jeon, Eui-Chan; Choi, Ye-Jin; Koo, Youn-Seo

    In this study, the concentrations of major reduced sulfur compounds (RSC: H2S, CH3SH, DMS, CS2 and DMDS) were determined from various emission sources located within the Ban-Wall (BW)/Si-Hwa (SH) industrial complex in Ansan city, Korea. The measurement data were obtained from a total of 202 individual points at 77 individual companies during 2004-2005. The highest RSC concentration levels were dominated by H2S (300 ppb (mean) and 0.86 ppb (median)), followed by CS2, while the results for CH3SH, DMS, and DMDS were notably lower, with mean concentration levels of a few ppb. These data were evaluated further after being grouped into two different classification schemes: 9 industry sectors and 9 processing-unit types. The strongest emissions of RSC, when evaluated among different industry sectors, were generally found from such industry types as leather, food and paper/pulp, as well as waste/sewage-related ones. In contrast, when these RSC data were compared across different processing units, the highest values were seen most frequently from such units as junction boxes, aeration tanks, and settling tanks. The assessment of the data in terms of relative contribution to malodor intensity showed that H2S and CH3SH are more important than the others. The overall results of the present study suggest that information combining RSC speciation and types of anthropogenic activities may be used to distinguish the patterns of odorous pollution in areas affected by strong source processes.

  20. A Survey of Current Russian RTG Capabilities

    NASA Technical Reports Server (NTRS)

    Chmielewski, A.; Borshchevsky, A.; Lange, R.; Cook, B.

    1994-01-01

    Supplying radioisotope thermoelectric generators (RTGs) to American space missions has become very complex. The process is marred by many obstacles: high cost, a lack of new developments, difficult launch approval and NEPA compliance. At the same time, there are many ambitious space missions for which an RTG would indisputably be the lightest, smallest and most robust power source. An American delegation investigated the status of RTG production in Russia to decide whether our product line could be supplemented by the Russian designs.
