Sample records for winkler process

  1. Pitfall Traps and Mini-Winkler Extractor as Complementary Methods to Sample Soil Coleoptera.

    PubMed

    Carneiro, A C; Batistella, D A; Battirola, L D; Marques, M I

    2016-02-01

We compared abundance, species richness, and capture efficiency of pitfall traps and mini-Winkler extractors to examine their use as complementary methods for sampling soil Coleoptera during the dry (2010) and high-water (2011) seasons in three areas, including inundated and non-inundated regions, in the Pantanal of Poconé, Mato Grosso, Brazil. Treatments were paired in two 10 × 10 m plots, one inundated and one non-inundated, replicated three times in each area, for a total of 18 plots. In each plot, we used nine pitfall traps and collected 2 m² of leaf litter and surface soil for the mini-Winkler extractors. We collected a total of 4260 adult beetles comprising 36 families, 113 genera, and 505 species. Most were caught in pitfalls (69%) and the remainder in the mini-Winkler extractors (31%). Each method provided distinct information about the beetle community: 252 species were captured only in pitfall traps, 147 only in the mini-Winkler extractors, and 106 were shared between the two methods. Pitfall traps and mini-Winkler extractors thus contribute to sampling the soil beetle community in different ways and should be considered complementary for a more thorough assessment of community diversity.

  2. Wilhelm Winkler (1842-1910) - a Thuringian private astronomer and maecenas

    NASA Astrophysics Data System (ADS)

    Weise, Wilfried; Dorschner, Johann; Schielicke, Reinhardt E.

Wilhelm Winkler was born in 1842 in Eisenberg, Thuringia, as the son of a lawyer. After attending the trading high school in Gera, Winkler worked as a merchant in Eisenberg, following in the footsteps of his grandfather. In 1875 he gave up this trade and devoted his time entirely to astronomy. Advised by Carl Bruhns, director of the Leipzig University Observatory, he established an observatory on his estate in Gohlis near Leipzig. From 1878 Winkler regularly observed sunspots; his other observational interests included comets, occultations of stars by the Moon, and Jupiter's satellites. In 1887 he moved to Jena, where he contacted Ernst Abbe, who was also head of the Jena observatory. For some years, Winkler's instruments were used in the new observatory erected by Abbe, which replaced the old Ducal Observatory of the Goethe era. Winkler donated the precision pendulum clock and some other instruments to this observatory, and he offered his observational assistance whenever it was wanted. In 1893 Winkler built his own observatory in Jena and published annual reports on his work in the Vierteljahrsschrift of the Astronomische Gesellschaft. His observational results mainly appeared in the journal Astronomische Nachrichten. In 1902 he was awarded an honorary doctorate by the Philosophical Faculty of Jena University. By that time, however, his health had begun to fail: he lost his left eye to a sarcoma, and he finally died at the age of 68. In his will, he left 100,000 Mark in the form of securities to Jena University (the Winkler Foundation). The University Observatory received his 4.5 m dome (the transport of which from his residence to the final site he also paid for), several instruments, and many books. In 1936 Winkler's dome was closed by the University. The observatory was transferred from the University to the Zeiss works in exchange for the observatory in the Jena Forst. Zeiss sponsored the reconstruction of the old dome and its equipment with a telescope and thus laid the base for the modern Urania Popular Observatory. Please note: the printed version contains an error: Reinhard E. Schielicke was unfortunately not indicated as a co-author of this paper.

  3. A Comparison of the Pitfall Trap, Winkler Extractor and Berlese Funnel for Sampling Ground-Dwelling Arthropods in Tropical Montane Cloud Forests

    PubMed Central

    Sabu, Thomas K.; Shiju, Raj T.; Vinod, KV.; Nithya, S.

    2011-01-01

Little is known about ground-dwelling arthropod diversity in tropical montane cloud forests (TMCF). The unique habitat conditions in TMCFs, with continuously wet substrates and a waterlogged forest floor, together with the innate biases of the pitfall trap, Berlese funnel, and Winkler extractor, make it difficult to choose the most appropriate method for sampling ground-dwelling arthropods in TMCFs. Among the three methods, the Winkler extractor was the most efficient for quantitative data, and pitfall trapping for qualitative data, for most groups. Including the flotation method as a complement to the Winkler extractor would enable a comprehensive quantitative survey of ground-dwelling arthropods. Pitfall trapping is essential for both quantitative and qualitative sampling of Diplopoda, Opiliones, Orthoptera, and Diptera. The Winkler extractor was the best quantitative method for Psocoptera, Araneae, Isopoda, and Formicidae, and the Berlese funnel was best for Collembola and Chilopoda. For larval forms of different insect orders and for the Acari, all three methods were equally effective. PMID:21529148

  4. A multisyringe flow injection Winkler-based spectrophotometric analyzer for in-line monitoring of dissolved oxygen in seawater.

    PubMed

    Horstkotte, Burkhard; Alonso, Juan Carlos; Miró, Manuel; Cerdà, Víctor

    2010-01-15

An integrated analyzer based on the multisyringe flow injection analysis approach is proposed for the automated determination of dissolved oxygen in seawater. The entire Winkler method, including precipitation of manganese(II) hydroxide, fixation of dissolved oxygen, dissolution of the oxidized manganese hydroxide precipitate, and generation of iodine and tri-iodide ion, is effected in-line within the flow network. Spectrophotometric quantification of iodine and tri-iodide at the isosbestic wavelength of 466 nm renders enhanced method reliability. The calibration function is linear up to 19 mg L⁻¹ dissolved oxygen, and an injection frequency of 17 per hour is achieved. The multisyringe system features highly stable signals, with a repeatability of 2.2% RSD, making it suitable for continuous determination of dissolved oxygen in seawater. Compared with the manual starch-end-point titrimetric Winkler method and earlier reported automated systems, concentrations and consumption of reagents and sample are reduced up to a hundredfold. The versatility of the multisyringe assembly was exploited to implement an ancillary automatic batch-wise Winkler titrator, using a single syringe of the module, for accurate titration of the released iodine/tri-iodide with thiosulfate.
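The linear calibration described above (absorbance at 466 nm vs. dissolved oxygen, linear up to 19 mg L⁻¹) can be sketched in a few lines. The standards and coefficients below are hypothetical illustrations, not values from the paper; in practice the calibration standards would themselves be assigned by Winkler titration.

```python
import numpy as np

# Hypothetical calibration standards for a Winkler-based spectrophotometric
# analyzer: dissolved oxygen (mg/L) vs. absorbance at 466 nm.
# Values are illustrative only, not from the cited study.
do_standards = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 19.0])     # mg/L
absorbance   = np.array([0.002, 0.161, 0.320, 0.479, 0.638, 0.757])

# Least-squares linear fit: A = slope * DO + intercept (Beer-Lambert regime)
slope, intercept = np.polyfit(do_standards, absorbance, 1)

def dissolved_oxygen(a466):
    """Invert the calibration line to estimate DO (mg/L) from absorbance."""
    return (a466 - intercept) / slope

print(round(dissolved_oxygen(0.400), 2))  # DO estimate for a sample absorbance
```

A real analyzer would also validate that sample absorbances fall within the calibrated (linear) range before inverting the fit.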

  5. The correspondence between Winkler and Monakow during World War I.

    PubMed

    Koehler, Peter J; Jagella, Caroline

    2015-01-01

    The correspondence (1907-1930) between two leading European neurologists, Cornelis Winkler (1855-1941) and Constantin von Monakow (1853-1930), has been preserved in Amsterdam and Zurich. For this paper, letters exchanged during World War I were studied. Professional as well as personal issues were discussed. An international neurology meeting in Berne in September 1914 had to be cancelled due to the war. They hoped that (neuro)scientists would remain politically neutral, continue scientific cooperation, and even be able to influence the course of the war. Winkler and Monakow tried to continue their work on the International Brain Atlas. Although living in neutral countries (The Netherlands and Switzerland), they observed that their practice and scientific work suffered from war conditions. While Winkler continued his activities as a neurologist, Monakow, affected emotionally, experienced a change in scientific interest toward psychoneurology. He used his diaschisis concept, originally an explanation for transient phenomena in stroke, as a metaphor for the social and cultural effects of the war. He directly related cultural development and brain science, bringing in his own emotions, which resulted in the first of several publications on the relations between biology, brain science, and culture.

  6. Harriet Tubman: A Servant Leader?

    DTIC Science & Technology

    2012-04-24

unknown. What has been… 80 Winkler, 156. 81 Winkler, 156. 82 Kahlil Chism, “Harriet Tubman: Spy, Veteran, and Widow,” Magazine of History 19, no. …Home for the Aged. 98 94 Chism, 48. She died at age ninety-two on March 10, 1913, and was buried… and Emancipation: Black Enfranchisement in 1863 Louisiana.” Magazine of History 21, no. 1 (January 2007): 45-50. http://serach.proquest.com/. Chism

  7. Summary of data on the age of the Orca Group, Alaska: A section in The United States Geological Survey in Alaska: Accomplishments during 1984

    USGS Publications Warehouse

    Plafker, George; Keller, Gerta; Nelson, Steven W.; Dumoulin, Julie A.; Miller, Marti L.

    1985-01-01

    The Orca Group is a widespread, thick, complexly deformed accretionary sequence of flysch and tholeiitic basalt in the Prince William Sound area (Winkler, 1976; Winkler and Plafker, 1981) (fig. 49). Despite a number of extensive field studies of the Orca Group, reliable data on the age of the unit have been elusive. On the basis of sparse paleontologic and radiometric data, the sequence was regarded as Paleocene and early Eocene(?) age (Winkler and Plafker, 1981). New paleontologic data from fossil localities shown in figure 49 suggest that some strata assigned to the Orca Group are of middle Eocene age and possibly as young as late Eocene or Oligocene. However, data suggesting an age younger than about 50 Ma appear to be incompatible with radiometrically determined ages for plutons that intrude the Orca Group.

  8. Gas processing handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-04-01

Brief details are given of processes including: BGC-Lurgi slagging gasification, COGAS, Exxon catalytic coal gasification, FW-Stoic 2-stage, GI two stage, HYGAS, Koppers-Totzek, Lurgi pressure gasification, Saarberg-Otto, Shell, Texaco, U-Gas, W-D.IGI, Wellman-Galusha, Westinghouse, and Winkler coal gasification processes; the Rectisol process; the Catacarb and the Benfield processes for removing CO2, H2S and COS from gases produced by the partial oxidation of coal; the Selectamine DD, Selexol solvent, and Sulfinol gas cleaning processes; the sulphur-tolerant shift (SSK) process; and the Super-meth process for the production of high-Btu gas from synthesis gas.

  9. Mineral resource of the month: germanium

    USGS Publications Warehouse

    Guberman, David

    2010-01-01

The article provides information on germanium, an element with electrical properties between those of a metal and an insulator. Applications of germanium include its use as a component of the glass in fiber-optic cable, in infrared optics devices, and as a semiconductor and substrate in electronic and solar applications. Germanium was first isolated by the German chemist Clemens Winkler in 1886 and was named after Winkler's native country. In 2008, the leading sources of primary germanium, recovered from coal or zinc, included Canada, China, and Russia.

  10. Efficacy of Pitfall Trapping, Winkler and Berlese Extraction Methods for Measuring Ground-Dwelling Arthropods in Moist-Deciduous Forests in the Western Ghats

    PubMed Central

    Sabu, Thomas K.; Shiju, Raj T.

    2010-01-01

The present study provides data for deciding on the most appropriate method for sampling ground-dwelling arthropods in a moist-deciduous forest in the Western Ghats in South India. The abundance of ground-dwelling arthropods was compared among large numbers of samples obtained by pitfall trapping and by Berlese and Winkler extraction. The highest abundance and frequency of most of the represented taxa indicated pitfall trapping as the ideal method for sampling ground-dwelling arthropods. However, given its likely bias towards surface-active taxa, pitfall-trapping data are inappropriate for quantitative studies; Berlese extraction is the better method for quantitative measurements, whereas pitfall trapping is appropriate for qualitative measurements. A comparison of the Berlese and Winkler extraction data shows that, in a quantitative multigroup approach, Winkler extraction was inferior to Berlese extraction because the total number of arthropods caught was the lowest, and many of the taxa caught from an identical sample via Berlese extraction were not caught at all. A significantly greater frequency and higher abundance of arthropods belonging to Orthoptera, Blattaria, and Diptera occurred in pitfall-trapped samples, and of Psocoptera and Acariformes in Berlese-extracted samples, than were obtained with the other methods, indicating that the two methods are useful and complementary, eliminating the chance of under-representation of taxa in quantitative studies. PMID:20673122

  11. A Comparative Analysis of Unit Cohesion in Vietnam

    DTIC Science & Technology

    2014-06-13

the political process, and result in a communist government; this became known as the Domino Theory. The United States replaced the French and… a theory of linkage; leveraging the political ties of Communist countries, namely the USSR and China, to influence North Vietnam, and deterrence; a… and Henderson provide some of the more popular theories about cohesion. Winkler points out that the relationship between stability, cohesion and

  12. Justification and refinement of Winkler–Fuss hypothesis

    NASA Astrophysics Data System (ADS)

    Kaplunov, J.; Prikazchikov, D.; Sultanova, L.

    2018-06-01

    Two-parametric asymptotic analysis of the equilibrium of an elastic half-space coated by a thin soft layer is developed. The initial scaling is motivated by the exact solution of the plane problem for a vertical harmonic load. It is established that the Winkler-Fuss hypothesis is valid only for a sufficiently high contrast in the stiffnesses of the layer and the half-space. As an alternative, a uniformly valid non-local approximation is proposed. Higher-order corrections to the Winkler-Fuss formulation, such as the Pasternak model, are also studied.
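In generic notation (not the paper's own symbols), the Winkler-Fuss hypothesis models the thin soft layer as a bed of independent springs, so the contact pressure is locally proportional to the surface deflection; the Pasternak model mentioned above adds a shear-interaction correction:

```latex
% Winkler-Fuss foundation (generic symbols: p contact pressure, w surface
% deflection, k foundation stiffness): a purely local spring-like response.
p(x) = k\, w(x)

% Pasternak-type higher-order correction, with a shear modulus G coupling
% neighboring springs:
p(x) = k\, w(x) - G\, \nabla^{2} w(x)
```

The paper's finding can be read against these relations: the purely local first form is justified only for sufficiently high stiffness contrast between layer and half-space, and otherwise a non-local (gradient-corrected) response is needed.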

  13. Homogenization of Winkler-Steklov spectral conditions in three-dimensional linear elasticity

    NASA Astrophysics Data System (ADS)

    Gómez, D.; Nazarov, S. A.; Pérez, M. E.

    2018-04-01

We consider a homogenization Winkler-Steklov spectral problem that consists of the elasticity equations for a three-dimensional homogeneous anisotropic elastic body which has a plane part of the surface subject to alternating boundary conditions on small regions periodically placed along the plane. These conditions are of the Dirichlet type and of the Winkler-Steklov type, the latter containing the spectral parameter. The rest of the boundary of the body is fixed, and the period and size of the regions where the spectral parameter arises are of order ε. For fixed ε, the problem has a discrete spectrum, and we address the asymptotic behavior of the eigenvalues {β_k^ε}_{k=1}^∞ as ε → 0. We show that β_k^ε = O(ε^{-1}) for each fixed k, and we observe a common limit point for all the rescaled eigenvalues εβ_k^ε, while we make it evident that, although the periodicity of the structure affects only the boundary conditions, a band-gap structure of the spectrum is inherited asymptotically. We also provide the asymptotic behavior for certain "groups" of eigenmodes.

  14. Dunk Tank Hits the Mark at Take Your Child To Work Day | Poster

    Cancer.gov

    By Carolynne Keenan, Contributing Writer Robin Winkler-Pickett has known Jim Cherry, Ph.D., scientific program director, and Craig Reynolds, Ph.D., director, Office of Scientific Operations, both NCI at Frederick, for many years. “We’ve been friends for a long time.” So when she heard about the chance to dunk each of them at Take Your Child to Work Day (TYCTWD) on June 25, Winkler-Pickett, a research biologist in the Laboratory of Experimental Immunology, NCI Center for Cancer Research, knew she had to make time to participate.

  15. Definitive Determinations of the Solubilities of Oxygen and Hydrogen, and Verification of the Applicability of the Setschenow Relationship to Sea Water.

    DTIC Science & Technology

    1980-11-18

…International Oceanographic Tables for the solubility of oxygen in… and of settling the conflict over whether to use the symbol α for the Bunsen… Wasser. Z. Phys. 42: 253-264. …dilute aqueous solutions. 1. Oxygen. J. Solution… WEISS, R. F. 1970. The solubility of nitrogen, oxy-… the Winkler dissolved oxygen method. Limnol. Oceanogr. 10: 141-143. WINKLER, L. W. 1889. Die Löslichkeit des Sauerstoffs in Wasser. Ber…

  16. A transect of metamorphic rocks along the Copper River, Cordova and Valdez Quadrangles, Alaska: A section in The United States Geological Survey in Alaska: Accomplishments during 1982

    USGS Publications Warehouse

    Miller, Marti L.; Dumoulin, Julie A.; Nelson, S.W.

    1984-01-01

    The lower Tertiary Orca Group is juxtaposed against the Upper Cretaceous Valdez Group along the Contact fault system (Winkler and Plafker, 1974, 198; Plafker and others, 1977)(fig. 33). In both groups, turbidites are the dominant rock type, with lesser mafic volcanic rocks (table 10). The Valdez Group, on the north, has traditionally been considered to be of higher metamorphic grade than the Orca Group (Moffit, 1954; Tysdal and Case, 1979; Winkler and Plafker, 198; Winkler and others, 1981). In 1982, we made a transect across the regional strike of the rocks and the contact between the two groups. The transect area follows the Copper River for 85 km from the Cordova quadrangle north into the Valdez quadrangle and extends for about 25 km on either side of the river (fig. 33). We planned, by systematic sampling of the area, to examine the metamorphic differences between the Orca and Valdez Groups. We found, however, that a strong thermal metamorphic event has overprinted and obscured regional metamorphic relations. We believe intrusion of Tertiary granite (fig. 33) to be responsible for this metamorphism. (Figures 33 and 34 and tables follow this article.)

  17. Overview: The Impact of Microbial Genomics on Food Safety

    NASA Astrophysics Data System (ADS)

    Milillo, Sara R.; Wiedmann, Martin; Hoelzer, Karin

The first use of the term "genome" is attributed to Hans Winkler in his 1920 publication Verbreitung und Ursache der Parthenogenesis im Pflanzen- und Tierreiche (Winkler, 1920). However, it was not until 1986 that the study of genomic concepts coalesced with the creation of a new journal by the same name (McKusick, 1997). The study of genomics was initially defined as the application of "informatic tools" to study features of a sequenced genome (Strauss and Falkow, 1997). Today the field of genomics is typically considered to encompass efforts to determine the DNA sequence of an organism as well as the expression of genetic information using high-throughput, genome-wide methods, including transcriptomic, proteomic, and metabolomic analyses.

  18. Serum lipid levels were related to socio-demographic characteristics in a German population-based child cohort.

    PubMed

    Dathan-Stumpf, Anne; Vogel, Mandy; Rieger, Kristin; Thiery, Joachim; Hiemisch, Andreas; Kiess, Wieland

    2016-08-01

Socio-demographic factors affect the development and lives of children and adolescents. We examined links between serum lipids and apolipoproteins and socio-demographic factors in the Leipzig Research Centre for Civilization Diseases Child (LIFE Child) study. The Winkler index and the Family Affluence Scale were used to characterize the social status of 938 boys and 860 girls aged from birth to 19 years. We then used univariate and multivariate regression analyses to examine the socio-demographic impact on total cholesterol, low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, triglycerides, and apolipoproteins A1 (ApoA1) and B (ApoB). No significant influence of the Winkler index or the Family Affluence Scale was observed on the serum concentrations of total cholesterol or LDL cholesterol. However, and most importantly, children and adolescents with high social status and high family affluence showed significantly higher HDL cholesterol and ApoA1 levels than those with lower scores. A higher Winkler index was associated with significantly lower values for triglycerides and ApoB. Adolescents with higher family wealth and social status thus showed a lower cardiovascular risk profile, as measured by the concentrations of HDL cholesterol and triglycerides as well as ApoA1 and ApoB.

  19. Publications | Buildings | NREL

    Science.gov Websites

Documents: Efficiency of Mini-Split Heat Pumps (Fact Sheet). NREL Highlights, Research & Development. Christensen, D.; Fang, X.; Tomerlin, J.; Winkler, J.; Hancock, E. (2011). Field Monitoring Protocol: Mini

  20. Jon Winkler | NREL

    Science.gov Websites

…residential HVAC systems, residential dehumidification control, energy modeling tools, and BEopt development software used by HVAC manufacturers. Education: Ph.D. Mechanical Engineering, University of Maryland; B.S

  1. Low NOx heavy fuel combustor concept program

    NASA Technical Reports Server (NTRS)

    White, D. J.; Kubasco, A. J.

    1982-01-01

Three simulated coal gas fuels based on hydrogen and carbon monoxide were tested in an experimental evaluation of a rich-lean can combustor: a simulated Winkler gas, a Lurgi gas, and a Blue Water gas. All three were simulated by mixing together the necessary pure component species to levels typical of fuel gases produced from coal. The Lurgi gas was also evaluated with ammonia addition. Fuel burning in a rich-lean mode was emphasized; only the Blue Water gas, however, could be operated in this fashion. This showed that the expected NOx signature form could be obtained, although the absolute values of NOx were above the 75 ppm goal for most operating conditions. Lean combustion produced very low NOx, well below 75 ppm, with the Winkler and Lurgi gases. In addition, these low levels were not significantly affected by changes in operating conditions.

  2. Geology and ground-water resources of Winkler County, Texas

    USGS Publications Warehouse

    Garza, Sergio; Wesselman, John B.

    1963-01-01

The chemical quality of the water in the principal aquifers is generally acceptable for industry and for public supply. About two-thirds of the samples collected from fresh-water wells had a dissolved-solids content of less than 1,000 ppm (parts per million); however, some samples in a few areas were hard and were high in fluoride and silica. Samples from wells in polluted areas contained dissolved solids ranging from about 1,400 to 71,100 ppm. Two comprehensive analyses of water samples from the Rustler formation showed dissolved-solids contents of 18,400 and 157,000 ppm. In most of the water produced with the oil in the Hendrick oil field, the content of dissolved solids ranged from about 4,000 to about 10,000 ppm. The water produced with the oil in the rest of the oil fields in Winkler County was mainly brine.

  3. Rarity and diversity in forest ant assemblages of Great Smoky Mountains National Park

    USGS Publications Warehouse

    Lessard, J.-P.; Dunn, R.R.; Parker, C.R.; Sanders, N.J.

    2007-01-01

We report on a systematic survey of the ant fauna occurring in hardwood forests in the Great Smoky Mountains National Park. At 22 mixed-hardwood sites, we collected leaf-litter ant species using Winkler samplers. At eight of those sites, we also collected ants using pitfall and Malaise traps. In total, we collected 53 ant species. As shown in other studies, ant species richness tended to decline with increasing elevation. Leaf-litter ant assemblages were also highly nested. Several common species were both locally abundant and had broad distributions, while many other species were rarely detected. Winkler samplers, pitfall traps, and Malaise traps yielded samples that differed in composition, but not richness, from one another. Taken together, our work begins to illuminate the factors that govern the diversity, distribution, abundance, and perhaps rarity of ants of forested ecosystems in the Great Smoky Mountains National Park.

  4. On the early history of field emission including attempts of tunneling spectroscopy

    NASA Astrophysics Data System (ADS)

    Kleint, C.

    1993-04-01

Field emission is certainly one of the oldest surface science techniques, its roots reaching back about 250 years to the time of enlightenment. An account of very early studies and of later work is given, but mostly restricted to Leipzig and to pre-Müllerian investigations. Studies of field emission from metal tips were carried out in the 18th century by Johann Heinrich Winkler, who used vacuum pumps built by Jacob Leupold, a famous Leipzig mechanic. A short account of Winkler's career is given and his field emission experiments are illustrated. Field emission was investigated again in Leipzig much later by Julius Edgar Lilienfeld, who worked on the improvement of X-ray tubes. He coined the terms ‘autoelektronische Entladung’ and ‘Äona-Effekt’ in 1922, and developed degassing procedures which are very similar to modern ultra-high vacuum processing. A pre-quantum mechanical explanation of the field emission phenomena was undertaken by Walter Schottky. Cunradi (1926) tried to measure temperature changes during field emission. Franz Rother, in a thesis (1914) suggested by Otto Wiener, dealt with the distance dependence of currents in vacuum between electrodes down to 20 nm. His habilitation in 1926 was an extension of his early work, but now with field emission tips as a cathode. We might look at his measurements of the field emission characteristics as a function of distance as a precursor to modern tunneling spectroscopy as well.

  5. An Eye-Tracking Study of Exploitations of Spatial Constraints in Diagrammatic Reasoning

    ERIC Educational Resources Information Center

    Shimojima, Atsushi; Katagiri, Yasuhiro

    2013-01-01

Semantic studies on diagrammatic notations (Barwise & Etchemendy; Shimojima; Stenning & Lemon) have revealed that the "non-deductive," "emergent," or "perceptual" effects of diagrams (Chandrasekaran, Kurup, Banerjee, Josephson, & Winkler; Kulpa; Larkin & Simon; Lindsay) are all rooted in the…

  6. Effectiveness of Winkler Litter Extraction and Pitfall Traps in Sampling Ant Communities and Functional Groups in a Temperate Forest.

    PubMed

    Mahon, Michael B; Campbell, Kaitlin U; Crist, Thomas O

    2017-06-01

Selection of proper sampling methods for measuring a community of interest is essential whether the study goals are to conduct a species inventory, environmental monitoring, or a manipulative experiment. Insect diversity studies often employ multiple collection methods at the expense of researcher time and funding. Ants (Formicidae) are widely used in environmental monitoring owing to their sensitivity to ecosystem changes. When sampling ant communities, two passive techniques are recommended in combination: pitfall traps and Winkler litter extraction. These recommendations are often based on studies from highly diverse tropical regions or when a species inventory is the goal. Studies in temperate regions often focus on measuring consistent community response along gradients of disturbance or among management regimes; therefore, multiple sampling methods may be unnecessary. We compared the effectiveness of pitfalls and Winkler litter extraction in an eastern temperate forest for measuring ant species richness, composition, and occurrence of ant functional groups in response to experimental manipulations of two key forest ecosystem drivers, white-tailed deer and an invasive shrub (Amur honeysuckle). We found no significant effect of sampling method on the outcome of the ecological experiment; however, we found differences between the two sampling methods in the resulting ant species richness and functional group occurrence. Litter samples approximated the overall combined species richness and composition, but pitfalls were better at sampling large-bodied (Camponotus) species. We conclude that employing both methods is essential only for species inventories or monitoring ants in the Cold-climate Specialists functional group.

  7. Lignite-to-methanol: an engineering evaluation of Winkler gasification and ICI methanol synthesis route. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goyen, S.; Baily, E.; Mawer, J.

    1980-10-01

The objective of the work reported herein was to develop a preliminary conceptual design, capital requirements, and product cost for a lignite-to-methanol plant incorporating Winkler gasification technology and ICI methanol synthesis. The lignite-to-methanol complex described herein is designed to produce 15,000 TPD of fuel-grade methanol. The complex is designed to be self-sufficient with respect to all utility services, offsites, and other support facilities, including power generation. Following is a summary of the results of the study: (1) tons per day (TPD) of lignite feedstock and fuel (as received) was 47,770; (2) TPD of fuel-grade methanol product was 15,000; (3) thermal efficiency, % (HHV), was 47.4; (4) plant investment expressed in terms of the first quarter of 1980 was $1545 million; and (5) applying the economic premises used by EPRI for fuel conversion plant utility-type financing, the calculated levelized and first-year product costs are included.

  8. Are Wikis Worth the Time?

    ERIC Educational Resources Information Center

    Shareski, Dean; Winkler, Carol Ann K.

    2006-01-01

    This article presents the opposing opinions of Dean Shareski and Carol Ann K. Winkler regarding the collaborative, anonymous collections of online information, such as Wikipedia. Dean Shareski, an educational consultant with the Moose Jaw Public School Division (Moose Jaw, Saskatchewan, Canada), contends that Wikipedia might be the best example of…

  9. A Simplified and Inexpensive Method for Measuring Dissolved Oxygen in Water.

    ERIC Educational Resources Information Center

    Austin, John

    1983-01-01

    A modified Winkler method for determining dissolved oxygen in water is described. The method does not require use of a burette or starch indicator, is simple and inexpensive and can be used in the field or laboratory. Reagents/apparatus needed and specific procedures are included. (JN)

  10. Workshop on Dynamic Fracture Held at Pasadena, California on 17-18 February 1983.

    DTIC Science & Technology

    1983-10-01

class of materials seems to be the basis for deliberate attempts to devise (small) test geometries that lead to quasi-static stress fields under rapid loading… 22. Kalthoff, J.F., Beinert, J., and Winkler, S., "Einfluß dynamischer Effekte auf die Bestimmung von Rißarrestzähigkeiten und auf die

  11. 75 FR 81846 - Expansion of the Santa Maria Valley Viticultural Area

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-29

    ... Authority Section 105(e) of the Federal Alcohol Administration Act (FAA Act), 27 U.S.C. 205(e), authorizes... Valley as a ``natural funnel-shaped'' valley.) Temperatures are consistent throughout the gentle west-to...,'' Albert J. Winkler, University of California Press, 1975, pages 61-64). Soils: According to the petition...

  12. Forces for Positive Change: Preparing Leaders for the 21st Century in an Undergraduate Honors Program

    ERIC Educational Resources Information Center

    Polk, Denise M.

    2014-01-01

    Leadership education is offered in myriad ways at many institutions of higher education (Borgese, Deutsch, & Winkler, 2004). This article highlights the West Chester University Honors College (WCUHC), a highly selective, four-year program for undergraduate students. The WCUHC instituted a liberal education, interdisciplinary approach to…

  13. Kafka: A Collection of Critical Essays. Twentieth Century Views Series.

    ERIC Educational Resources Information Center

    Gray, Ronald, Ed.

    One of a series of works aimed at presenting contemporary critical opinion on major authors, this collection includes essays by Ronald Gray, Edwin Muir, Friedrich Beissner, R. O. C. Winkler, Johannes Pfeiffer, Caroline Gordon, Idris Parry, Edmund Wilson, Erich Heller, Austin Warren, Eliseo Vivas, Albert Camus, Martin Buber, and H. S. Reiss--all…

  14. Biochemical Oxygen Demand and Dissolved Oxygen. Training Module 5.105.2.77.

    ERIC Educational Resources Information Center

    Kirkwood Community Coll., Cedar Rapids, IA.

    This document is an instructional module package prepared in objective form for use by an instructor familiar with the azide modification of the Winkler dissolved oxygen test and the electronic dissolved oxygen meter test procedures for determining the dissolved oxygen and the biochemical oxygen demand of a wastewater sample. Included are…

  15. Investigation of contact pressure and influence function model for soft wheel polishing.

    PubMed

    Rao, Zhimin; Guo, Bing; Zhao, Qingliang

    2015-09-20

    The tool influence function (TIF) is critical for calculating the dwell-time map to improve form accuracy. We present the TIF for the process of computer-controlled polishing with a soft polishing wheel. In this paper, the static TIF was developed based on the Preston equation, and the pressure distribution was verified against measured removal-spot section profiles. According to the experimental measurements, the pressure distribution simulated by Hertz contact theory was much larger than the real contact pressure, whereas the pressure distribution modeled by the Winkler elastic foundation for a soft polishing wheel matched the real contact pressure. A series of experiments was conducted to obtain the statistical properties of the removal spots, validating the relationships between material removal and processing time, contact pressure, and relative velocity, and to calculate the fitted parameters needed to establish the TIF. The developed TIF predicted the removal character for the studied soft wheel polishing.
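
The removal model described above can be sketched in simplified form: Winkler-foundation contact (pressure proportional to the local indentation of the soft wheel) feeding the Preston equation. All parameter values below are illustrative assumptions, not the paper's fitted values:

```python
import math

def winkler_pressure(x, depth, wheel_radius, k_foundation):
    """Contact pressure under a soft wheel, Winkler-foundation approximation:
    pressure ~ k * local indentation; the indentation of a cylinder of
    radius R pressed to `depth` is approximated as depth - x^2 / (2R)."""
    indentation = depth - x**2 / (2 * wheel_radius)
    return k_foundation * max(indentation, 0.0)

def preston_removal(x, depth, wheel_radius, k_foundation, k_preston, velocity, dwell):
    """Preston equation: removal depth = Kp * p(x) * v * t."""
    p = winkler_pressure(x, depth, wheel_radius, k_foundation)
    return k_preston * p * velocity * dwell

# Illustrative numbers (assumed, not from the paper):
R, d, k = 0.02, 1e-5, 5e9          # wheel radius (m), indentation (m), foundation modulus (Pa/m)
half_width = math.sqrt(2 * R * d)  # contact half-width where pressure falls to zero
profile = [preston_removal(x, d, R, k, 1e-13, 2.0, 10.0)
           for x in (0.0, half_width / 2, half_width)]
# removal is maximal at the spot center and vanishes at the contact edge
```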

  16. Defining Strong State Accountability Systems: How Can Better Standards Gain Greater Traction? A First Look

    ERIC Educational Resources Information Center

    Reed, Eileen; Scull, Janie; Slicker, Gerilyn; Winkler, Amber M.

    2012-01-01

    Rigorous standards and aligned assessments are vital tools for boosting education outcomes but they have little traction without strong accountability systems that attach consequences to performance. In this pilot study, Eileen Reed, Janie Scull, Gerilyn Slicker, and Amber Winkler lay out the essential features of such accountability systems,…

  17. Estimation of the Friction Coefficient of a Nanostructured Composite Coating

    NASA Astrophysics Data System (ADS)

    Shil'ko, S. V.; Chernous, D. A.; Ryabchenko, T. V.; Hat'ko, V. V.

    2017-11-01

    The frictional-mechanical properties of a thin polymer-ceramic coating obtained by gas-phase impregnation of nanoporous anodic alumina with a fluoropolymer (octafluorocyclobutane) have been investigated. The coefficient of sliding friction of the coating is predicted based on an analysis of contact deformation within the framework of the Winkler elastic foundation hypothesis and a three-phase micromechanical model. It is shown that an acceptable prediction accuracy can be obtained considering the uniaxial strain state of the coating. It was found that, on impregnation by the method of plasmachemical treatment, the relative depth of penetration of the polymer increased almost in proportion to the processing time. The rate and maximum possible depth of penetration of the polymer into nanoscale pores grew with increasing porosity of the alumina substrate.

  18. Dissolved oxygen measurements in aquatic environments: the effects of changing temperature and pressure on three sensor technologies.

    PubMed

    Markfort, Corey D; Hondzo, Miki

    2009-01-01

    Dissolved oxygen (DO) is probably the most important parameter related to water quality and biological habitat in aquatic environments. In situ DO sensors are some of the most valuable tools used by scientists and engineers for the evaluation of water quality in aquatic ecosystems. Presently, we cannot accurately measure DO concentrations under variable temperature and pressure conditions. Pressure and temperature influence both polarographic and optical DO sensors relative to the standard Winkler titration method. This study combines laboratory and field experiments to compare and quantify the accuracy and performance of commercially available macro and micro Clark-type oxygen sensors, as well as optical sensing technology, against the Winkler method under changing pressure and temperature conditions. Field measurements at various lake depths revealed sensor response times of up to 11 min due to changes in water temperature, pressure, and DO concentration. Investigators should account for this transient response in DO sensors before measurements are collected at a given location. We have developed an effective model to predict the transient response time for Clark-type oxygen sensors. The proposed procedure increases the accuracy of DO data collected in situ for profiling applications.
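
A common simplified description of such transient behavior (not necessarily the model developed by the authors) is a first-order lag, in which the sensor reading relaxes exponentially toward the true concentration:

```python
import math

def sensor_reading(t, c_true, c_initial, tau):
    """First-order lag model of a DO sensor: the reading relaxes
    exponentially toward the true concentration with time constant tau."""
    return c_true + (c_initial - c_true) * math.exp(-t / tau)

def time_to_settle(tau, fraction=0.95):
    """Time for the reading to cover `fraction` of a step change."""
    return -tau * math.log(1 - fraction)

# Illustrative: with an ASSUMED time constant of 3 min, settling to within
# 5% of a step from 4 to 8 mg/L takes about 9 min, the same order as the
# ~11 min responses observed in the field.
tau = 3.0  # minutes, assumed
print(time_to_settle(tau))                  # ≈ 9.0 minutes
print(sensor_reading(9.0, 8.0, 4.0, tau))   # ≈ 7.8 mg/L
```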

  19. Deflection of Resilient Materials for Reduction of Floor Impact Sound

    PubMed Central

    Lee, Jung-Yoon; Kim, Jong-Mun

    2014-01-01

    Recently, many residents living in apartment buildings in Korea have been bothered by noise coming from the units above. In order to reduce noise pollution, communities are increasingly imposing bylaws, including limits on floor impact sound, minimum floor thicknesses, and floor soundproofing solutions. This research focused specifically on the deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program involved twenty-seven material tests and ten sound-insulating floating concrete floor specimens. Two main parameters were considered in the experimental investigation: the seven types of resilient materials and the location of the loading point. The structural behavior of the sound-insulating floating concrete floor was predicted using the Winkler method. The experimental and analytical results indicated that the cracking strength of the floating concrete floor increased significantly with increasing tangent modulus of the resilient material. The deflection of the floating concrete floor loaded at the side of the specimen was much greater than that of the floor loaded at the center. The Winkler model, accounting for the modulus of the resilient materials, was able to accurately predict the cracking strength of the floating concrete floor. PMID:25574491

  20. Deflection of resilient materials for reduction of floor impact sound.

    PubMed

    Lee, Jung-Yoon; Kim, Jong-Mun

    2014-01-01

    Recently, many residents living in apartment buildings in Korea have been bothered by noise coming from the units above. In order to reduce noise pollution, communities are increasingly imposing bylaws, including limits on floor impact sound, minimum floor thicknesses, and floor soundproofing solutions. This research focused specifically on the deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program involved twenty-seven material tests and ten sound-insulating floating concrete floor specimens. Two main parameters were considered in the experimental investigation: the seven types of resilient materials and the location of the loading point. The structural behavior of the sound-insulating floating concrete floor was predicted using the Winkler method. The experimental and analytical results indicated that the cracking strength of the floating concrete floor increased significantly with increasing tangent modulus of the resilient material. The deflection of the floating concrete floor loaded at the side of the specimen was much greater than that of the floor loaded at the center. The Winkler model, accounting for the modulus of the resilient materials, was able to accurately predict the cracking strength of the floating concrete floor.

  1. Potential improvement of Schmidt-hammer exposure-age dating (SHD) of moraines in the Southern Alps, New Zealand, by application of the new electronic Schmidt-hammer (SilverSchmidt)

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Corbett, David

    2014-05-01

    The Southern Alps of New Zealand are among the few key study sites for investigating Holocene glacier chronologies in the mid-latitudinal Southern Hemisphere. Their characteristic highly dynamic geomorphological process systems prove, however, to be a considerable challenge for all attempts to date and palaeoclimatologically interpret the existing Holocene moraine record. As a multi-proxy approach combining 10Be terrestrial cosmogenic nuclide dating (TCND) with Schmidt-hammer testing, the recently developed Schmidt-hammer exposure-age dating (SHD) has already shown its potential in this study area (cf. Winkler 2005, 2009, 2013). An electronic Schmidt-hammer (named SilverSchmidt) was introduced by the manufacturer of the original mechanical Schmidt-hammer (Proceq SA) a few years ago. It offers, in particular, facilities for much easier data processing and constitutes a major improvement and potential replacement for the mechanical Schmidt-hammer. However, its different approach to the measurement of surface hardness, based on Q-(velocity) values instead of R-(rebound) values, is a potential drawback. This difference effectively means that measurements from the two instruments are not easily interconvertible and, hence, that the instruments cannot be used interchangeably without previous comparative tests of both instruments under field conditions. Both instruments used in this comparative study were N-type models with identical impact energy of 2.207 Nm for the plunger. To compare both instruments and explore interconvertibility, parallel measurements were performed on a selected number of boulders (10 boulders per site with 5 impacts each, at least 2 sites per moraine) on moraines of homogeneous lithology but different established ages covering the entire Holocene and the Late Glacial. All moraines are located east of the Main Divide of the Southern Alps at Mueller Glacier, Tasman Glacier, and in the outer Tasman River Valley. 
All paired samples (n = 50) were collected so that the plunger impacts of both instruments were set close together on the rock surface (to avoid any influence of modifications to the surface by consecutive impacts on the same spot). In order to test their performance at the higher and lower ends of surface hardness, similar paired-sample tests were also made on the full-metal test anvil. The results of paired samples for all sites/moraines reveal that Q-/R-value pairs are closely clustered for young surfaces but more scattered for the older ones, with a correspondingly moderate R2 for a calculated linear trend. The greater variability of the older, weathered surfaces, with greater scatter and hence higher standard deviations and broader confidence intervals, has been recognised in numerous previous Schmidt-hammer studies and is related to the effects of micro-scale lithological variability, which becomes a more pronounced influence with time exposed to subaerial weathering. Most importantly, Q-values and R-values are closely related, and Q-values are systematically higher than R-values by c. 10-12 units over most of the operational range of both instruments. Linear conversion equations indicate that a conversion factor on the order of +11 units is applicable when converting R-values to Q-values. These estimates agree well with data obtained on the standard test anvil. Given the apparent interconvertibility of the two instruments, the SilverSchmidt is regarded as a potential replacement for the mechanical Schmidt-hammer. This enables, moreover, continuity in study areas with existing R-value data archives. However, when comparing data sets of different age, adjustments must be made for any changes to the instrumental calibration value over time. References: Winkler, S. (2005): The 'Schmidt hammer' as a relative-age dating technique: potential and limitations of its application on Holocene moraines in Mt Cook National Park, Southern Alps, New Zealand. 
New Zealand Journal of Geology and Geophysics 48, 105-116. Winkler, S. (2009): First attempt to combine terrestrial cosmogenic nuclide (10Be) and Schmidt hammer relative-age dating: Strauchon Glacier, Southern Alps, New Zealand. Central European Journal of Geosciences 1, 274-290. Winkler, S. (2013): Investigation of late-Holocene moraines in the western Southern Alps, New Zealand, applying Schmidt-hammer exposure-age dating (SHD). The Holocene (online), doi: 10.1177/0959683613512169.
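
The reported relation between the two instruments amounts to a simple conversion; the helper below treats it as a constant +11-unit offset with unit slope, which is a simplification of the study's linear conversion equations:

```python
def r_to_q(r_value, offset=11.0):
    """Convert a mechanical Schmidt-hammer R-(rebound) value to an
    approximate SilverSchmidt Q-(velocity) value.

    The study reports Q-values systematically ~10-12 units above R-values
    over most of the operational range; a constant +11 offset with unit
    slope is a simplifying assumption used here for illustration.
    """
    return r_value + offset

# A mean rebound value of, say, R = 45 on a moraine boulder would
# correspond to roughly Q = 56 on the SilverSchmidt:
print(r_to_q(45))  # → 56.0
```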

  2. NREL Engineers Look for a Cool Way to Make AC Units an Affordable Snap |

    Science.gov Websites

    Engineers Chuck Booten and Jon Winkler install the components of the EcoSnap-AC, drilling a hole in the wall to mount the unit while eliminating air leaks and water intrusion. (Photos by Dennis Schroeder.)

  3. A multiplet table for Mn I (Adelman, Svatek, Van Winkler, Warren 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1989-01-01

    The machine-readable version of the multiplet table, as it is currently being distributed from the Astronomical Data Center, is described. The computerized version of the table contains data on excitation potentials, J values, multiplet terms, intensities of the transitions, and multiplet numbers. Files ordered by multiplet and by wavelength are included in the distributed version.

  4. The Effect of Adaptive Nonlinear Frequency Compression on Phoneme Perception.

    PubMed

    Glista, Danielle; Hawkins, Marianne; Bohnert, Andrea; Rehmann, Julia; Wolfe, Jace; Scollie, Susan

    2017-12-12

    This study implemented a fitting method, developed for use with frequency lowering hearing aids, across multiple testing sites, participants, and hearing aid conditions to evaluate speech perception with a novel type of frequency lowering. A total of 8 participants, including children and young adults, participated in real-world hearing aid trials. A blinded crossover design, including posttrial withdrawal testing, was used to assess aided phoneme perception. The hearing aid conditions included adaptive nonlinear frequency compression (NFC), static NFC, and conventional processing. Enabling either adaptive NFC or static NFC improved group-level detection and recognition results for some high-frequency phonemes, when compared with conventional processing. Mean results for the distinction component of the Phoneme Perception Test (Schmitt, Winkler, Boretzki, & Holube, 2016) were similar to those obtained with conventional processing. Findings suggest that both types of NFC tested in this study provided a similar amount of speech perception benefit, when compared with group-level performance with conventional hearing aid technology. Individual-level results are presented with discussion around patterns of results that differ from the group average.

  5. Reliability of a Measure of Institutional Discrimination against Minorities

    DTIC Science & Technology

    1979-12-01

    samples are presented. The first is based upon classical statistical theory and the second derives from a series of computer-generated Monte Carlo...Institutional racism and sexism. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1978. Hays, W. L. and Winkler, R. L. Statistics: probability, inference... statistical measure of the e of institutional discrimination are discussed. Two methods of dealing with the problem of reliability of the measure in small

  6. Cooperative Security in Northeast Asia: Ramifications of Change in the U.S. and ROK Maritime Strategies

    DTIC Science & Technology

    2002-09-01

    maritime defense preparations had taken place in the South. Under the direction of then Lieutenant Commander Sohn Won-Yil, a Maritime Affairs...role in capturing and destroying several of 71 “The Late Admiral Sohn Won-Yil, The Founder of the ROK...officers 74 Winkler, 18. 75 “The Late Admiral Sohn Won-Yil, The Founder of the ROK Navy (1909-1980

  7. Solution of Peter Winkler's Pizza Problem

    NASA Astrophysics Data System (ADS)

    Cibulka, Josef; Kynčl, Jan; Mészáros, Viola; Stolař, Rudolf; Valtr, Pavel

    Bob cuts a pizza into slices of not necessarily equal size and shares it with Alice by alternately taking turns. One slice is taken in each turn. The first turn is Alice's. She may choose any of the slices. In all other turns only those slices can be chosen that have a neighbor slice already eaten. We prove a conjecture of Peter Winkler by showing that Alice has a strategy for obtaining 4/9 of the pizza. This is best possible, that is, there is a cutting and a strategy for Bob to get 5/9 of the pizza. We also give a characterization of Alice's best possible gain depending on the number of slices. For a given cutting of the pizza, we describe a linear time algorithm that computes Alice's strategy gaining at least 4/9 of the pizza and another algorithm that computes the optimal strategy for both players in any possible position of the game in quadratic time. We distinguish two types of turns, shifts and jumps. We prove that Alice can gain 4/9, 7/16 and 1/3 of the pizza if she is allowed to make at most two jumps, at most one jump and no jump, respectively, and the three constants are the best possible.
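
The optimal-play computation mentioned in the abstract can be sketched with a standard interval minimax: because a slice may only be taken next to an already-eaten one, the eaten slices always form a contiguous arc, so every position is described by the remaining arc's start and length. The following is an independent illustration, not the authors' algorithm:

```python
from functools import lru_cache

def best_first_player_gain(slices):
    """Optimal gain for the first player (Alice) in Winkler's pizza game.

    Eaten slices always form a contiguous arc; after the first pick the
    remaining slices form an arc and each turn removes one of its two ends.
    """
    n = len(slices)
    if n == 0:
        return 0
    total = sum(slices)

    def arc_sum(start, length):
        return sum(slices[(start + j) % n] for j in range(length))

    @lru_cache(maxsize=None)
    def mover_value(start, length):
        # Maximum total the player to move can collect from the arc
        # slices[start .. start+length-1] (indices taken mod n).
        if length == 0:
            return 0
        rest = arc_sum(start, length)
        # Take an end, then receive whatever the opponent leaves behind.
        take_left = rest - mover_value((start + 1) % n, length - 1)
        take_right = rest - mover_value(start, length - 1)
        return max(take_left, take_right)

    # Alice picks slice i; Bob then moves on the remaining arc of n-1
    # slices, and Alice ends up with everything Bob does not take.
    return max(total - mover_value((i + 1) % n, n - 1) for i in range(n))

print(best_first_player_gain([1, 2, 3]))     # → 4
print(best_first_player_gain([1, 1, 1, 1]))  # → 2
```

As written, `arc_sum` makes this O(n³); precomputing prefix sums recovers the quadratic time mentioned in the abstract.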

  8. High 5-hydroxymethylfurfural concentrations are found in Malaysian honey samples stored for more than one year.

    PubMed

    Khalil, M I; Sulaiman, S A; Gan, S H

    2010-01-01

    5-Hydroxymethylfurfural (HMF) content is an indicator of the purity of honey. High concentrations of HMF in honey indicate overheating, poor storage conditions, and old honey. This study investigated the HMF content of nine Malaysian honey samples, as well as the correlation of HMF formation with the physicochemical properties of honey. Based on the recommendation of the International Honey Commission, three methods for the determination of HMF were used: (1) high performance liquid chromatography (HPLC), (2) the White spectrophotometric method, and (3) the Winkler spectrophotometric method. HPLC and White spectrophotometric results yielded nearly identical values, whereas the Winkler method gave higher readings. The physicochemical properties of honey (pH, free acids, lactones, and total acids) showed significant correlation with HMF content and may provide parameters for quick assessments of honey quality. The HMF content of fresh Malaysian honey samples stored for 3-6 months (2.80-24.87 mg/kg) was within the internationally recommended limit (80 mg/kg for tropical honeys), while honey samples stored for longer periods (12-24 months) contained much higher HMF concentrations (128.19-1131.76 mg/kg). It is therefore recommended that honey generally be consumed within one year, regardless of type. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  9. Unfocused Energy: A Strategic Approach to U.S. Communications in Afghanistan

    DTIC Science & Technology

    2010-05-21

    of public relations through mass manipulation. Bernays professionally espoused the notion that in a democracy, the 14 Allan M. Winkler, The Politics...2009. 37 BIBLIOGRAPHY Al-Imarat, M. The Information Revolution and the Arab World: Its Impact on State and Society. Abu Dhabi, UAE, 1998...Dizard, W. P., The Strategy of Truth: The Story of the U.S. Information Agency, Washington D.C.: Public Affairs Press, 1961. Donini, Antonio, Norah

  10. Dunk Tank Hits the Mark at Take Your Child To Work Day | Poster

    Cancer.gov

    By Carolynne Keenan, Contributing Writer Robin Winkler-Pickett has known Jim Cherry, Ph.D., scientific program director, and Craig Reynolds, Ph.D., director, Office of Scientific Operations, both NCI at Frederick, for many years. “We’ve been friends for a long time.” So when she heard about the chance to dunk each of them at Take Your Child to Work Day (TYCTWD) on June 25,

  11. A Systems Biology Approach to Heat Stress, Heat Injury and Heat Stroke

    DTIC Science & Technology

    2015-01-01

    Winkler et al., “Computational lipidology: predicting lipoprotein density profiles in human blood plasma,” PLoS Comput Biol, 4(5), e1000079 (2008). [74...other organs at high risk for injury, such as liver and kidney [24, 25]. 2.1 Utility of the computational model Molecular indicators of heat...induced heart injury had a large shift in relative abundance of proteins with high supersaturation scores, suggesting increased abundance of

  12. Montgomery Point Lock and Dam, White River, Arkansas

    DTIC Science & Technology

    2016-01-01

    ERDC/CHL TR-16-1, Monitoring Completed Navigation Projects (MCNP) Program: Montgomery Point Lock and Dam, White River, Arkansas. Co...Navigation Projects (MCNP) Program, ERDC/CHL TR-16-1, January 2016. Montgomery Point Lock and Dam, White River, Arkansas, by Allen Hammack, Michael Winkler, and...20314-1000. Under MCNP Work Unit: Montgomery Point Lock and Dam, White River, Arkansas. Abstract: Montgomery Point Lock and

  13. Army Logistician. Volume 39, Issue 4, July-August 2007

    DTIC Science & Technology

    2007-08-01

    because they fit underneath, by Captain Joy A. Schmalzle, Battlefield Vision: Eyeglasses for the Soldier, ARMY LOGISTICIAN PROFESSIONAL BULLETIN...a lens to fit into the frame. To create sunglasses, lenses are placed into a tint bath until they reach the desired darkness. The OptiCast system...Staff Sergeant Michael P. Winkler, USAR 28 Battlefield Vision: Eyeglasses for the Soldier—Captain Joy A. Schmalzle 31 Tiedown for Safety and

  14. Refractive Index of Silicon and Germanium and Its Wavelength and Temperature Derivatives.

    DTIC Science & Technology

    1979-03-01

    of a misnomer and, although Clemens Winkler is credited with the discovery of the element in 1886, germanium has become an element of interest in...rather small in covalent semiconductors like Si and Ge, it increases, however, with increasing polarity. Both the radio-frequency measurement and...temperature region 250-480 K, but nonlinearity progressively predominates at lower temperatures, as seen from figure 7. Lukes and Schmidt [18] studied

  15. Failure Processes in Embedded Monolayer Graphene under Axial Compression

    PubMed Central

    Androulidakis, Charalampos; Koukaras, Emmanuel N.; Frank, Otakar; Tsoukleri, Georgia; Sfyris, Dimitris; Parthenios, John; Pugno, Nicola; Papagelis, Konstantinos; Novoselov, Kostya S.; Galiotis, Costas

    2014-01-01

    Exfoliated monolayer graphene flakes were embedded in a polymer matrix and loaded under axial compression. By monitoring the shifts of the 2D Raman phonons of rectangular flakes of various sizes under load, the critical strain to failure was determined. Prior to loading care was taken for the examined area of the flake to be free of residual stresses. The critical strain values for first failure were found to be independent of flake size at a mean value of –0.60% corresponding to a yield stress up to -6 GPa. By combining Euler mechanics with a Winkler approach, we show that unlike buckling in air, the presence of the polymer constraint results in graphene buckling at a fixed value of strain with an estimated wrinkle wavelength of the order of 1–2 nm. These results were compared with DFT computations performed on analogue coronene/PMMA oligomers and a reasonable agreement was obtained. PMID:24920340
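
The buckling behavior quoted above is consistent with the classical result for a beam or plate on a Winkler foundation, where the wrinkle wavelength λ = 2π(D/k)^(1/4) depends only on the bending stiffness D and foundation modulus k, not on the flake length, in line with the size-independent critical strain. The numbers below are illustrative assumptions, not values fitted by the authors:

```python
import math

def winkler_wrinkle_wavelength(bending_stiffness, foundation_modulus):
    """Classical buckling wavelength of a beam/plate on a Winkler
    foundation: lambda = 2*pi*(D/k)**0.25; independent of length."""
    return 2 * math.pi * (bending_stiffness / foundation_modulus) ** 0.25

# Illustrative values only (ASSUMED, not fitted to the paper):
# D ~ 2.4e-19 N*m (monolayer graphene bending stiffness, ~1.5 eV) and
# an assumed polymer-interface foundation modulus k ~ 7e19 N/m^3.
wavelength = winkler_wrinkle_wavelength(2.4e-19, 7e19)
print(f"{wavelength * 1e9:.2f} nm")  # → 1.52 nm, within the reported 1-2 nm range
```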

  16. Molecular Analysis of Motility in Metastatic Mammary Adenocarcinoma Cells

    DTIC Science & Technology

    1996-09-01

    elements of epidermoid carcinoma (A431) cells. J. Cell. Biol. 103: 87-94. Winkler, M. (1988). Translational regulation in sea urchin eggs: a complex...and Methods. Error bars show SEM. Figure 2. Rhodamine-actin polymerizes preferentially at the tips of lamellipods in EGF-stimulated cells. MTLn3...lamellipods. B) rhodamine-actin intensity at the cell center. Data for each time point is the average and SEM of 15 different cells. Images A and B

  17. Building Maintenance and Repair Data for Life-Cycle Cost Analyses: Electrical Systems.

    DTIC Science & Technology

    1991-05-01

    Repair Data for Life-Cycle Cost Analyses: Electrical Systems, by Edgar S. Neely, Robert D. Neathammer, James R. Stirn, and Robert P. Winkler. This research...systems have been developed to assist planners in preparing DD Form 1391 documentation, designers in life-cycle cost component selection, and maintainers...Maintenance and Repair Data for Life-Cycle Cost Analyses: Electrical Systems, RDTE dated 1980, REIMB 1984-1989. AUTHOR(S): Edgar S. Neely, Robert D

  18. Physical, Nutrient, and Biological Measurements of Coastal Waters off Central California in March 2012

    DTIC Science & Technology

    2012-10-01

    Salinity Scale, 1978 (UNESCO, 1981). Dissolved oxygen (Winkler) samples were collected at CTD stations 2, 6, 10, 16, 17, and 19. These were...the Farallones. Deep-Sea Res. II 47: 907-946. UNESCO. Background papers and supporting data on the Practical Salinity Scale, 1978. 1981. UNESCO Tech. Pap. in Mar. Sci. 37. Venrick, E. L., and T. L. Hayward. 1984. Determining chlorophyll on the 1984 CalCOFI surveys. CalCOFI Rep

  19. The general solution to the classical problem of the finite Euler-Bernoulli beam

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Amba-Rao, C. L.

    1977-01-01

    An analytical solution is obtained for the problem of free and forced vibrations of a finite Euler-Bernoulli beam with arbitrary (partially fixed) boundary conditions. The effects of linear viscous damping, a Winkler foundation, constant axial tension, a concentrated mass, and an arbitrary forcing function are included in the analysis. No restriction is placed on the values of the parameters involved, and the solution presented here contains all cited previous solutions as special cases.
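
For the special case of simply supported ends, the natural frequencies with a Winkler foundation and constant axial tension take a simple closed form; the sketch below uses that special case (no damping or concentrated mass, illustrative parameters) rather than the paper's general solution:

```python
import math

def natural_frequency(n, EI, tension, k_winkler, mass_per_length, length):
    """Natural angular frequency (rad/s) of mode n for a simply supported
    Euler-Bernoulli beam with constant axial tension on a Winkler foundation:

        omega_n**2 = (EI*(n*pi/L)**4 + T*(n*pi/L)**2 + k) / m

    Damping and concentrated masses, which the paper's general solution
    handles, are omitted from this simplified sketch."""
    beta = n * math.pi / length
    return math.sqrt((EI * beta**4 + tension * beta**2 + k_winkler) / mass_per_length)

# Both the foundation stiffness and the axial tension raise the frequency
# (illustrative parameter values, assumed):
base = natural_frequency(1, EI=2.0e4, tension=0.0, k_winkler=0.0,
                         mass_per_length=10.0, length=5.0)
stiffened = natural_frequency(1, EI=2.0e4, tension=5.0e3, k_winkler=1.0e5,
                              mass_per_length=10.0, length=5.0)
# stiffened > base
```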

  20. Dynamics of Inhomogeneous Shell Systems Under Non-Stationary Loading (Survey)

    NASA Astrophysics Data System (ADS)

    Lugovoi, P. Z.; Meish, V. F.

    2017-09-01

    Experimental works on the dynamics of smooth and stiffened cylindrical shells in contact with a soil medium under various non-stationary loads are reviewed. The results of studying three-layer shells of revolution, whose equations of motion are obtained within the framework of the hypotheses of the geometrically nonlinear Timoshenko theory, are stated. The numerical results for shells with a piecewise or discrete filler make it possible to estimate the influence of geometrical and physical-mechanical parameters of structures on their dynamics and reveal new mechanical effects. Based on the classical theory of shells and rods, the effect of the discrete arrangement of ribs and of the coefficients of a Winkler or Pasternak elastic foundation on the natural frequencies and modes of rectangular planar cylindrical and spherical shells is studied. The number and shape of dispersion curves for longitudinal harmonic waves in a stiffened cylindrical shell are determined. The equations of vibration of ribbed shells of revolution on a Winkler or Pasternak elastic foundation are obtained using the geometrically nonlinear theory and the Timoshenko hypotheses. Applying the integral-interpolational method, numerical algorithms are developed and the corresponding non-stationary problems are solved. Special attention is paid to the statement and solution of coupled problems on the dynamic interaction of cylindrical or spherical shells with water-saturated soil media of different structure.

  1. U.S. Department of Defense Experiences with Substituting Government Employees for Military Personnel: Challenges and Opportunities

    DTIC Science & Technology

    2016-01-01

    Positions to Civilian Positions, Washington, D.C., GAO-08-370R, 2008. 12 Asch and Winkler, 2013. 13 Keller et al., 2013. 14 Whitley et al., 2014. 15 CBO...and Army were instead relying on composite military rates that did not account for such factors as training and recruitment.19 As Whitley et al. show...Calif.: MG-598-OSD, 2007. 19 GAO, 2008. 20 Whitley et al., 2014. Best Practices for Employing Military-to-Civilian Conversions 71 workers.21

  2. On the asymptotic behavior of radial entire solutions for the equation (-Δ)^3 u = u^p in R^n

    NASA Astrophysics Data System (ADS)

    Tai, Nguyen Tien

    2018-03-01

    Our main task in this note is to prove the existence and to classify the exact growth at infinity of radial positive C^6-solutions of (-Δ)^3 u = u^p in R^n, where n ⩾ 15 and p is bounded from below by the sixth-order Joseph-Lundgren exponent. Following the work of Winkler, we use the sub- and super-solution method and a comparison principle to establish the asymptotic behavior of solutions.

  3. Structural Analysis Computer Programs for Rigid Multicomponent Pavement Structures with Discontinuities--WESLIQID and WESLAYER. Report 1. Program Development and Numerical Presentations.

    DTIC Science & Technology

    1981-05-01

    represented as a Winkler foundation. The program can treat any number of slabs connected by steel bars or other load transfer devices at the joints...dimensional finite element method. The inherent flexibility of such an approach permits the analysis of a rigid pavement with steel bars and stabilized...layers and provides an efficient tool for analyzing stress conditions at the joint. Unfortunately, such a procedure would require a tremendously

  4. VizieR Online Data Catalog: GMOS spectroscopic obs. of SNR candidates in M83 (Winkler+, 2017)

    NASA Astrophysics Data System (ADS)

    Winkler, P. F.; Blair, W. P.; Long, K. S.

    2017-11-01

    We used the GMOS on the 8.2m Gemini-South telescope to obtain all the spectra reported here. Most were obtained in a classically scheduled observing run on 2011 April 7-9 (UT); masks 1-7. We later obtained spectra for two additional masks (which we refer to as masks 8 and 9 for simplicity) in a queue-scheduled program (GS-2015A-Q-90) during the 2015A semester. (6 data files).

  5. Models of Information Aggregation Pertaining to Combat Identification: A Review of the Literature (Modele du Regroupement de L’information Concernant L’identification au Combat: Une Analyse Documentaire)

    DTIC Science & Technology

    2007-04-01

    communication from the gunner who is able to offer enhanced visual information about the entity (e.g., insignia, type of weaponry) or radio contact may...1999 (Fuzzy Logic); Clemen & Winkler, in press (Bayes Theorem); Sentz & Ferson, 2002 (Dempster-Shafer)). Humansystems® Combat Identification...incidents when other units get lost and appear in unexpected locations. The formation radios for additional information from the operations officer

  6. Impact of Nonlinearity of The Contact Layer Between Elements Joined in a Multi-Bolted System on Its Preload

    NASA Astrophysics Data System (ADS)

    Grzejda, R.

    2017-12-01

    The paper deals with modelling and calculations of asymmetrical multi-bolted joints at the assembly stage. The physical model of the joint is based on a system composed of four subsystems, which are: a couple of joined elements, a contact layer between the elements, and a set of bolts. The contact layer is assumed as the Winkler model, which can be treated as a nonlinear or linear model. In contrast, the set of bolts is modelled using simplified beam models, known as spider bolt models. The theorem according to which nonlinearity of the contact layer has a negligible impact on the final preload of the joint in the case of its sequential tightening has been verified. Results of sample calculations for the selected multi-bolted system, in the form of diagrams of preloads in the bolts as well as normal contact pressure between the joined elements during the assembly process and at its end, are presented.

  7. Beams on nonlinear elastic foundation

    NASA Astrophysics Data System (ADS)

    Lukkassen, Dag; Meidell, Annette

    2014-12-01

    In order to determine vertical deflections and rail bending moments, the Winkler model (1867) is often used. This linear model neglects several conditions. For example, experimental results show a substantial increase in the maximum rail deflection and rail bending moment when the nonlinearity of the track support system is taken into account. A deeper mathematical analysis of the models is necessary in order to obtain better methods for more accurate numerical solutions in the determination of deflections and rail bending moments. This paper is intended to be a small step in this direction.
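The linear Winkler model discussed above admits a classical closed-form solution: for an infinite Euler-Bernoulli beam on a Winkler foundation under a point load, deflection and bending moment decay exponentially away from the load (Hetényi's solution). A minimal sketch, with illustrative parameters not taken from the paper:

```python
import math

def winkler_beam_response(P, EI, k, x):
    """Deflection w and bending moment M at distance x from a point load P
    on an infinite Euler-Bernoulli beam resting on a linear Winkler
    foundation of modulus k (force/length^2). Classic closed-form solution."""
    beta = (k / (4.0 * EI)) ** 0.25          # characteristic wavenumber
    bx = beta * abs(x)
    damp = math.exp(-bx)                     # exponential decay envelope
    w = P * beta / (2.0 * k) * damp * (math.cos(bx) + math.sin(bx))
    M = P / (4.0 * beta) * damp * (math.cos(bx) - math.sin(bx))
    return w, M
```

At the load point this reduces to w(0) = Pβ/2k and M(0) = P/4β; a nonlinear track support, as the paper notes, would increase both relative to this linear prediction.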

  8. Analysis and Evaluation of German Attainments and Research in the Liquid Rocket Engine Field. Volume 4. Propellant Injectors

    DTIC Science & Technology

    1951-02-01

    they were observed at a given pressure drop in "cold" testing with water or unreacted propellants. heat-transfer considerations and the location of... water as a coolant in the main chamber. The Winkler injector was used on a test unit developing a thrust of 220 lb and an exhaust velocity of 6370 ft... water. Provision was made for an igniter in the center of the injector. The relatively high performance reported for this unit does not seem to be

  9. Net community production from autonomous oxygen observations in the Sargasso Sea

    NASA Astrophysics Data System (ADS)

    Feen, M.; Estapa, M. L.

    2016-02-01

    Optical sensors on autonomous floats provide high-resolution profiles of oxygen concentration over time. Improved spatiotemporal resolution in our measurements of oxygen will allow for better estimates of net community production and a greater understanding of the biological pump. Two autonomous profiling floats (NAVIS BGCi, Sea-Bird) equipped with SBE-63 optodes to measure dissolved oxygen were deployed in the Sargasso Sea on a series of five Bermuda Atlantic Time-series Study (BATS) cruises from July 2013 to April 2014. In situ calibration of the oxygen sensors to Winkler titration bottle samples at BATS did not show systematic drift in the oxygen sensors over time. Calibrations were applied to determine oxygen concentrations in profiles collected in the Sargasso Sea at 1.5 to 2.5 day intervals over a year. Oxygen concentrations were used to quantify sub-mixed layer net community production. Changes in production rates from this study were compared with upper water column biology and particle flux measurements obtained independently from optical sensors on the profiling floats, allowing us to examine processes controlling carbon export into the deep ocean.
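In situ calibration of optode oxygen data against discrete Winkler titrations, as described above, typically amounts to a linear gain/offset fit. A minimal sketch with hypothetical paired values (not data from this study):

```python
import numpy as np

# Hypothetical paired samples: optode-derived O2 vs. Winkler titration
# values (umol/kg). Real calibrations use many such pairs per deployment.
optode  = np.array([210.1, 215.3, 198.7, 225.0, 204.4])
winkler = np.array([214.8, 220.2, 203.1, 230.3, 209.0])

# Fit a linear gain/offset correction: O2_true ~ gain * O2_optode + offset
gain, offset = np.polyfit(optode, winkler, 1)
corrected = gain * optode + offset

# Residuals after correction indicate remaining sensor error; a trend in
# residuals across successive cruises would indicate sensor drift.
residuals = winkler - corrected
```

Repeating the fit for each cruise and inspecting the drift of `gain` and `offset` over time is one simple way to test for the systematic sensor drift that the study ruled out.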

  10. The Detection of Gravitational Waves

    NASA Astrophysics Data System (ADS)

    Blair, David G.

    2005-10-01

    Part I. An Introduction to Gravitational Waves and Methods for their Detection: 1. Gravitational waves in general relativity D. G. Blair; 2. Sources of gravitational waves D. G. Blair; 3. Gravitational wave detectors D. G. Blair; Part II. Gravitational Wave Detectors: 4. Resonant-bar detectors D. G. Blair; 5. Gravity wave dewars W. O. Hamilton; 6. Internal friction in high Q materials J. Ferreirinho; 7. Motion amplifiers and passive transducers J. P. Richard; 8. Parametric transducers P. J. Veitch; 9. Detection of continuous waves K. Tsubono; 10. Data analysis and algorithms for gravitational wave antennas G. V. Pallottino; Part III. Laser Interferometer Antennas: 11. A Michelson interferometer using delay lines W. Winkler; 12. Fabry-Perot cavity gravity-wave detectors R. W. P. Drever; 13. The stabilisation of lasers for interferometric gravitational wave detectors J. Hough; 14. Vibration isolation for the test masses in interferometric gravitational wave detectors N. A. Robertson; 15. Advanced techniques A. Brillet; 16. Data processing, analysis and storage for interferometric antennas B. F. Schutz; 17. Gravitational wave detection at low and very low frequencies R. W. Hellings.

  11. Mechanics of Granular Materials (MGM) Investigators

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Key personnel in the Mechanics of Granular Materials (MGM) experiment at the University of Colorado at Boulder include Tawnya Ferbiak (software engineer), Susan Batiste (research assistant), and Christina Winkler (graduate research assistant). Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: University of Colorado at Boulder).

  12. Microgravity

    NASA Image and Video Library

    2000-07-01

    Key personnel in the Mechanics of Granular Materials (MGM) experiment at the University of Colorado at Boulder include Tawnya Ferbiak (software engineer), Susan Batiste (research assistant), and Christina Winkler (graduate research assistant). Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. MGM experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. (Credit: University of Colorado at Boulder).

  13. 'Incongruous juxtapositions': the chimaera and Mrs McK.

    PubMed

    Martin, Aryn

    2007-09-01

    A century ago, the German botanist Hans Winkler (best known for coining the term 'genome') accomplished two novel transplantations. First, he produced a single plant that grafted together two completely disparate species: tomato and nightshade. Second, he chose the descriptive word 'chimaera' to name his innovation, transplanting the term from mythology to biology. This paper features Mrs McK, the first human chimera, and thus follows the term from botany to clinical medicine. Her remarkable story, pieced together from the notes, drafts and correspondence of Robert Race and his colleagues at the MRC Blood Group Unit, draws attention to the significance of names and naming.

  14. Wave dispersion of carbon nanotubes conveying fluid supported on linear viscoelastic two-parameter foundation including thermal and small-scale effects

    NASA Astrophysics Data System (ADS)

    Sina, Nima; Moosavi, Hassan; Aghaei, Hosein; Afrand, Masoud; Wongwises, Somchai

    2017-01-01

    In this paper, for the first time, a nonlocal Timoshenko beam model is employed for studying the wave dispersion of a fluid-conveying single-walled carbon nanotube on a viscoelastic Pasternak foundation under high and low temperature change. In addition, the phase and group velocities for the nanotube are discussed. The influences of the Winkler and Pasternak moduli, homogeneous temperature change, steady flow velocity and damping factor of the viscoelastic foundation on the wave dispersion of carbon nanotubes are investigated. It was observed that waves in fluid-conveying carbon nanotubes exhibit normal dispersion. Moreover, introducing the viscoelastic foundation increases the wave frequencies.
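The phase and group velocities discussed above follow directly from a dispersion relation ω(k) as v_p = ω/k and v_g = dω/dk. A sketch using the simpler local Euler-Bernoulli relation for a beam on a two-parameter (Winkler-Pasternak) foundation, not the nonlocal Timoshenko model of the paper; all parameter values are illustrative:

```python
import numpy as np

def dispersion(k, EI, rhoA, kw, gp):
    """Flexural dispersion of an Euler-Bernoulli beam on a two-parameter
    (Winkler kw + Pasternak gp) foundation: rhoA*omega^2 = EI*k^4 + gp*k^2 + kw.
    Illustrative only -- the paper's nonlocal Timoshenko model modifies this."""
    return np.sqrt((EI * k**4 + gp * k**2 + kw) / rhoA)

k = np.linspace(0.1, 10.0, 500)                 # wavenumber range
omega = dispersion(k, EI=1.0, rhoA=1.0, kw=4.0, gp=0.5)

v_phase = omega / k                # phase velocity omega/k
v_group = np.gradient(omega, k)   # group velocity d(omega)/dk, numerically
```

Note the Winkler term acts as a cutoff: as k → 0 the frequency tends to √(kw/ρA), so the phase velocity diverges while the group velocity vanishes.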

  15. Electron and positron scattering from CF 3I molecules below 600 eV: a comparison with CF 3H

    NASA Astrophysics Data System (ADS)

    Kawada, Michihito K.; Sueoka, Osamu; Kimura, Mineo

    2000-11-01

    The total cross-sections (TCSs) for electron and positron scattering from CF 3I molecules have been studied experimentally. A theoretical analysis based on the continuum multiple-scattering (CMS) method has been performed to understand the origin of resonances and the elastic cross-sections. The present TCS for electron scattering is found to be larger by about 20% than that of T. Underwood-Lemons, D.C. Winkler, J.A. Tossel, J.H. Moore [J. Chem. Phys. 100 (1994) 9117], although the general shape agrees well over the entire energy range studied. The difference in the cross-sections for CF 3I and CF 3H is explained by the sizes and the dipole moments of these molecules.

  16. The botanical activities of George Edward Post (1838-1909).

    PubMed

    Musselman, Lytton John

    2006-01-01

    George Edward Post wrote the first flora of the Middle East in English. His other botanical activities are less familiar. In addition to the flora, this paper discusses his teaching, fieldwork, contribution to Bible dictionaries, relations with the Boissier Herbarium in Geneva, establishment of the herbarium, and letters. Those letters are used here for the first time. Post corresponded with botanical luminaries of his day including Autran, Baker, Balfour, Barbey, Boissier, Bornmüller, Carruthers, Denslow, Haussknecht, Hooker, Schweinfurth, Thistleton-Dyer, Torrey, and Winkler. His long-term relationship with the herbarium at Geneva is highlighted. In addition, some of the lesser understood aspects of his life including chaplaincy during the American Civil War, and missionary to Syria are discussed.

  17. Microbial oceanography of anoxic oxygen minimum zones.

    PubMed

    Ulloa, Osvaldo; Canfield, Donald E; DeLong, Edward F; Letelier, Ricardo M; Stewart, Frank J

    2012-10-02

    Vast expanses of oxygen-deficient and nitrite-rich water define the major oxygen minimum zones (OMZs) of the global ocean. They support diverse microbial communities that influence the nitrogen economy of the oceans, contributing to major losses of fixed nitrogen as dinitrogen (N(2)) and nitrous oxide (N(2)O) gases. Anaerobic microbial processes, including the two pathways of N(2) production, denitrification and anaerobic ammonium oxidation, are oxygen-sensitive, with some occurring only under strictly anoxic conditions. The detection limit of the usual method (Winkler titrations) for measuring dissolved oxygen in seawater, however, is much too high to distinguish low oxygen conditions from true anoxia. However, new analytical technologies are revealing vanishingly low oxygen concentrations in nitrite-rich OMZs, indicating that these OMZs are essentially anoxic marine zones (AMZs). Autonomous monitoring platforms also reveal previously unrecognized episodic intrusions of oxygen into the AMZ core, which could periodically support aerobic metabolisms in a typically anoxic environment. Although nitrogen cycling is considered to dominate the microbial ecology and biogeochemistry of AMZs, recent environmental genomics and geochemical studies show the presence of other relevant processes, particularly those associated with the sulfur and carbon cycles. AMZs correspond to an intermediate state between two "end points" represented by fully oxic systems and fully sulfidic systems. Modern and ancient AMZs and sulfidic basins are chemically and functionally related. Global change is affecting the magnitude of biogeochemical fluxes and ocean chemical inventories, leading to shifts in AMZ chemistry and biology that are likely to continue well into the future.

  18. Microbial oceanography of anoxic oxygen minimum zones

    PubMed Central

    Ulloa, Osvaldo; Canfield, Donald E.; DeLong, Edward F.; Letelier, Ricardo M.; Stewart, Frank J.

    2012-01-01

    Vast expanses of oxygen-deficient and nitrite-rich water define the major oxygen minimum zones (OMZs) of the global ocean. They support diverse microbial communities that influence the nitrogen economy of the oceans, contributing to major losses of fixed nitrogen as dinitrogen (N2) and nitrous oxide (N2O) gases. Anaerobic microbial processes, including the two pathways of N2 production, denitrification and anaerobic ammonium oxidation, are oxygen-sensitive, with some occurring only under strictly anoxic conditions. The detection limit of the usual method (Winkler titrations) for measuring dissolved oxygen in seawater, however, is much too high to distinguish low oxygen conditions from true anoxia. However, new analytical technologies are revealing vanishingly low oxygen concentrations in nitrite-rich OMZs, indicating that these OMZs are essentially anoxic marine zones (AMZs). Autonomous monitoring platforms also reveal previously unrecognized episodic intrusions of oxygen into the AMZ core, which could periodically support aerobic metabolisms in a typically anoxic environment. Although nitrogen cycling is considered to dominate the microbial ecology and biogeochemistry of AMZs, recent environmental genomics and geochemical studies show the presence of other relevant processes, particularly those associated with the sulfur and carbon cycles. AMZs correspond to an intermediate state between two “end points” represented by fully oxic systems and fully sulfidic systems. Modern and ancient AMZs and sulfidic basins are chemically and functionally related. Global change is affecting the magnitude of biogeochemical fluxes and ocean chemical inventories, leading to shifts in AMZ chemistry and biology that are likely to continue well into the future. PMID:22967509

  19. Chandra Discovers Cosmic Cannonball

    NASA Astrophysics Data System (ADS)

    2007-11-01

    One of the fastest moving stars ever seen has been discovered with NASA's Chandra X-ray Observatory. This cosmic cannonball is challenging theories to explain its blistering speed. Astronomers used Chandra to observe a neutron star, known as RX J0822-4300, over a period of about five years. During that span, three Chandra observations clearly show the neutron star moving away from the center of the Puppis A supernova remnant. This remnant is the stellar debris field created during the same explosion in which the neutron star was created about 3700 years ago. Chandra X-ray Image of RX J0822-4300 in Puppis A. By combining how far it has moved across the sky with its distance from Earth, astronomers determined the neutron star is moving at over 3 million miles per hour. At this rate, RX J0822-4300 is destined to escape from the Milky Way after millions of years, even though it has only traveled about 20 light years so far. "This star is moving at 3 million miles an hour, but it's so far away that the apparent motion we see in five years is less than the height of the numerals in the date on a penny, seen from the length of a football field," said Frank Winkler of Middlebury College in Vermont. "It's remarkable, and a real testament to the power of Chandra, that such a tiny motion can be measured." Labeled Image of RX J0822-4300 in Puppis A. "Just after it was born, this neutron star got a one-way ticket out of the Galaxy," said co-author Robert Petre of NASA's Goddard Space Flight Center in Greenbelt, Md. "Astronomers have seen other stars being flung out of the Milky Way, but few as fast as this." So-called hypervelocity stars have been previously discovered shooting out of the Milky Way with speeds around one million miles per hour. One key difference between RX J0822-4300 and these other reported galactic escapees is the source of their speed.
The hypervelocity stars are thought to have been ejected by interactions with the supermassive black hole in the Galaxy's center. CTIO Optical Images of Puppis A. This neutron star, by contrast, was flung into motion by the supernova that created Puppis A. The data suggest the explosion was lop-sided, kicking the neutron star in one direction and the debris from the explosion in the other. The supernova was precipitated when the core of a massive star imploded to form a neutron star. Computer simulations show that the infall of the outer layers of the star onto a neutron star releases an enormous amount of energy. As this energy propagates outward, it can reverse the infall and eject the outer layers of the star at speeds of millions of miles per hour. Due to the complexity of the flow, the ejection is not symmetric, leading to a rocket effect that propels the neutron star in the opposite direction. ROSAT X-ray Image. The breakneck speed of the Puppis A neutron star, plus an apparent lack of pulsations from it, is not easily explained by even the most sophisticated supernova explosion models. "The problem with discovering this cosmic cannonball is we aren't sure how to make the cannon powerful enough," said Winkler. "The high speed might be explained by an unusually energetic explosion, but the models are complicated and hard to apply to real explosions." Other recent work on RX J0822-4300 was published by C. Y. Hui and Wolfgang Becker, both from the Max Planck Institute for Extraterrestrial Physics in Munich, in the journal Astronomy and Astrophysics in late 2006. Using two of the three Chandra observations reported in the Winkler paper and a different analysis technique, the Hui group found a speed for RX J0822-4300 that is about two-thirds as fast, but with larger reported margins of error. The research by Winkler and Petre was published in the November 20 issue of The Astrophysical Journal.
NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass.

  20. Germanium: From Its Discovery to SiGe Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haller, E.E.

    2006-06-14

    Germanium, element No. 32, was discovered in 1886 by Clemens Winkler. Its first broad application was in the form of point contact Schottky diodes for radar reception during WWII. The addition of a closely spaced second contact led to the first all-solid-state electronic amplifier device, the transistor. The relatively low bandgap, the lack of a stable oxide and large surface state densities relegated germanium to the number 2 position behind silicon. The discovery of the lithium drift process, which made possible the formation of p-i-n diodes with fully depletable i-regions several centimeters thick, led germanium to new prominence as the premier gamma-ray detector. The development of ultra-pure germanium yielded highly stable detectors which have remained unsurpassed in their performance. New acceptors and donors were discovered and the electrically active role of hydrogen was clearly established several years before similar findings in silicon. Lightly doped germanium has found applications as far infrared detectors and heavily Neutron Transmutation Doped (NTD) germanium is used in thermistor devices operating at a few millikelvin. Recently germanium has been rediscovered by the silicon device community because of its superior electron and hole mobility and its ability to induce strains when alloyed with silicon. Germanium is again a mainstream electronic material.

  1. Hydroelastic Oscillations of a Circular Plate, Resting on Winkler Foundation

    NASA Astrophysics Data System (ADS)

    Kondratov, D. V.; Mogilevich, L. I.; Popov, V. S.; Popova, A. A.

    2018-01-01

    The forced hydroelastic oscillations of a circular plate resting on an elastic foundation are investigated. The oscillations are caused by a stamp vibrating in interaction with the plate through a thin layer of viscous incompressible liquid. The axisymmetric problem for the regime of steady-state harmonic oscillations is considered. On the basis of the hydroelasticity problem solution, the laws of plate deflection and pressure in the liquid are found. The functions of the amplitude distribution of deflection and liquid pressure along the plate are constructed. The presented mathematical model provides for investigating the interaction dynamics of a viscous liquid layer with a circular plate resting on an elastic foundation. The above-mentioned model also makes it possible to define the resonance frequencies of the plate oscillations and the corresponding amplitudes of deflection and liquid pressure.

  2. Nonlocal elasticity and shear deformation effects on thermal buckling of a CNT embedded in a viscoelastic medium

    NASA Astrophysics Data System (ADS)

    Zenkour, A. M.

    2018-05-01

    The thermal buckling analysis of carbon nanotubes embedded in a visco-Pasternak's medium is investigated. The Eringen's nonlocal elasticity theory, in conjunction with the first-order Donnell's shell theory, is used for this purpose. The surrounding medium is considered as a three-parameter viscoelastic foundation model, Winkler-Pasternak's model as well as a viscous damping coefficient. The governing equilibrium equations are obtained and solved for carbon nanotubes subjected to different thermal and mechanical loads. The effects of nonlocal parameter, radius and length of nanotube, and the three foundation parameters on the thermal buckling of the nanotube are studied. Sample critical buckling loads are reported and graphically illustrated to check the validity of the present results and to present benchmarks for future comparisons.

  3. Solubility of oxygen in a seawater medium in equilibrium with a high-pressure oxy-helium atmosphere.

    PubMed

    Taylor, C D

    1979-06-01

    The molar oxygen concentration in a seawater medium in equilibrium with a high-pressure oxygen-helium atmosphere was measured directly in pressurized subsamples, using a modified version of the Winkler oxygen analysis. At a partial pressure of oxygen of 1 atm or less, its concentration in the aqueous phase was adequately described by Henry's Law at total pressures up to 600 atm. This phenomenon, which permits a straightforward determination of dissolved oxygen within hyperbaric systems, resulted from pressure-induced compensatory alterations in the Henry's Law variables rather than from a true obedience to the Ideal Gas Law. If the partial pressure of a gas contributes significantly to the hydrostatic pressure, Henry's Law is no longer adequate for determining its solubility within the compressed medium.
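The Henry's Law description tested in the paper is a linear relation between dissolved concentration and partial pressure, C = k_H · pO2. A sketch using a typical freshwater Henry constant for O2 near 25 °C (an assumed illustrative value; the paper's seawater, high-pressure effective constants differ):

```python
def dissolved_o2_molar(p_o2_atm, k_h=1.3e-3):
    """Henry's-law estimate of dissolved O2 (mol/L) from its partial
    pressure (atm). k_h ~ 1.3e-3 mol/(L*atm) is a typical value for O2
    in fresh water near 25 C; seawater values are somewhat lower, and the
    paper shows the effective constants shift under high total pressure,
    with compensating changes keeping the relation approximately linear."""
    return k_h * p_o2_atm
```

The paper's central observation is that this linear form continued to hold at total pressures up to 600 atm (for pO2 ≤ 1 atm) because of compensating pressure-induced changes in the Henry's Law variables, not because the gas behaved ideally.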

  4. Wave propagation of carbon nanotubes embedded in an elastic medium

    NASA Astrophysics Data System (ADS)

    Natsuki, Toshiaki; Hayashi, Takuya; Endo, Morinobu

    2005-02-01

    This paper presents analytical models of wave propagation in single- and double-walled carbon nanotubes, as well as nanotubes embedded in an elastic matrix. The nanotube structures are treated within the multilayer thin shell approximation with the elastic properties taken to be those of the graphene sheet. The double-walled nanotubes are coupled together through the van der Waals force between the inner and outer nanotubes. For carbon nanotubes embedded in an elastic matrix, the surrounding elastic medium can be described by a Winkler model. Tube wave propagation of both symmetrical and asymmetrical modes can be analyzed based on the present elastic continuum model. It is found that the asymmetrical wave behavior of single- and double-walled nanotubes is significantly different. The behavior is also different from that in the surrounding elastic medium.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, Kristin H.; Wahl, Karen L.

    The concept of rapid microorganism identification using matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) dates back to the mid-1990’s. Prior to 1998, researchers relied on visual inspection in an effort to demonstrate feasibility of MALDI-MS for bacterial identification (Holland, Wilkes et al. 1996), (Krishnamurthy and Ross 1996), (Claydon, Davey et al. 1996). In general, researchers in these early studies visually compared the biomarker intensity profiles between different organisms and between replicates of the same organism to show that MALDI signatures are unique and reproducible. Manual tabulation and comparison of potential biomarker mass values observed for different organisms was used by numerous researchers to qualitatively characterize microorganisms using MALDI-MS spectra (e.g. (Lynn, Chung et al. 1999), (Winkler, Uher et al. 1999), (Ryzhov, Hathout et al. 2000), (Nilsson 1999)).

  6. A century of Dutch neurology.

    PubMed

    Koehler, P J; Bruyn, G W; Moffie, D

    1998-12-01

    The Netherlands Society of Neurology evolved from the Society of Psychiatry founded in 1871. The name was changed into Netherlands Society of Psychiatry and Neurology (NSPN) in 1897. In the same year, the word neurology was also added to the name of the journal. The Society steadily blossomed, but in 1909 the first signs of dissatisfaction occurred: the Amsterdam Neurologists Society was founded. A few split-offs would follow. The number of members of the NSPN increased from 205 in 1920 to 585 in 1960. In the early 1960s, the Society was reorganised and would consist of two sections, one for psychiatry and one for neurology. However, this would not last, as a full separation was established in 1974. For several reasons, the name of the journal was changed four times until it assumed its present name in 1974. The 100th volume of CNN was not published, as expected, in 1996, but in 1998, because of two skipped publication years, one during WWII and another in the 1970s. During the last decades of the nineteenth century, teaching of neurology was mostly given within the frame of psychiatry, following the German tradition of 'brainpsychiatry' (organic or biologic psychiatry). The first official chair of psychiatry was founded at Utrecht, 1893 (Winkler). In Amsterdam, private teachers such as Delprat taught 'electro-therapy and nervous diseases' since the 1880s. The first extraordinary chair of neurology and electrotherapy was founded for his successor, Wertheim Salomonson, in 1899. The first university clinic for psychiatry and neurology started at the Amsterdam Municipal University, when Winkler became professor of psychiatry and neurology in Amsterdam in 1896. Around the turn of the century, chairs of psychiatry and neurology were also founded in Groningen and Leiden. Separate chairs for neurology and psychiatry appeared in Amsterdam in 1923 and in Utrecht in 1936.
Following an initiative of Brouwer, the first neurological university clinic opened its doors in Amsterdam in 1929. In the 20th century, a number of specialised peripheral neurological clinics and epilepsy institutes were founded. In 1909, the Central Institute for Brain Research was established in Amsterdam.

  7. Indonesian name matching using machine learning supervised approach

    NASA Astrophysics Data System (ADS)

    Alifikri, Mohamad; Arif Bijaksana, Moch.

    2018-03-01

    Most existing name matching methods were developed for English and therefore reflect the characteristics of that language. To date, no method has been designed and implemented specifically for Indonesian names. The purpose of this thesis is to develop an Indonesian name matching dataset as a contribution to academic research and to propose a suitable feature set combining the context of name strings with their permute-winkler scores. Machine learning classification algorithms are taken as the method for performing name matching. Based on the experiments, a tuned Random Forest algorithm with the proposed features improves matching performance by approximately 1.7% and reduces the misclassifications of state-of-the-art methods by up to 70%. This improved performance makes the matching system more effective and reduces the risk of misclassified matches.
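The permute-winkler score mentioned above builds on the Jaro-Winkler string similarity. A minimal, self-contained implementation of plain Jaro-Winkler (the paper's permuted variant and context features are not reproduced here):

```python
def jaro(s1, s2):
    """Jaro similarity in [0, 1]: counts matching characters within a
    sliding window and penalizes transpositions among the matches."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(len(s1), len(s2)) // 2 - 1
    m1, m2 = [False] * len(s1), [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    a = [c for c, f in zip(s1, m1) if f]
    b = [c for c, f in zip(s2, m2) if f]
    transpositions = sum(x != y for x, y in zip(a, b)) / 2
    return (matches / len(s1) + matches / len(s2)
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Jaro-Winkler: boosts the Jaro score for a shared prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for c1, c2 in zip(s1[:4], s2[:4]):
        if c1 != c2:
            break
        prefix += 1
    return j + prefix * p * (1.0 - j)
```

For example, jaro_winkler("MARTHA", "MARHTA") is about 0.961; scores like this, together with name-context features, would feed the Random Forest classifier described in the abstract.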

  8. Influence of foundation mass and surface roughness on dynamic response of beam on dynamic foundation subjected to the moving load

    NASA Astrophysics Data System (ADS)

    Tran Quoc, Tinh; Khong Trong, Toan; Luong Van, Hai

    2018-04-01

    In this paper, the Improved Moving Element Method (IMEM) is used to analyze the dynamic response of Euler-Bernoulli beam structures on a dynamic foundation model subjected to a moving load. The effects of characteristic foundation parameters, such as the Winkler stiffness, the shear layer of the Pasternak model, the viscoelastic dashpot, and the characteristic mass parameter of the foundation, are considered. Beams are modeled by moving elements while the load is fixed. Based on the principle of virtual work and the theory of the moving element method, the differential equation of motion of the system is established and solved by numerical integration based on the Newmark algorithm. The influence of the foundation mass and the roughness of the beam surface on the dynamic response of the beam is examined in detail.
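The Newmark integration named above can be illustrated on a single degree of freedom; the paper applies the same scheme to the assembled moving-element matrices of the beam-foundation system. A sketch of the standard average-acceleration variant (β = 1/4, γ = 1/2), with illustrative parameters:

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, n_steps, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of m*u'' + c*u' + k*u = f(t)
    for a single degree of freedom, starting from rest."""
    u = np.zeros(n_steps + 1)
    v = np.zeros(n_steps + 1)
    a = np.zeros(n_steps + 1)
    a[0] = (f(0.0) - c * v[0] - k * u[0]) / m      # consistent initial accel.
    keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for i in range(n_steps):
        t1 = (i + 1) * dt
        # effective load carries the state at step i forward
        feff = (f(t1)
                + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                       + (1 / (2 * beta) - 1) * a[i])
                + c * (gamma / (beta * dt) * u[i]
                       + (gamma / beta - 1) * v[i]
                       + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = feff / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a
```

With these parameter choices the scheme is unconditionally stable, which is why it is a common default for beam-foundation dynamics of the kind studied here; in the multi-degree-of-freedom case, m, c, and k become the assembled system matrices.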

  9. Organic Acid Production by Filamentous Fungi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnuson, Jon K.; Lasure, Linda L.

    Many of the commercial production processes for organic acids are excellent examples of fungal biotechnology. However, unlike penicillin, the organic acids have had a less visible impact on human well-being. Indeed, organic acid fermentations are often not even identified as fungal bioprocesses, having been overshadowed by the successful deployment of the β-lactam processes. Yet, in terms of productivity, fungal organic acid processes may be the best examples of all. For example, commercial processes using Aspergillus niger in aerated stirred-tank-reactors can convert glucose to citric acid with greater than 80% efficiency and at final concentrations in hundreds of grams per liter. Surprisingly, this phenomenal productivity has been the object of relatively few research programs. Perhaps a greater understanding of this extraordinary capacity of filamentous fungi to produce organic acids in high concentrations will allow greater exploitation of these organisms via application of new knowledge in this era of genomics-based biotechnology. In this chapter, we will explore the biochemistry and modern genetic aspects of the current and potential commercial processes for making organic acids. The organisms involved, with a few exceptions, are filamentous fungi, and this review is limited to that group. Although yeasts including Saccharomyces cerevisiae, species of Rhodotorula, Pichia, and Hansenula are important organisms in fungal biotechnology, they have not been significant for commercial organic acid production, with one exception. The yeast, Yarrowia lipolytica, and related yeast species, may be in use commercially to produce citric acid (Lopez-Garcia, 2002). Furthermore, in the near future engineered yeasts may provide new commercial processes to make lactic acid (Porro, Bianchi, Ranzi, Frontali, Vai, Winkler, & Alberghina, 2002). This chapter is divided into two parts.
The first contains a review of the commercial aspects of current and potential large-scale processes for fungal organic acid production. The second presents a detailed review of current knowledge of the biochemistry and genetic regulation of organic acid biosynthesis. The organic acids considered are limited to polyfunctional acids containing one or more carboxyl groups, hydroxyl groups, or both, that are closely tied to central metabolic pathways. A major objective of the review is to link the biochemistry of organic acid production to the available genomic data.« less

  10. Numerical study of the stress-strain state of reinforced plate on an elastic foundation by the Bubnov-Galerkin method

    NASA Astrophysics Data System (ADS)

    Beskopylny, Alexey; Kadomtseva, Elena; Strelnikov, Grigory

    2017-10-01

The stress-strain state of a rectangular slab resting on an elastic foundation is considered. The slab material is isotropic, and the slab has stiffening ribs directed parallel to both sides of the plate. Governing equations are obtained for determining the deflection for various mechanical and geometric characteristics of the stiffening ribs, which are parallel to different sides of the plate and have different bending and torsional rigidities. The calculation scheme assumes an orthotropic slab with different cylindrical stiffnesses in two mutually perpendicular directions parallel to the reinforcing ribs. The elastic foundation is described by the Winkler model. The deflection is determined by the Bubnov-Galerkin method: it is expanded in a series with unknown coefficients in special polynomials that are combinations of Legendre polynomials.
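The Galerkin procedure described in the abstract can be illustrated in a reduced one-dimensional setting: a hedged sketch for a simply supported beam on a Winkler foundation under uniform load, using sine trial functions rather than the authors' Legendre-based polynomials. All parameter values below are hypothetical.

```python
import math

def galerkin_beam_on_winkler(EI, k, q, L, n_terms=25):
    """Galerkin deflection of a simply supported beam on a Winkler
    foundation under uniform load q. Governing equation:
        EI*w'''' + k*w = q
    With sine trial functions w = sum a_n sin(n*pi*x/L), the Galerkin
    equations decouple: a_n = q_n / (EI*(n*pi/L)**4 + k), where
    q_n = 4*q/(n*pi) for odd n (Fourier sine coefficients of q)."""
    coeffs = {}
    for n in range(1, n_terms + 1, 2):  # only odd terms contribute
        qn = 4.0 * q / (n * math.pi)
        coeffs[n] = qn / (EI * (n * math.pi / L) ** 4 + k)

    def w(x):
        return sum(a * math.sin(n * math.pi * x / L)
                   for n, a in coeffs.items())
    return w

# Midspan deflection; with k = 0 this approaches the classical
# simply supported result 5*q*L**4 / (384*EI).
w = galerkin_beam_on_winkler(EI=2.0e7, k=0.0, q=1.0e4, L=10.0)
print(w(5.0))
```

With the foundation modulus k set to zero the series recovers the classical midspan deflection; a positive Winkler modulus stiffens the system and reduces the deflection, which is an easy sanity check on any such implementation.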

  11. Study of the bending vibration characteristic of phononic crystals beam-foundation structures by Timoshenko beam theory

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Ni, Zhi-Qiang; Jiang, Lin-Hua; Han, Lin; Kang, Xue-Wei

    2015-07-01

Vibration problems exist widely in beam-foundation structures. In this paper, finite periodic composites inspired by the concept of ideal phononic crystals (PCs), together with Timoshenko beam theory (TBT), are applied to a beam anchored on a Winkler foundation. The bending vibration band structure of the PC Timoshenko beam-foundation structure is derived from the modified transfer matrix method (MTMM) and Bloch's theorem. Then, a frequency-response analysis of the finite periodic composite Timoshenko beam-foundation structure is performed with the finite element method (FEM) to verify the theoretical deduction. The study shows that the Timoshenko beam-foundation structure with periodic composites has wider attenuation zones than homogeneous ones. It is concluded that TBT is more suitable than Euler beam theory (EBT) for studying the bending vibration characteristics of PC beam-foundation structures with different length-to-height ratios.

  12. Hubble Sees Stars and a Stripe in Celestial Fireworks

    NASA Image and Video Library

    2017-12-08

Release date: July 1, 2008 This image is a composite of visible (or optical), radio, and X-ray data of the full shell of the supernova remnant from SN 1006. The radio data show much of the extent that the X-ray image shows. In contrast, only a small linear filament in the northwest corner of the shell is visible in the optical data. The object has an angular size of roughly 30 arcminutes (0.5 degree, or about the size of the full moon), and a physical size of 60 light-years (18 parsecs) based on its distance of nearly 7,000 light-years. The small green box along the bright filament at the top of the image corresponds to the dimensions of the Hubble release image. The optical data were obtained at the University of Michigan's 0.9-meter Curtis Schmidt telescope at the National Science Foundation's Cerro Tololo Inter-American Observatory (CTIO) near La Serena, Chile. H-alpha, continuum-subtracted data were provided by F. Winkler (Middlebury College) et al. The X-ray data were acquired from the Chandra X-ray Observatory's AXAF CCD Imaging Spectrometer (ACIS) at 0.5-3 keV, and were provided by J. Hughes (Rutgers University) et al. The radio data, supplied by K. Dyer (NRAO, Socorro) et al., were a composite from the National Radio Astronomy Observatory's Very Large Array (NRAO/VLA) in Socorro, New Mexico, along with the Green Bank Telescope (GBT) in Green Bank, West Virginia. Data of the supernova remnant were blended on a visible-light stellar background created using the Digitized Sky Survey's Anglo-Australian Observatory (AAO2) blue and red plates. Photo Credit: NASA, ESA, and Z. Levay (STScI) Science Credit: Radio: NRAO/AUI/NSF GBT+VLA 1.4 GHz mosaic (Dyer, Maddalena and Cornwell, NRAO); X-ray: NASA/CXC/Rutgers/G. Cassam-Chenai and J. Hughes et al.; Optical: F. Winkler/Middlebury College and NOAO/AURA/NSF; and DSS To learn more about the Hubble Space Telescope go here: www.nasa.gov/mission_pages/hubble/main/index.html

  13. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and the relative influences of deterministic and stochastic processes in the assembly of aboveground (surface- and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. The structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities.
Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333

  14. Stimulus-specific adaptation and deviance detection in the inferior colliculus

    PubMed Central

    Ayala, Yaneri A.; Malmierca, Manuel S.

    2013-01-01

Deviance detection in the continuous flow of sensory information into the central nervous system is of vital importance for animals. The task requires neuronal mechanisms that allow for an efficient representation of the environment by removing statistically redundant signals. Recently, the neuronal principles of auditory deviance detection have been approached by studying the phenomenon of stimulus-specific adaptation (SSA). SSA is a reduction in the responsiveness of a neuron to a common or repetitive sound while the neuron remains highly sensitive to rare sounds (Ulanovsky et al., 2003). This phenomenon could enhance the saliency of unexpected, deviant stimuli against a background of repetitive signals. SSA shares many similarities with the evoked potential known as the “mismatch negativity” (MMN), and it has been linked to cognitive processes such as auditory memory and scene analysis (Winkler et al., 2009), as well as to behavioral habituation (Netser et al., 2011). Neurons exhibiting SSA can be found at several levels of the auditory pathway, from the inferior colliculus (IC) up to the auditory cortex (AC). In this review, we offer an account of the state of the art of SSA studies in the IC, with the aim of contributing to the growing interest in the single-neuron electrophysiology of auditory deviance detection. The dependence of neuronal SSA on various stimulus features, e.g., the probability of the deviant stimulus and the repetition rate, and the roles of the AC and of inhibition in shaping SSA at the level of the IC are addressed. PMID:23335883

  15. Effects of ungulate disturbance and weather variation on Pediocactus winkleri: insights from long-term monitoring

    USGS Publications Warehouse

    Clark, Deborah J.; Clark, Thomas O.; Duniway, Michael C.; Flagg, Cody B.

    2015-01-01

Population dynamics and effects of large ungulate disturbances on Winkler cactus (Pediocactus winkleri K.D. Heil) were documented annually over a 20-year time span at one plot within Capitol Reef National Park, Utah. This cactus species was federally listed as threatened in 1998. The study began in 1995 to gain a better understanding of life history aspects and threats to this species. Data were collected annually in early spring and included diameter, condition, reproductive structures, mortality, recruitment, and disturbance by large ungulates. We used odds ratio and probability model analyses to determine the effects of large ungulate trampling and weather on these cacti. During the study, the plot population declined by 18%, with trampling of cacti, low precipitation, and cold spring temperatures implicated as causal factors. Precipitation and temperature affected flowering, mortality, and recruitment. Large ungulate disturbances increased mortality and reduced the probability of flowering. These results suggest that large ungulate disturbances and recent climate regimes have had an adverse impact on long-term persistence of this cactus.
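The odds-ratio analysis mentioned in the abstract can be sketched for a 2x2 table of flowering versus trampling. The counts below are invented for illustration and are not the study's data; the Woolf (log) method for the 95% confidence interval is an assumption.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with a 95% CI (Woolf/log method).
    a: exposed & event, b: exposed & no event,
    c: unexposed & event, d: unexposed & no event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: flowering among trampled vs. untrampled cacti.
or_, ci = odds_ratio(a=10, b=40, c=30, d=20)
print(or_, ci)  # OR < 1: trampling associated with lower odds of flowering
```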

  16. The complex variable reproducing kernel particle method for bending problems of thin plates on elastic foundations

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.

    2018-07-01

In this paper, the complex variable reproducing kernel particle method (CVRKPM) for solving bending problems of isotropic thin plates on elastic foundations is presented. In CVRKPM, a one-dimensional basis function is used to obtain the shape function of a two-dimensional problem. CVRKPM is used to form the approximation function of the deflection of a thin plate resting on an elastic foundation. The Galerkin weak form of thin plates on elastic foundations is employed to obtain the discretized system equations, the penalty method is used to apply the essential boundary conditions, and the Winkler and Pasternak foundation models are used to describe the interface pressure between the plate and the foundation. The corresponding formulae of CVRKPM for thin plates on elastic foundations are then presented in detail. Several numerical examples are given to discuss the efficiency and accuracy of CVRKPM, and the corresponding advantages of the present method are shown.

  17. The integration of bioclimatic indices in an objective probabilistic model for establishing and mapping viticulture suitability in a region

    NASA Astrophysics Data System (ADS)

    Moral García, Francisco J.; Rebollo, Francisco J.; Paniagua, Luis L.; García, Abelardo

    2014-05-01

Different bioclimatic indices have been proposed to determine wine suitability in a region. Some of them are related to air temperature, but the hydric component of climate should also be considered, which, in turn, is influenced by the precipitation during the different stages of the grapevine growing and ripening periods. In this work we propose using the information obtained from 10 bioclimatic indices and variables (heliothermal index, HI; cool night index, CI; dryness index, DI; growing season temperature, GST; the Winkler index, WI; September mean thermal amplitude, MTA; annual precipitation, AP; precipitation during flowering, PDF; precipitation before flowering, PBF; and summer precipitation, SP) as inputs in an objective and probabilistic model, the Rasch model, with the aim of integrating their individual effects, obtaining climate data that summarize all the main bioclimatic indices that could influence wine suitability, and using the Rasch measures to generate homogeneous climatic zones. The use of the Rasch model to estimate viticultural suitability constitutes a new application of great practical importance, making it possible to rationally determine locations in a region where high viticultural potential exists and to establish a ranking of the bioclimatic indices or variables that exert an important influence on wine suitability. Furthermore, from the measures of viticultural suitability at some locations, estimates can be computed using a geostatistical algorithm, and these estimates can be utilized to map viticultural suitability potential in a region. To illustrate the process, an application to Extremadura, southwestern Spain, is shown. Keywords: Rasch model, bioclimatic indices, GIS.

  18. Integration of climatic indices in an objective probabilistic model for establishing and mapping viticultural climatic zones in a region

    NASA Astrophysics Data System (ADS)

    Moral, Francisco J.; Rebollo, Francisco J.; Paniagua, Luis L.; García, Abelardo; Honorio, Fulgencio

    2016-05-01

Different climatic indices have been proposed to determine wine suitability in a region. Some of them are related to air temperature, but the hydric component of climate should also be considered, which, in turn, is influenced by the precipitation during the different stages of the grapevine growing and ripening periods. In this study, we propose using the information obtained from ten climatic indices [heliothermal index (HI), cool night index (CI), dryness index (DI), growing season temperature (GST), the Winkler index (WI), September mean thermal amplitude (MTA), annual precipitation (AP), precipitation during flowering (PDF), precipitation before flowering (PBF), and summer precipitation (SP)] as inputs in an objective and probabilistic model, the Rasch model, with the aim of integrating their individual effects, obtaining climate data that summarize all the main climatic indices that could influence wine suitability from a climate viewpoint, and utilizing the Rasch measures to generate homogeneous climatic zones. The use of the Rasch model to estimate viticultural climatic suitability constitutes a new application of great practical importance, making it possible to rationally determine locations in a region where high viticultural potential exists and to establish a ranking of the climatic indices that exert an important influence on wine suitability. Furthermore, from the measures of viticultural climatic suitability at some locations, estimates can be computed using a geostatistical algorithm, and these estimates can be utilized to map viticultural climatic zones in a region. To illustrate the process, an application to Extremadura, southwestern Spain, is shown.
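The Winkler index (WI) used in both of these studies is a growing-degree-day sum above a 10 degC base accumulated over the growing season; a minimal sketch with synthetic temperatures (a Northern Hemisphere April-October season is assumed):

```python
def winkler_index(daily_tmax, daily_tmin, base=10.0):
    """Winkler index: growing degree-days above a 10 degC base,
    accumulated over the growing season (Apr 1 - Oct 31 in the
    Northern Hemisphere). Inputs are daily max/min series (degC)
    covering that season; negative daily contributions are clipped
    to zero."""
    return sum(max(0.0, (tx + tn) / 2.0 - base)
               for tx, tn in zip(daily_tmax, daily_tmin))

# Synthetic 214-day season at a constant 25/15 degC:
# daily mean 20 degC -> 10 GDD per day.
wi = winkler_index([25.0] * 214, [15.0] * 214)
print(wi)  # 2140.0
```

Real applications would feed in station or gridded daily temperatures and then bin the accumulated value into the Winkler climate regions.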

  19. Does it always feel good to get what you want? Young children differentiate between material and wicked desires.

    PubMed

    Smith, Craig E; Warneken, Felix

    2014-03-01

    One line of research on children's attributions of guilt suggests that 3-year-olds attribute negative emotion to self-serving victimizers, slightly older children attribute happiness, and with increasing age, attributions become negative again (i.e., a three-step model; Yuill et al., 1996, Br. J. Dev. Psychol., 14, 457). Another line of research provides reason to expect that 3-year-olds may be predisposed to view self-serving moral transgression as leading to positive emotion; this is a linear developmental model in which emotion attributions to transgressors become increasingly negative over the course of childhood (e.g., Nunner-Winkler & Sodian, 1988, Child Dev., 59, 1323). However, key differences in methodology make it difficult to compare across these findings. The present study was designed to address this problem. We asked how 3- to 9-year-old children (n = 111) reason about transgression scenarios that involve satisfying wicked desires (wanting to cause harm and doing so successfully) versus material desires (wanting an object and getting it successfully via harmful behaviour). Three-year-old children reasoned differently about desire and emotion across these two types of transgressions, attributing negative emotion in the case of wicked desires and positive emotion in the case of material desires. This pattern of emotion attribution by young children provides new information about how young children process information about desires and emotions in the moral domain, and it bridges a gap in the existing literature on this topic. © 2013 The British Psychological Society.

  20. Prediction of Very High Reynolds Number Compressible Skin Friction

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1998-01-01

Flat-plate skin friction calculations over a range of Mach numbers from 0.4 to 3.5, at Reynolds numbers from 16 million to 492 million, using a Navier-Stokes method with advanced turbulence modeling are compared with incompressible skin friction coefficient correlations. The semi-empirical correlation theories of van Driest; Cope; Winkler and Cha; and Sommer and Short (T') are used to transform the predicted skin friction coefficients of solutions using two algebraic Reynolds stress turbulence models in the Navier-Stokes method PAB3D. In general, the predicted skin friction coefficients scaled well with each reference temperature theory, though overall the theory of Sommer and Short appeared to best collapse the predicted coefficients. At the lower Reynolds numbers of 3 to 30 million, both the Girimaji and the Shih, Zhu, and Lumley turbulence models predicted skin friction coefficients within 2% of the semi-empirical correlation values. At the higher Reynolds numbers of 100 to 500 million, the models of Shih, Zhu, and Lumley and of Girimaji predicted coefficients that were 6% less and 10% greater, respectively, than the semi-empirical coefficients.
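One commonly quoted form of the Sommer and Short T' reference-temperature correlation can be sketched as follows. The coefficients and the recovery factor are standard textbook values and are not taken from this report; the Mach number used is illustrative.

```python
def sommer_short_Tprime(M_e, Tw_over_Te):
    """Sommer & Short T' reference-temperature ratio T'/T_e, in one
    commonly quoted form of their 1955 correlation:
        T'/T_e = 1 + 0.035*M_e**2 + 0.45*(T_w/T_e - 1).
    Skin-friction and Reynolds-number transformations between the
    compressible and incompressible correlations are then evaluated
    at this reference temperature."""
    return 1.0 + 0.035 * M_e**2 + 0.45 * (Tw_over_Te - 1.0)

def adiabatic_wall_ratio(M_e, r=0.89, gamma=1.4):
    """Adiabatic-wall temperature ratio T_aw/T_e for a turbulent
    boundary layer with recovery factor r."""
    return 1.0 + r * (gamma - 1.0) / 2.0 * M_e**2

# At the report's upper Mach number of 3.5 with an adiabatic wall,
# the reference temperature sits well above the edge temperature:
M = 3.5
print(sommer_short_Tprime(M, adiabatic_wall_ratio(M)))
```

The key property, visible in the formula, is that T'/T_e reduces to 1 at M = 0 with an unheated wall, so the transformation leaves incompressible data untouched.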

  1. Temperature Dependence in Homogeneous and Heterogeneous Nucleation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw R. L.; Winkler, P. M.; Wagner, P. E.

    2017-08-01

Heterogeneous nucleation on stable (sub-2 nm) nuclei aids the formation of atmospheric cloud condensation nuclei (CCN) by circumventing or reducing vapor pressure barriers that would otherwise limit condensation and new particle growth. Aerosol and cloud formation depend largely on the interaction between a condensing liquid and the nucleating site. A new paper published this year reports the first direct experimental determination of contact angles, as well as contact-line curvature and other geometric properties of a spherical-cap nucleus at nanometer scale, using measurements from the Vienna Size Analyzing Nucleus Counter (SANC) (Winkler et al., 2016). For water nucleating heterogeneously on silver oxide nanoparticles, we find contact angles around 15 degrees, compared with around 90 degrees for the macroscopically measured equilibrium angle of water on bulk silver. The small microscopic contact angles can be attributed, via the generalized Young equation, to a negative line tension that becomes increasingly dominant with increasing curvature of the contact line. These results enable a consistent theoretical description of heterogeneous nucleation and provide firm insight into the wetting of nanosized objects.
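The role of a negative line tension can be sketched with one common form of the generalized Young equation, cos(theta) = cos(theta_macro) - tau/(sigma_lv * r), where r is the contact-line radius. The numerical values below are illustrative only, not the SANC measurements.

```python
import math

def microscopic_contact_angle(theta_macro_deg, tau, sigma_lv, r_line):
    """Microscopic contact angle (deg) from a generalized Young
    equation with line tension:
        cos(theta) = cos(theta_macro) - tau / (sigma_lv * r_line),
    where tau is the line tension (J/m), sigma_lv the liquid-vapor
    surface tension (N/m), and r_line the contact-line radius (m)."""
    c = math.cos(math.radians(theta_macro_deg)) - tau / (sigma_lv * r_line)
    c = max(-1.0, min(1.0, c))  # clamp against rounding overshoot
    return math.degrees(math.acos(c))

# A negative line tension at a strongly curved (nanoscale) contact line
# drives the microscopic angle well below the 90-degree macroscopic value:
print(microscopic_contact_angle(90.0, tau=-2e-10, sigma_lv=0.072,
                                r_line=3e-9))
```

As the contact-line radius shrinks, the tau/(sigma_lv * r) term grows, which is the "increasingly dominant with increasing curvature" behavior described in the abstract.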

  2. A review of biomaterials in bone defect healing, remaining shortcomings and future opportunities for bone tissue engineering

    PubMed Central

    Winkler, T.; Sass, F. A.; Schmidt-Bleek, K.

    2018-01-01

Despite its intrinsic ability to regenerate form and function after injury, bone tissue can be challenged by a multitude of pathological conditions. While innovative approaches have helped to unravel the cascades of bone healing, this knowledge has so far not improved the clinical outcomes of bone defect treatment. Recent findings have allowed us to gain in-depth knowledge about the physiological conditions and biological principles of bone regeneration. Now it is time to transfer the lessons learned from bone healing to the challenging scenarios in defects and employ innovative technologies to enable biomaterial-based strategies for bone defect healing. This review aims to provide an overview of endogenous cascades of bone material formation and how these are transferred to new perspectives in biomaterial-driven approaches to bone regeneration. Cite this article: T. Winkler, F. A. Sass, G. N. Duda, K. Schmidt-Bleek. A review of biomaterials in bone defect healing, remaining shortcomings and future opportunities for bone tissue engineering: The unsolved challenge. Bone Joint Res 2018;7:232–243. DOI: 10.1302/2046-3758.73.BJR-2017-0270.R1.

  3. Study for Nuclear Structures of 22-35Na Isotopes via Measurements of Reaction Cross Sections

    NASA Astrophysics Data System (ADS)

    Suzuki, Shinji

    2014-09-01

T. Ohtsubo, M. Nagashima, T. Ogura, Y. Shimbara (Grad. Sch. of Sc., Niigata Univ.), M. Takechi, H. Geissel, M. Winkler (GSI), D. Nishimura, T. Sumikama (Dept. of Phys., Tokyo Univ. of Sc.), M. Fukuda, M. Mihara, H. Uenishi (Dept. of Phys., Osaka Univ.), T. Kuboki, T. Suzuki, T. Yamaguchi, H. Furuki, C. S. Lee, K. Sato (Dept. of Phys., Saitama Univ.), A. Ozawa, H. Ohnishi, T. Moriguchi, S. Fukuda, Y. Ishibashi, D. Nagae, R. Nishikiori, T. Niwa (Inst. of Phys., Univ. of Tsukuba), N. Aoi (RCNP), Rui-Jiu Chen, N. Inabe, D. Kameda, T. Kubo, M. Lantz, T. Ohnishi, K. Okumura, H. Sakurai, H. Suzuki, H. Takeda, S. Takeuchi, K. Tanaka, Y. Yanagisawa (RIKEN), De-Qing Fang, Yu-Gang Ma (SINAP), T. Izumikawa (RI Ctr., Niigata Univ.), and S. Momota (Fac. of Engn., Kochi Univ. of Tech.) Reaction cross sections (σR) for 22-35Na isotopes have been measured at around 240 MeV/nucleon. The σR for 22-35Na were measured for the first time. An enhancement in cross sections relative to the systematics for stable nuclei is clearly observed for isotopes with large mass numbers. This enhancement can be mainly ascribed to nuclear deformation. We will discuss the nuclear structure (neutron skin, nuclear shell structure) of neutron-excess Na isotopes. JSPS KAKENHI Grant Number 24244024.

  4. Variability on the Hypoxic Conditions in the Northwestern Region of the Baja California Peninsula

    NASA Astrophysics Data System (ADS)

    Bustos-Serrano, H.

    2015-12-01

The NW region of the Baja California peninsula in México is dominated by the California Current System (CCS). Dissolved oxygen (DO) is a key variable in water bodies because it is regarded as an indicator of the health of biological processes. Hypoxic conditions (DO 60 to 120 μmol kg-1) occur naturally in large areas of the ocean. In the Eastern Pacific, the DO can be altered by eutrophication derived from anthropogenic activity, especially in shallow and enclosed seas. Fluctuations in the conditions of hypoxic zones may have significant ecological and economic impacts. It is therefore of interest to assess whether hypoxic conditions in the vicinity of Bahia de Todos Santos (BTS) and the Coronado Islands in México are altered by anthropogenic activity (Figs. 1 and 2, respectively). For the present study, we worked with data collected on oceanographic expeditions during the period October 2010 to June 2015. The DO was determined using a CTD (SBE Model 25) and from seawater collected with hydrographic bottles using a modification of the Winkler method. The signs of hypoxia are evident in the area near the BTS and in the vicinity of the Coronado Islands, mainly at locations between the Todos Santos Islands and the Punta Banda peninsula, which shows that the hypoxic zone begins to occur in shallow water between 50 and 200 m depth. This particular area corresponds to the site that the Mexican Navy designated for dredged materials from the ports of Ensenada and El Sauzal; it is possible that this anthropogenic activity alters the natural hypoxic conditions in the area, causing the hypoxic zone to enlarge. In June 2012, for the first time in that region, we obtained sediment samples from below 700 m depth, which are mixed terrigenous clastic and oceanic sediments.
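The standard Winkler-titration arithmetic (not the authors' specific modification) can be sketched as follows; the titration volumes, thiosulfate normality, and seawater density below are hypothetical, and the hypoxia band is the 60-120 μmol kg-1 range quoted in the abstract.

```python
def winkler_do_mg_per_l(v_thio_ml, n_thio, v_sample_ml):
    """Dissolved oxygen (mg/L) from a Winkler titration.
    Each equivalent of thiosulfate corresponds to 8 g of O2
    (O2 = 32 g/mol, 4 electrons), so
        DO = V_thio * N_thio * 8 * 1000 / V_sample."""
    return v_thio_ml * n_thio * 8.0 * 1000.0 / v_sample_ml

def to_umol_per_kg(do_mg_l, density_kg_l=1.025):
    """Convert mg/L to umol/kg using the O2 molar mass of 32 g/mol
    and an assumed seawater density."""
    return do_mg_l / 32.0 * 1000.0 / density_kg_l

do = winkler_do_mg_per_l(v_thio_ml=1.8, n_thio=0.025, v_sample_ml=100.0)
umol = to_umol_per_kg(do)
hypoxic = 60.0 <= umol <= 120.0  # the abstract's hypoxia band
print(do, umol, hypoxic)
```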

  5. New records of ant species from Yunnan, China

    PubMed Central

    Liu, Cong; Guénard, Benoit; Garcia, Francisco Hita; Yamane, Seiki; Blanchard, Benjamin; Yang, Da-Rong; Economo, Evan

    2015-01-01

Abstract As with many other regions of the world, significant collecting, curation, and taxonomic efforts will be needed to complete the inventory of China’s ant fauna. This is especially true for the highly diverse tropical regions in the south of the country, where moist tropical forests harbor the high species richness typical of the Southeast Asian region. We inventoried ants in the Xingshuangbanna prefecture, Yunnan, in June 2013, using a variety of methods including Winkler extraction and hand collection to sample ant diversity. We identified 213 species/morphospecies of ants from 10 subfamilies and 61 genera. After identification of 148 valid species among the 213 total species collected, 40 species represent new records for Yunnan province and 17 species are newly recorded for China. This increases the total number of named ant species in Yunnan and China to 447 and 951, respectively. The most common species collected were Brachyponera luteipes and Vollenhovia emeryi. Only one confirmed exotic species, Strumigenys membranifera, was collected, although several others were potentially introduced by humans. These results highlight the high biodiversity value of the region, but also underscore how much work remains to fully document the native myrmecofauna. PMID:25685004

  6. Response analysis of a nuclear containment structure with nonlinear soil-structure interaction under bi-directional ground motion

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh; Raychowdhury, Prishati; Gundlapalli, Prabhakar

    2015-06-01

The design of critical facilities such as nuclear power plants requires an accurate and precise evaluation of seismic demands, as any failure of these facilities poses an immense threat to the community. The design complexity of these structures reinforces the necessity of robust 3D modeling and analysis of the structure and the soil-foundation interface. Moreover, it is important to consider multiple components of ground motion during time-history analysis for a realistic simulation. The present study investigates the seismic response of a nuclear containment structure using a nonlinear Winkler-based approach to model the soil-foundation interface with a distributed array of inelastic springs, dashpots, and gap elements. It is observed from this study that the natural period of the structure increases by about 10%, whereas the force demands decrease by up to 24%, when soil-structure interaction is considered. Further, it is observed that foundation deformations such as rotation and sliding are affected by the embedment ratio, with these responses increasing by up to 56% when the embedment is reduced from 0.5 to 0.05 times the width of the footing.
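The period lengthening reported above can be related to the classical flexible-base formula, in which lumped sway and rocking springs stand in for the distributed Winkler interface; this is a hedged back-of-envelope sketch, not the study's nonlinear model, and the stiffness values below are illustrative only.

```python
import math

def flexible_base_period(T_fixed, k_struct, h, k_sway, k_rock):
    """Period lengthening from soil-structure interaction using the
    classical flexible-base relation:
        T_ssi = T_fixed * sqrt(1 + k/k_sway + k*h**2 / k_rock),
    where k is the fixed-base lateral stiffness (N/m), h the
    effective height (m), and k_sway (N/m) / k_rock (N*m/rad) the
    lumped foundation spring stiffnesses standing in for the
    distributed Winkler springs."""
    return T_fixed * math.sqrt(1.0 + k_struct / k_sway
                               + k_struct * h**2 / k_rock)

# Illustrative numbers chosen so each flexibility term adds 10%:
T = flexible_base_period(T_fixed=0.30, k_struct=4.0e8, h=40.0,
                         k_sway=4.0e9, k_rock=6.4e12)
print(T, (T / 0.30 - 1.0) * 100.0)  # period (s) and ~9.5% lengthening
```

Softer foundation springs (smaller k_sway, k_rock) lengthen the period further, which is the qualitative trend the abstract reports.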

  7. "The orang lives almost next door" the correspondence between John Fulton (New Haven) and Willem Verhaart (Java).

    PubMed

    Koehler, Peter

    2006-03-01

Between 1937 and 1959 John Fulton (1899-1960), Sterling Professor of Physiology at Yale University (New Haven), and Willem Verhaart (1889-1983), neuropsychiatrist at Batavia Medical School (Java, Dutch East Indies), corresponded on neuroanatomical topics. Verhaart had easy access to primate brains in Batavia and stayed at Fulton's lab as a Rockefeller fellow (1938-1939), learning techniques of surgery and histology of the primate brain in order to apply them in his own lab. The correspondence relates their undertakings in research, the preparations for Verhaart's stay in New Haven, the failure of subsequent research plans because of World War II, Verhaart's camp experiences in Asia, the period of restoration after the war, aided by Fulton, and the political changes (independence) in Indonesia that finally led to Verhaart's return to the Netherlands in 1950, where he became professor of histology and Director of the Neurological Institute at Leiden University. The correspondence shows how neuroscientists from different parts of the world cooperated. Moreover, it is an example of the gradual change from a German orientation (like that of his teacher Winkler) to an Anglo-American orientation in medical science that started at the beginning of the twentieth century.

  8. The surrounding landscape influences the diversity of leaf-litter ants in riparian cloud forest remnants.

    PubMed

    García-Martínez, Miguel Á; Valenzuela-González, Jorge E; Escobar-Sarria, Federico; López-Barrera, Fabiola; Castaño-Meneses, Gabriela

    2017-01-01

    Riparian vegetation is a distinctive and ecologically important element of landscapes worldwide. However, the relative influence of the surrounding landscape on the conservation of the biodiversity of riparian remnants in human-modified tropical landscapes is poorly understood. We studied the surrounding landscape to evaluate its influence on leaf-litter-ant alpha and beta diversity in riparian remnants in the tropical montane cloud forest region of central Veracruz, Mexico. Sampling was carried out in 12 sites with riparian vegetation during both rainy (2011) and dry (2012) seasons. Ten leaf-litter samples were collected along a 100-m transect per site and processed with Berlese-Tullgren funnels and Winkler sacks. Using remotely-sensed and ground-collected data, we characterized the landscape around each site according to nine land cover types and computed metrics of landscape composition and configuration. We collected a total of 8,684 ant individuals belonging to 53 species, 22 genera, 11 tribes, and 7 subfamilies. Species richness and Shannon and Simpson diversity increased significantly in remnants immersed in landscapes with a high percentage of riparian land cover and a low percentage of cover by Pinus reforestation, cattle pastures, and human settlements and infrastructure. The composition of ant assemblages was a function of the percentage of riparian land cover in the landscape. This study found evidence that leaf-litter ants, a highly specialized guild of arthropods, are mainly impacted by landscape composition and the configuration of the focal remnant. Maintaining or improving the surrounding landscape quality of riparian vegetation remnants can stimulate the movement of biodiversity among forest and riparian remnants and foster the provision of ecosystem services by these ecosystems.
Effective outcomes may be achieved by considering scientific knowledge during the early stages of riparian policy formulation, in addition to integrating riparian management strategies with broader environmental planning instruments.

  9. The surrounding landscape influences the diversity of leaf-litter ants in riparian cloud forest remnants

    PubMed Central

    Valenzuela-González, Jorge E.; Escobar-Sarria, Federico; López-Barrera, Fabiola; Castaño-Meneses, Gabriela

    2017-01-01

    Riparian vegetation is a distinctive and ecologically important element of landscapes worldwide. However, the relative influence of the surrounding landscape on the conservation of the biodiversity of riparian remnants in human-modified tropical landscapes is poorly understood. We studied the surrounding landscape to evaluate its influence on leaf-litter-ant alpha and beta diversity in riparian remnants in the tropical montane cloud forest region of central Veracruz, Mexico. Sampling was carried out in 12 sites with riparian vegetation during both rainy (2011) and dry (2012) seasons. Ten leaf-litter samples were collected along a 100-m transect per site and processed with Berlese-Tullgren funnels and Winkler sacks. Using remotely-sensed and ground-collected data, we characterized the landscape around each site according to nine land cover types and computed metrics of landscape composition and configuration. We collected a total of 8,684 ant individuals belonging to 53 species, 22 genera, 11 tribes, and 7 subfamilies. Species richness and Shannon and Simpson diversity increased significantly in remnants immersed in landscapes with a high percentage of riparian land cover and a low percentage of cover by Pinus reforestation, cattle pastures, and human settlements and infrastructure. The composition of ant assemblages was a function of the percentage of riparian land cover in the landscape. This study found evidence that leaf-litter ants, a highly specialized guild of arthropods, are mainly impacted by landscape composition and the configuration of the focal remnant. Maintaining or improving the surrounding landscape quality of riparian vegetation remnants can stimulate the movement of biodiversity among forest and riparian remnants and foster the provision of ecosystem services by these ecosystems.
Effective outcomes may be achieved by considering scientific knowledge during the early stages of riparian policy formulation, in addition to integrating riparian management strategies with broader environmental planning instruments. PMID:28234948

  10. Free vibration analysis of embedded magneto-electro-thermo-elastic cylindrical nanoshell based on the modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Ghadiri, Majid; Safarpour, Hamed

    2016-09-01

    In this paper, the size-dependent free vibration behavior of an embedded magneto-electro-elastic (MEE) nanoshell subjected to thermo-electro-magnetic loadings is investigated. The surrounding elastic medium is modeled as a Winkler foundation characterized by springs. The size-dependent MEE nanoshell is investigated on the basis of the modified couple stress theory. Based on the first-order shear deformation theory (FSDT), the nanoshell is modeled and its equations of motion are derived using the principle of minimum potential energy. The accuracy of the presented model is validated against cases in the literature. Finally, using the Navier-type method, an analytical solution of the governing equations for the vibration behavior of a simply supported MEE cylindrical nanoshell under combined loadings is presented, and the effects of the material length scale parameter, temperature changes, external electric potential, external magnetic potential, circumferential wave number, spring constant, shear correction factor and length-to-radius ratio of the nanoshell on the natural frequency are identified. Since there has been no research on the size-dependent analysis of MEE cylindrical nanoshells under combined loadings based on FSDT, numerical results are presented to serve as benchmarks for future analysis of MEE nanoshells using the modified couple stress theory.

  11. Characterizing Submonolayer Growth of 6P on Mica: Capture Zone Distributions vs. Growth Exponents and the Role of Hot Precursors

    NASA Astrophysics Data System (ADS)

    Einstein, T. L.; Morales-Cifuentes, Josue; Pimpinelli, Alberto

    2015-03-01

    Analyzing capture-zone distributions (CZD) using the generalized Wigner distribution (GWD) has proved a powerful way to access the critical nucleus size i. Of the several systems to which the GWD has been applied, we consider 6P on mica, for which Winkler's group found i ~ 3. Subsequently they measured the growth exponent α (island density ~ F^α, for flux F) of this system and found good scaling but different values at small and large F, which they attributed to DLA and ALA dynamics, but with larger values of i than found from the CZD analysis. We investigate this result in some detail. The third talk of this group describes a new universal relation between α and the characteristic exponent β of the GWD. The second talk reports the results of a proposed model that takes long-known transient ballistic adsorption into account, for the first time in a quantitative way. We find several intermediate scaling regimes, with distinctive values of α and an effective activation energy. One of these, rather than ALA, gives the best fit of the experimental data and a value of i consistent with the CZD analysis. Work at UMD supported by NSF CHE 13-05892.

  12. Overexpression of Polygalacturonase in Transgenic Apple Trees Leads to a Range of Novel Phenotypes Involving Changes in Cell Adhesion1

    PubMed Central

    Atkinson, Ross G.; Schröder, Roswitha; Hallett, Ian C.; Cohen, Daniel; MacRae, Elspeth A.

    2002-01-01

    Polygalacturonases (PGs) cleave runs of unesterified GalUA that form homogalacturonan regions along the backbone of pectin. Homogalacturonan-rich pectin is commonly found in the middle lamella region of the wall where two adjacent cells abut and its integrity is important for cell adhesion. Transgenic apple (Malus domestica Borkh. cv Royal Gala) trees were produced that contained additional copies of a fruit-specific apple PG gene under a constitutive promoter. In contrast to previous studies in transgenic tobacco (Nicotiana tabacum) where PG overexpression had no effect on the plant (K.W. Osteryoung, K. Toenjes, B. Hall, V. Winkler, A.B. Bennett [1990] Plant Cell 2: 1239–1248), PG overexpression in transgenic apple led to a range of novel phenotypes. These phenotypes included silvery colored leaves and premature leaf shedding due to reduced cell adhesion in leaf abscission zones. Mature leaves had malformed and malfunctioning stomata that perturbed water relations and contributed to a brittle leaf phenotype. Chemical and ultrastructural analyses were used to relate the phenotypic changes to pectin changes in the leaf cell walls. The modification of apple trees by a single PG gene has offered a new and unexpected perspective on the role of pectin and cell wall adhesion in leaf morphology and stomatal development. PMID:12011344

  13. Contact behavior modelling and its size effect on proton exchange membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Qiu, Diankai; Peng, Linfa; Yi, Peiyun; Lai, Xinmin; Janßen, Holger; Lehnert, Werner

    2017-10-01

    Contact behavior between the gas diffusion layer (GDL) and bipolar plate (BPP) is of significant importance for proton exchange membrane fuel cells. Most current studies on contact behavior rely on experiments and finite element modelling and focus on fuel cells with graphite BPPs, which leads to high costs and huge computational requirements. The objective of this work is to build a more effective analytical method for contact behavior in fuel cells and to investigate the size effect resulting from configuration alteration of the channel and rib (channel/rib). Firstly, a mathematical description of channel/rib geometry is outlined in accordance with the fabrication of metallic BPPs. Based on the interface deformation characteristics and the Winkler surface model, the contact pressure between BPP and GDL is then calculated to predict contact resistance and GDL porosity as evaluative parameters of contact behavior. Then, experiments on BPP fabrication and contact resistance measurement are conducted to validate the model. The measured results demonstrate an obvious dependence on channel/rib size. The feasibility of using the model for graphite fuel cells is also discussed. Finally, a size factor is proposed for evaluating the size effect: contact resistance and porosity increase significantly at higher size factors, for which the channel/rib width decreases.
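
    The Winkler surface model reduces the rib/GDL contact to "pressure proportional to local compression". A hedged sketch under assumed GDL properties and an assumed empirical pressure-resistance fit (not the paper's calibrated model):

```python
# Hedged Winkler-type contact estimate between a bipolar-plate rib and the
# gas diffusion layer (GDL), idealized as a bed of independent springs.
# All material values and the resistance fit are illustrative assumptions.
E_gdl = 6.0e6      # through-plane GDL modulus [Pa] (assumed)
t_gdl = 200e-6     # uncompressed GDL thickness [m] (assumed)
k = E_gdl / t_gdl  # Winkler foundation modulus [Pa/m]

delta = 20e-6          # rib-induced compression of the GDL [m] (assumed)
p_contact = k * delta  # Winkler model: pressure proportional to compression

# Illustrative empirical fit: contact resistance falls with contact pressure.
a, b = 2.0, 0.8                         # assumed fit constants
R_contact = a * (p_contact/1e6)**(-b)   # [mOhm*cm^2]

print(f"contact pressure: {p_contact/1e6:.2f} MPa")
print(f"contact resistance (assumed fit): {R_contact:.2f} mOhm*cm^2")
```

    A narrower rib compresses the GDL less under the same clamping force, so in this idealization the contact pressure drops and the contact resistance rises, which is the qualitative size effect the abstract reports.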

  14. Combustion characteristics of hydrogen. Carbon monoxide based gaseous fuels

    NASA Technical Reports Server (NTRS)

    Notardonato, J. J.; White, D. J.; Kubasco, A. J.; Lecren, R. T.

    1981-01-01

    An experimental rig program was conducted with the objective of evaluating the combustion performance of a family of fuel gases based on a mixture of hydrogen and carbon monoxide. These gases, in addition to being members of a family, were also representative of secondary fuels that could be produced from coal by various gasification schemes. In particular, simulated Winkler, Lurgi, and Blue-water low and medium energy content gases were used as fuels in the experimental combustor rig. The combustor used was originally designed as a low NOx rich-lean system for burning liquid fuels with high bound nitrogen levels. When used with the above gaseous fuels this combustor was operated in a lean-lean mode with ultra-long residence times. The Blue-water gas was also operated in a rich-lean mode. The results of these tests indicate the possible existence of an 'optimum' gas turbine hydrogen - carbon monoxide based secondary fuel. Such a fuel would exhibit low NOx emissions and high efficiency over the entire engine operating range. It would also have a sufficient stability range to allow normal light-off and engine acceleration. Solar Turbines Incorporated would like to emphasize that the results presented here have been obtained with experimental rig combustors. The technologies generated could, however, be utilized in future commercial gas turbines.

  15. Free vibration of an embedded single-walled carbon nanotube with various boundary conditions using the RMVT-based nonlocal Timoshenko beam theory and DQ method

    NASA Astrophysics Data System (ADS)

    Wu, Chih-Ping; Lai, Wei-Wen

    2015-04-01

    The nonlocal Timoshenko beam theories (TBTs), based on the Reissner mixed variational theorem (RMVT) and the principle of virtual displacements (PVD), are derived for the free vibration analysis of a single-walled carbon nanotube (SWCNT) embedded in an elastic medium and with various boundary conditions. The strong formulations of the nonlocal TBTs are derived using Hamilton's principle, in which Eringen's nonlocal constitutive relations are used to account for the small-scale effect. The interaction between the SWCNT and its surrounding elastic medium is simulated using the Winkler and Pasternak foundation models. The frequency parameters of the embedded SWCNT are obtained using the differential quadrature (DQ) method. In the case of the SWCNT without foundations, the results of the RMVT- and PVD-based nonlocal TBTs converge rapidly, and their convergent solutions closely agree with the exact ones available in the literature. Because the highest order of the derivatives of the field variables in the RMVT-based nonlocal TBT is lower than that in its PVD-based counterpart, the former is more efficient with regard to execution time, and is thus both faster and more accurate for the numerical analysis of the embedded SWCNT.
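
    To see how the two foundation parameters enter, a hedged sketch of the classical, local, shear-rigid limit: a simply supported Euler-Bernoulli beam on a Winkler-Pasternak foundation, where both foundation terms stiffen every mode. This is not the paper's RMVT-based nonlocal Timoshenko model; all numbers are illustrative assumptions.

```python
import math

# Simply supported classical Euler-Bernoulli beam on a two-parameter
# Winkler-Pasternak foundation (local, shear-rigid limit). Assumed values.
E, I = 1.0e12, 2.0e-37    # Young's modulus [Pa], second moment [m^4]
rho, A = 2300.0, 1.0e-18  # density [kg/m^3], cross-section area [m^2]
L = 20e-9                 # beam (nanotube-scale) length [m]
k_w = 1.0e8               # Winkler modulus [Pa]  -> distributed springs
k_p = 1.0e-9              # Pasternak modulus [N] -> shear layer

def omega(n):
    """Natural frequency of mode n [rad/s]."""
    beta = n*math.pi/L
    return math.sqrt((E*I*beta**4 + k_p*beta**2 + k_w)/(rho*A))

for n in (1, 2, 3):
    print(f"mode {n}: omega = {omega(n):.3e} rad/s")
```

    Setting k_w = k_p = 0 recovers the foundation-free beam, the case in which the abstract reports both mixed and displacement-based solutions converging to the exact values.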

  16. Linear stochastic evaluation of tyre vibration due to tyre/road excitation

    NASA Astrophysics Data System (ADS)

    Rustighi, E.; Elliott, S. J.; Finnveden, S.; Gulyás, K.; Mócsai, T.; Danti, M.

    2008-03-01

    Tyre/road interaction is recognised as the main source of interior and exterior noise at velocities over 40 km/h. In this paper, a three-dimensional (3D) elemental approach has been adopted to predict the stochastic tyre vibration and hence the interior and exterior noise due to this kind of excitation. The road excitation has been modelled from the spectral density of a common road profile, assuming the road to be an isotropic surface. A linear Winkler bedding connects the 3D model of the tyre with the ground. The exterior noise has been evaluated by an elemental calculation of the radiation matrix of the tyre deformed by the static load on a concrete road. The noise inside the vehicle has also been calculated, using the transfer functions between the force transmitted to the hub and the noise inside the vehicle, which have been computed with a FEM model of a common car body. The simple formulation allows much quicker calculation than traditional nonlinear approaches, and appears to give results consistent with available measurements, although the effects of tyre rotation and of the nonlinearities in the contact model are yet to be quantified, and the method requires further experimental validation before practical application.
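
    A hedged sketch of the kind of stochastic road input the abstract describes: a profile synthesized from an assumed one-sided displacement PSD of ISO 8608 form, G(n) = G0*(n/n0)**-2, with random phases. The PSD level and profile length are illustrative, not the paper's road data.

```python
import math, random

# Synthesize a road elevation profile from an assumed displacement PSD
# G(n) = G0*(n/n0)**-2 (ISO 8608-style fit; values are illustrative).
G0, n0 = 16e-6, 0.1   # PSD at reference wavenumber [m^3], [cycles/m]
L = 200.0             # profile length [m]
N = 512               # samples
dn = 1.0/L            # wavenumber resolution [cycles/m]

random.seed(0)
x = [i*L/N for i in range(N)]
z = [0.0]*N
for k in range(1, N//2):
    n = k*dn                                  # wavenumber of harmonic k
    amp = math.sqrt(2*G0*(n/n0)**-2*dn)       # harmonic amplitude from PSD
    phi = random.uniform(0, 2*math.pi)        # random phase
    for i in range(N):
        z[i] += amp*math.cos(2*math.pi*n*x[i] + phi)

rms = math.sqrt(sum(v*v for v in z)/N)
print(f"profile RMS elevation: {1000*rms:.1f} mm")
```

    In a linear stochastic analysis such a PSD is not sampled at all: the response spectral density follows directly from the input PSD and the transfer functions, which is what makes the approach so much cheaper than time-domain nonlinear contact simulation.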

  17. Rock avalanche deposits in Alai Valley, Central Asia: misinterpretation of glacial record

    NASA Astrophysics Data System (ADS)

    Reznichenko, Natalya; Davies, Tim; Robinson, Tom; De Pascale, Gregory

    2013-04-01

    The reconstruction of Quaternary glaciations has been restricted by conventional approaches with resulting contradictions in interpretation of the regional glacial record, that recently have been subjected to critical re-evaluation. Along with uncertainties in dating techniques and their applicability to particular landforms (Kirkbride and Winkler, 2012), it has recently been demonstrated that the presence of rock avalanche debris in a landform can be unequivocally detected; this allows for the first time definitive identification of and distinction between glacial moraines and landslide deposits. It also identifies moraines that have formed due to rock avalanche deposition on glaciers, possibly with no associated climatic signal (Reznichenko et al., 2012). Confusion between landslide deposits and moraines is evident for ranges in Central Asia (e.g., Hewitt, 1999) where the least-studied glacial record is selectively correlated with established glacial chronologies in Alpine ranges, which in turn masks the actual glacial extent and their responses to climate change, tectonics and landsliding activity. We describe examples in the glaciated Alai Valley, a large intermountain depression between the Zaalay Range of the Northern Pamir and the Alay Range of the Southern Tien-Shan, showing that some large Quaternary deposits classically interpreted as moraines are of rock avalanche origin. Sediment from these deposits has been tested for the presence of agglomerates that are only produced under high stress conditions during rock avalanche motion, and are absent from glacial sediments (Reznichenko et al., 2012). This reveals that morphologically-similar deposits have radically different geneses: rock avalanche origin for a deposit in the Komansu river catchment and glacial origin for deposits in the Ashiktash and Kyzylart catchments.
The enormous Komansu rock avalanche deposit, probably triggered by a rupture of the Main Pamir thrust, currently covers about 100 km^2 with a minimum estimated volume of more than 1 x 10^9 m^3. Another, smaller rock avalanche deposit rests on the Lenin Glacier sediment in the neighbouring Ashiktash river catchment, and was previously suggested to originate from Mt. Lenin (7134 m). The revised origin of these deposits highlights the role of rock avalanches in glacial activity and in the resulting glacial record in this valley and other tectonically active areas of Central Asia. Although further investigation is required to detail the geneses, magnitudes and ages of these and other landforms in the valley, this study contributes explicit evidence for contamination of palaeoclimate proxies with data from non-climatic events, and reinforces the urgent need for revised interpretation of the glacial chronologies. Hewitt, K., 1999. Quaternary moraines vs. catastrophic rock avalanches in the Karakoram Himalaya, Northern Pakistan. Quaternary Research, v. 51, p. 220-237. Kirkbride, M.P., and Winkler, S., 2012. Correlation of Late Quaternary moraines: impact of climate variability, glacier response, and chronological resolution. Quaternary Science Reviews, v. 46, p. 1-29. Reznichenko, N.V., Davies, T.R.H., Shulmeister, J. and Larsen, S.H., 2012. A new technique for identifying rock-avalanche-sourced sediment in moraines and some paleoclimatic implications. Geology, v. 40, p. 319-322.

  18. Biodegradation of BOD and ammonia-free using bacterial consortium in aerated fixed film bioreactor (AF2B)

    NASA Astrophysics Data System (ADS)

    Prayitno, Rulianah, Sri; Saroso, Hadi; Meilany, Diah

    2017-06-01

    BOD and Ammonia-free (NH3-N) are pollutants of hospital wastewater which often exceed the quality standards, because biological processes in wastewater treatment plants (WWTP) have not been effective in degrading BOD and NH3-N. Therefore, a study of the factors that influence the biodegradation of BOD and NH3-N, including the choice of bacteria to improve the biodegradation process, is required. A bacterial consortium is a collection of several types of bacteria obtained from an isolation process, which is known to be more effective than a single bacterium in degrading pollutants. The AF2B, in turn, is a type of reactor in wastewater treatment systems that contains a filter media with a large surface area, so that the biodegradation of pollutants by microorganisms can be improved. The objective of this research is to determine the effect of starter volume and air supply on the decrease of BOD and NH3-N in hospital wastewater using a bacterial consortium in the AF2B in a batch process. The research was conducted in three stages: construction of the growth curve of the bacterial consortium, acclimatization of the bacterial consortium, and hospital wastewater treatment in the AF2B in a batch process. The variables used are the volume of starter (65%, 75%, and 85% in volume) and air supplies (2.5, 5, and 7.5 L/min). The materials used are hospital wastewater, a bacterial consortium (Pseudomonas diminuta, Pseudomonas capica, Bacillus sp., and Nitrobacter sp.), a blower, and the AF2B, a plastic basin containing a wasp-nest-shaped filter media used as a medium for growing the bacterial consortium. For the growth curve, a solid form of the bacterial consortium was dissolved in sterilized water, then grown in a nutrient broth (NB); shaking and sampling were done over time to determine the growth path of the bacterial consortium.
In the acclimatization process, bacterial isolates were grown using hospital wastewater as a medium that was added gradually, followed by the addition of nutrients and aeration. Furthermore, in the biodegradation process in the AF2B, the result of acclimatization (as a starter) was fed into the AF2B, then hospital wastewater was added at a certain volume (as a variable), followed by aeration at a certain flow rate (as a variable). Sampling was done over time to determine the decrease in the concentrations of BOD, NH3-N, and MLSS (Mixed Liquor Suspended Solids). BOD and Ammonia-free analyses were conducted using Winkler bottle titration and spectrophotometry; MLSS analysis used gravimetric methods. The results of the research show that a starter volume of 85% (v) and an air supply of 7.5 L/min can reduce BOD and NH3-N by 92% and 76% respectively. Moreover, the AF2B with the bacterial consortium degrades BOD and NH3-N effectively and very rapidly.

  19. Pulsating potentiometric titration technique for assay of dissolved oxygen in water at trace level.

    PubMed

    Sahoo, P; Ananthanarayanan, R; Malathi, N; Rajiniganth, M P; Murali, N; Swaminathan, P

    2010-06-11

    A simple but high-performance potentiometric titration technique using pulsating sensors has been developed for the assay of dissolved oxygen (DO) in water samples down to 10.0 microg L(-1) levels. The technique involves Winkler titration chemistry, commonly used for the determination of dissolved oxygen in water at mg L(-1) levels, with a modified methodology for accurate detection of the end point even at the 10.0 microg L(-1) levels of DO present in the sample. An indigenously built sampling cum pretreatment vessel has been deployed for the collection and chemical fixing of dissolved oxygen in water samples from a flowing water line without exposure to air. A potentiometric titration facility using pulsating sensors developed in-house is used to carry out the titration. The power of the titration technique has been demonstrated in the estimation of a very dilute iodine solution equivalent to 10 microg L(-1) O(2). Finally, several water samples containing dissolved oxygen from mg L(-1) to microg L(-1) levels were successfully analysed with excellent reproducibility using this new technique. The precision in measurement of DO in water at the 10 microg L(-1) O(2) level is 0.14 (n=5), RSD: 1.4%. Probably for the first time, a potentiometric titration technique has been successfully deployed for the assay of dissolved oxygen in water samples at 10 microg L(-1) levels. Copyright 2010 Elsevier B.V. All rights reserved.
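
    The classical Winkler arithmetic the technique builds on is simple; the paper's contribution is the end-point detection at microgram levels. A sketch with illustrative titration values (not the paper's measurements):

```python
# Classical Winkler titration arithmetic for dissolved oxygen.
# DO [mg/L] = (V_thio * N_thio * 8000) / V_sample, where
# 8000 = equivalent weight of O2 (8 g/eq) expressed per mL of titrant.
V_thio = 2.05    # thiosulfate titrant volume at the end point [mL] (assumed)
N_thio = 0.025   # titrant normality [eq/L] (assumed)
V_sample = 50.0  # titrated sample volume [mL] (assumed)

do_mg_per_L = V_thio * N_thio * 8000 / V_sample
print(f"dissolved oxygen: {do_mg_per_L:.2f} mg/L")  # 8.20 mg/L for these values
```

    At microgram-per-litre DO levels the same stoichiometry holds, but the end-point inflection becomes tiny, which is why the sharp pulsating-sensor potentiometric detection matters.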

  20. A global database of ant species abundances

    USGS Publications Warehouse

    Gibb, Heloise; Dunn, Rob R.; Sanders, Nathan J.; Grossman, Blair F.; Photakis, Manoli; Abril, Silvia; Agosti, Donat; Andersen, Alan N.; Angulo, Elena; Armbrecht, Ingre; Arnan, Xavier; Baccaro, Fabricio B.; Bishop, Tom R.; Boulay, Raphael; Bruhl, Carsten; Castracani, Cristina; Cerda, Xim; Del Toro, Israel; Delsinne, Thibaut; Diaz, Mireia; Donoso, David A.; Ellison, Aaron M.; Enriquez, Martha L.; Fayle, Tom M.; Feener Jr., Donald H.; Fisher, Brian L.; Fisher, Robert N.; Fitpatrick, Matthew C.; Gomez, Cristanto; Gotelli, Nicholas J.; Gove, Aaron; Grasso, Donato A.; Groc, Sarah; Guenard, Benoit; Gunawardene, Nihara; Heterick, Brian; Hoffmann, Benjamin; Janda, Milan; Jenkins, Clinton; Kaspari, Michael; Klimes, Petr; Lach, Lori; Laeger, Thomas; Lattke, John; Leponce, Maurice; Lessard, Jean-Philippe; Longino, John; Lucky, Andrea; Luke, Sarah H.; Majer, Jonathan; McGlynn, Terrence P.; Menke, Sean; Mezger, Dirk; Mori, Alessandra; Moses, Jimmy; Munyai, Thinandavha Caswell; Pacheco, Renata; Paknia, Omid; Pearce-Duvet, Jessica; Pfeiffer, Martin; Philpott, Stacy M.; Resasco, Julian; Retana, Javier; Silva, Rogerio R.; Sorger, Magdalena D.; Souza, Jorge; Suarez, Andrew V.; Tista, Melanie; Vasconcelos, Heraldo L.; Vonshak, Merav; Weiser, Michael D.; Yates, Michelle; Parr, Catherine L.

    2017-01-01

    What forces structure ecological assemblages? A key limitation to general insights about assemblage structure is the availability of data that are collected at a small spatial grain (local assemblages) and a large spatial extent (global coverage). Here, we present published and unpublished data from 51,388 ant abundance and occurrence records of more than 2693 species and 7953 morphospecies from local assemblages collected at 4212 locations around the world. Ants were selected because they are diverse and abundant globally, comprise a large fraction of animal biomass in most terrestrial communities, and are key contributors to a range of ecosystem functions. Data were collected between 1949 and 2014, and include, for each geo-referenced sampling site, both the identity of the ants collected and details of sampling design, habitat type and degree of disturbance. The aim of compiling this dataset was to provide comprehensive species abundance data in order to test relationships between assemblage structure and environmental and biogeographic factors. Data were collected using a variety of standardised methods, such as pitfall and Winkler traps, and will be valuable for studies investigating large-scale forces structuring local assemblages. Understanding such relationships is particularly critical under current rates of global change. We encourage authors holding additional data on systematically collected ant assemblages, especially those from dry, cold, and remote areas, to contact us and contribute their data to this growing dataset.

  1. [Oxygen consumption rate and effects of hypoxia stress on enzyme activities of Sepiella maindroni].

    PubMed

    Wang, Chun-lin; Wu, Dan-hua; Dong, Tian-ye; Jiang, Xia-min

    2008-11-01

    The oxygen consumption rate and suffocation point of Sepiella maindroni were determined by measuring dissolved oxygen in control and experimental respiration chambers with Winkler's method, and the changes of S. maindroni enzyme activities under different levels of hypoxia stress were studied. The results indicated that the oxygen consumption rate of S. maindroni exhibited an obvious diurnal fluctuation of 'up-down-up-down', and correlated positively with water temperature (16 degrees C-28 degrees C) and illumination (3-500 micromol x m(-2) x s(-1)) and negatively with water pH (6.25-9.25). With increasing water salinity from 18.1 to 29.8, the oxygen consumption rate had a variation of 'up-down-up', being lowest at salinity 24.8. Female S. maindroni had a higher oxygen consumption rate than males. The suffocation point of S. maindroni decreased with increasing body mass, and that of individuals of (38.70 +/- 0.52) g body mass was (0.9427 +/- 0.0318) mg x L(-1). With increasing hypoxia stress, the activities of superoxide dismutase (SOD), catalase (CAT) and peroxidase (POD) decreased after an initial increase, lipase activity decreased, protease activity showed a 'decrease-increase-decrease' pattern, and lactate dehydrogenase (LDH) activity first increased and then decreased. The enzyme activities were higher under hypoxia stress than under normal conditions.
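
    A hedged sketch of the closed-chamber respirometry calculation implied by the method (Winkler DO in a control chamber versus an experimental chamber holding the animal). All values are illustrative, not the study's measurements.

```python
# Closed-chamber oxygen consumption rate from paired Winkler DO readings.
# rate [mg O2 / (g*h)] = (DO_control - DO_experimental) * V / (W * t)
do_control = 7.80  # DO in the animal-free control chamber [mg/L] (assumed)
do_exp = 6.55      # DO in the experimental chamber after the run [mg/L] (assumed)
V = 3.0            # chamber water volume [L] (assumed)
W = 38.70          # wet body mass [g] (assumed)
t = 1.0            # run duration [h] (assumed)

rate = (do_control - do_exp) * V / (W * t)
print(f"oxygen consumption rate: {rate:.4f} mg O2 g^-1 h^-1")
```

    The control chamber corrects for microbial and chemical oxygen demand of the water itself, so only the animal's consumption enters the difference.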

  2. Creep rupture analysis of a beam resting on high temperature foundation

    NASA Technical Reports Server (NTRS)

    Gu, Randy J.; Cozzarelli, Francis A.

    1988-01-01

    A simplified uniaxial strain controlled creep damage law is deduced with the use of experimental observation from a more complex strain dependent law. This creep damage law correlates the creep damage, which is interpreted as the density variation in the material, directly with the accumulated creep strain. Based on the deduced uniaxial strain controlled creep damage law, a continuum mechanical creep rupture analysis is carried out for a beam resting on a high temperature elastic (Winkler) foundation. The analysis includes the determination of the nondimensional time for initial rupture, the propagation of the rupture front with the associated thinning of the beam, and the influence of creep damage on the deflection of the beam. Creep damage starts accumulating in the beam as soon as the load is applied, and a creep rupture front develops at and propagates from the point at which the creep damage first reaches its critical value. By introducing a series of fundamental assumptions within the framework of technical Euler-Bernoulli type beam theory, a governing set of integro-differential equations is derived in terms of the nondimensional bending moment and the deflection. These governing equations are subjected to a set of interface conditions at the propagating rupture front. A numerical technique is developed to solve the governing equations together with the interface equations, and the computed results are presented and discussed in detail.

  3. A non-classical Mindlin plate model incorporating microstructure, surface energy and foundation effects.

    PubMed

    Gao, X-L; Zhang, G Y

    2016-07-01

    A non-classical model for a Mindlin plate resting on an elastic foundation is developed in a general form using a modified couple stress theory, a surface elasticity theory and a two-parameter Winkler-Pasternak foundation model. It includes all five kinematic variables possible for a Mindlin plate. The equations of motion and the complete boundary conditions are obtained simultaneously through a variational formulation based on Hamilton's principle, and the microstructure, surface energy and foundation effects are treated in a unified manner. The newly developed model contains one material length-scale parameter to describe the microstructure effect, three surface elastic constants to account for the surface energy effect, and two foundation parameters to capture the foundation effect. The current non-classical plate model reduces to its classical elasticity-based counterpart when the microstructure, surface energy and foundation effects are all suppressed. In addition, the new model includes the Mindlin plate models considering the microstructure dependence or the surface energy effect or the foundation influence alone as special cases, recovers the Kirchhoff plate model incorporating the microstructure, surface energy and foundation effects, and degenerates to the Timoshenko beam model including the microstructure effect. To illustrate the new Mindlin plate model, the static bending and free vibration problems of a simply supported rectangular plate are analytically solved by directly applying the general formulae derived.
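
    For orientation, a hedged sketch of the classical limit the abstract mentions, in which microstructure and surface-energy effects are suppressed: a simply supported Kirchhoff plate on the two-parameter Winkler-Pasternak foundation. All values are illustrative assumptions.

```python
import math

# Simply supported classical Kirchhoff plate on a Winkler-Pasternak
# foundation (local limit; no microstructure or surface effects).
E, nu = 70e9, 0.3   # Young's modulus [Pa], Poisson ratio (assumed)
h = 0.01            # plate thickness [m] (assumed)
rho = 2700.0        # density [kg/m^3] (assumed)
a, b = 1.0, 1.0     # in-plane dimensions [m] (assumed)
k_w = 1.0e7         # Winkler parameter [Pa/m] (assumed)
k_p = 1.0e5         # Pasternak shear-layer parameter [N/m] (assumed)

D = E*h**3/(12*(1 - nu**2))  # flexural rigidity

def omega(m, n):
    """Natural frequency of mode (m, n) [rad/s]."""
    lam = (m*math.pi/a)**2 + (n*math.pi/b)**2
    return math.sqrt((D*lam**2 + k_p*lam + k_w)/(rho*h))

print(f"fundamental frequency: {omega(1, 1)/(2*math.pi):.1f} Hz")
```

    Both foundation parameters raise every natural frequency; the non-classical model of the abstract adds stiffening from the material length-scale parameter and the surface elastic constants on top of this baseline.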

  4. A sound and efficient measure of joint congruence.

    PubMed

    Conconi, Michele; Castelli, Vincenzo Parenti

    2014-09-01

    In the medical world, the term "congruence" is used to describe by visual inspection how the articular surfaces mate each other, evaluating the joint capability to distribute an applied load from a purely geometrical perspective. Congruence is commonly employed for assessing articular physiology and for the comparison between normal and pathological states. A measure of it would thus represent a valuable clinical tool. Several approaches for the quantification of joint congruence have been proposed in the biomechanical literature, differing on how the articular contact is modeled. This makes it difficult to compare different measures. In particular, in previous articles a congruence measure has been presented which proved to be efficient and suitable for the clinical practice, but it was still empirically defined. This article aims at providing a sound theoretical support to this congruence measure by means of the Winkler elastic foundation contact model which, with respect to others, has the advantage to hold also for highly conforming surfaces as most of the human articulations are. First, the geometrical relation between the applied load and the resulting peak of pressure is analytically derived from the elastic foundation contact model, providing a theoretically sound approach to the definition of a congruence measure. Then, the capability of congruence measure to capture the same geometrical relation is shown. Finally, the reliability of congruence measure is discussed. © IMechE 2014.
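
    A hedged numeric sketch of the Winkler (elastic foundation) contact model behind the congruence measure: pressure is proportional to local interpenetration, so for the same load a more conforming surface pair spreads contact over a larger area and lowers the peak pressure. Geometry and foundation modulus below are illustrative assumptions, not the article's data.

```python
import math

# Winkler (elastic foundation) contact: p(r) = k * delta(r), with
# delta(r) ~ delta0 - r^2/2 * (1/R1 - 1/R2) for a ball of radius R1 in
# a conforming socket of radius R2 (R2 > R1). Values are assumed.
E_over_h = 1.0e12   # foundation modulus per unit layer thickness [Pa/m]
P = 500.0           # applied load [N]

def peak_pressure(R1, R2):
    curv = 0.5*(1.0/R1 - 1.0/R2)   # relative curvature [1/m]
    # Load balance: P = integral of k*delta over the contact circle
    #             = k * pi * curv * a**4 / 2  (using delta0 = curv*a^2)
    a = (2*P/(math.pi*E_over_h*curv))**0.25   # contact radius
    return E_over_h*curv*a*a                  # peak pressure p(0)

p_low = peak_pressure(0.020, 0.021)   # well-conforming (congruent) pair
p_high = peak_pressure(0.020, 0.030)  # poorly conforming pair
print(f"{p_low/1e6:.1f} MPa vs {p_high/1e6:.1f} MPa for the same load")
```

    Unlike Hertz theory, this model stays valid as R2 approaches R1, which is why it suits highly conforming human joints and can anchor a geometric congruence measure.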

  5. Criteria on global boundedness versus finite time blow-up to a two-species chemotaxis system with two chemicals

    NASA Astrophysics Data System (ADS)

    Yu, Hao; Wang, Wei; Zheng, Sining

    2018-02-01

    This paper considers the two-species chemotaxis system with two chemicals in a smooth bounded domain Ω ⊂ ℝ², subject to no-flux boundary conditions, with χ, ξ, α, β, γ, δ > 0. We obtain a blow-up criterion: if m_1 m_2 − 2π(m_1/(χβ) + m_2/(ξδ)) > 0, where m_1 := ∫_Ω u_0(x)dx and m_2 := ∫_Ω w_0(x)dx, then there exist finite time blow-up solutions to the system. When χ = ξ = β = δ = 1, the blow-up criterion becomes m_1 m_2 − 2π(m_1 + m_2) > 0, and the global boundedness of solutions is furthermore established with α = γ = 1 under the condition that max{m_1, m_2} < 4π. This improves the current results for finite time blow-up with min{m_1, m_2} > 4π and global boundedness with max{m_1, m_2} < 4/C_GN, respectively, in Tao and Winkler (2015 Discrete Contin. Dyn. Syst. Ser. B 20 3165-83). Supported by the National Natural Science Foundation of China (11171048), the Science Foundation of Liaoning Education Department grant (LYB201601) and the Fundamental Research Funds for the Central Universities (DUT16LK24).
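
The blow-up criterion above is a plain sign check on the initial masses; a small sketch (parameter names follow the abstract, default values correspond to the χ = ξ = β = δ = 1 case):

```python
import math

def blows_up(m1, m2, chi=1.0, xi=1.0, beta=1.0, delta=1.0):
    """Evaluate the paper's finite-time blow-up criterion
    m1*m2 - 2*pi*(m1/(chi*beta) + m2/(xi*delta)) > 0,
    where m1, m2 are the initial masses of u0 and w0."""
    return m1 * m2 - 2 * math.pi * (m1 / (chi * beta) + m2 / (xi * delta)) > 0
```

With the defaults the check reduces to m_1 m_2 − 2π(m_1 + m_2) > 0, so equal masses at exactly 4π sit on the boundary of the criterion.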

  6. A survey of the carbonate system in the Levantine Mediterranean Sub-basin

    NASA Astrophysics Data System (ADS)

    El Rahman Hassoun, Abed; Gemayel, Elissar; Abboud-Abi Saab, Marie

    2016-04-01

    The carbonate system is very important since it regulates the pH of seawater and controls the circulation of CO2 between the various natural reservoirs. Recently, several oceanographic cruises have been carried out to assess this system in the Mediterranean Sea. However, the measurements undertaken to quantify the carbonate system parameters in the Levantine Sub-basin remain scarce and occasional. In our study, we are compiling the occasional data taken near Lebanon and surveying the carbonate system in Lebanese seawaters for the first time by fixing two stations off the Lebanese coast to study the monthly and annual variations of this system through the water column. The dominant processes changing the carbonate chemistry of seawater can be described by considering changes in the total alkalinity (AT) and the total dissolved inorganic carbon (CT). To measure these parameters, the collected seawater samples are titrated via potentiometric acid titration using a closed cell (DOE, 1994). Further, the temperature and the salinity are measured in situ. Dissolved oxygen concentrations are measured using a Winkler iodometric titration. Nutrients (phosphates, nitrates, nitrites), chlorophyll a and phytoplankton populations are also studied. The compilation of the carbonate system data taken from the cruises conducted near Cyprus (MedSeA 2013, Meteor 84-3, BOUM, Meteor 51-2) indicates that the AT and CT averages are equal to 2617 ± 15 and 2298 ± 9 μmol kg-1, respectively, showing high AT and CT concentrations compared to those measured in other Mediterranean sub-basins. Our survey will provide a brand new dataset that will be useful to better comprehend the carbonate system in the Mediterranean Sea in general, and the actual situation of water mass formation in the Levantine Sub-basin after the Eastern Mediterranean Transient (EMT) in particular. Moreover, this work will permit us to estimate the air-sea CO2 fluxes, the anthropogenic CO2 concentrations and the acidification rate in Lebanese seawaters for the first time. Keywords: Total alkalinity, total dissolved inorganic carbon, carbonate system, Lebanon, Levantine Sub-basin, Mediterranean Sea.
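
The Winkler iodometric titration mentioned above converts a thiosulfate titre into a dissolved-oxygen concentration through the stoichiometric equivalence of 8 mg of O2 per milliequivalent of thiosulfate. A minimal sketch of that standard conversion (volumes and normality below are illustrative, not values from this survey):

```python
def winkler_do_mg_per_l(v_thio_ml, n_thio, v_sample_ml):
    """Dissolved oxygen (mg/L) from a Winkler iodometric titration:
    DO = v_thio * n_thio * 8 * 1000 / v_sample,
    since each milliequivalent of thiosulfate titrated corresponds
    to 8 mg of O2 in the fixed sample."""
    return v_thio_ml * n_thio * 8.0 * 1000.0 / v_sample_ml
```

For example, 2.0 mL of 0.025 N thiosulfate consumed by a 50 mL sample corresponds to 8.0 mg/L of dissolved oxygen.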

  7. A global database of ant species abundances.

    PubMed

    Gibb, Heloise; Dunn, Rob R; Sanders, Nathan J; Grossman, Blair F; Photakis, Manoli; Abril, Silvia; Agosti, Donat; Andersen, Alan N; Angulo, Elena; Armbrecht, Inge; Arnan, Xavier; Baccaro, Fabricio B; Bishop, Tom R; Boulay, Raphaël; Brühl, Carsten; Castracani, Cristina; Cerda, Xim; Del Toro, Israel; Delsinne, Thibaut; Diaz, Mireia; Donoso, David A; Ellison, Aaron M; Enriquez, Martha L; Fayle, Tom M; Feener, Donald H; Fisher, Brian L; Fisher, Robert N; Fitzpatrick, Matthew C; Gómez, Crisanto; Gotelli, Nicholas J; Gove, Aaron; Grasso, Donato A; Groc, Sarah; Guenard, Benoit; Gunawardene, Nihara; Heterick, Brian; Hoffmann, Benjamin; Janda, Milan; Jenkins, Clinton; Kaspari, Michael; Klimes, Petr; Lach, Lori; Laeger, Thomas; Lattke, John; Leponce, Maurice; Lessard, Jean-Philippe; Longino, John; Lucky, Andrea; Luke, Sarah H; Majer, Jonathan; McGlynn, Terrence P; Menke, Sean; Mezger, Dirk; Mori, Alessandra; Moses, Jimmy; Munyai, Thinandavha Caswell; Pacheco, Renata; Paknia, Omid; Pearce-Duvet, Jessica; Pfeiffer, Martin; Philpott, Stacy M; Resasco, Julian; Retana, Javier; Silva, Rogerio R; Sorger, Magdalena D; Souza, Jorge; Suarez, Andrew; Tista, Melanie; Vasconcelos, Heraldo L; Vonshak, Merav; Weiser, Michael D; Yates, Michelle; Parr, Catherine L

    2017-03-01

    What forces structure ecological assemblages? A key limitation to general insights about assemblage structure is the availability of data that are collected at a small spatial grain (local assemblages) and a large spatial extent (global coverage). Here, we present published and unpublished data from 51,388 ant abundance and occurrence records of more than 2,693 species and 7,953 morphospecies from local assemblages collected at 4,212 locations around the world. Ants were selected because they are diverse and abundant globally, comprise a large fraction of animal biomass in most terrestrial communities, and are key contributors to a range of ecosystem functions. Data were collected between 1949 and 2014, and include, for each geo-referenced sampling site, both the identity of the ants collected and details of sampling design, habitat type, and degree of disturbance. The aim of compiling this data set was to provide comprehensive species abundance data in order to test relationships between assemblage structure and environmental and biogeographic factors. Data were collected using a variety of standardized methods, such as pitfall and Winkler traps, and will be valuable for studies investigating large-scale forces structuring local assemblages. Understanding such relationships is particularly critical under current rates of global change. We encourage authors holding additional data on systematically collected ant assemblages, especially those from dry, cold, and remote areas, to contact us and contribute their data to this growing data set. © 2016 by the Ecological Society of America.

  8. Diversity of the ground-dwelling ant fauna (Hymenoptera: Formicidae) of a moist, Montane forest of the semi-arid Brazilian "Nordeste".

    PubMed

    Hites, N L; Mourão, M A N; Araújo, F O; Melo, M V C; de Biseau, J C; Quinet, Y

    2005-01-01

    Although the so-called "green islands" of the semi-arid Brazilian "Nordeste" are economically, socially, and ecologically important, relatively little is known about their biodiversity. We present the results of the first survey of the ground-dwelling ant fauna of a secondary forest in the Serra de Baturité (4°05'-4°40' S / 38°30'-39°10' W), among the biggest of the moist, montane forests of the state of Ceará, Brazil. From February to March 2001, samples were taken every 50 m along twelve 200 m transects, each separated from the others by at least 50 m and cut on either side of a recreational trail. Where possible, two transects were cut from the same starting point on the trail, one on either side. At each sample site two methods were used, as recommended in the ALL protocol: a pitfall trap and the treatment of 1 m² of leaf litter with the Winkler extractor. The myrmecofauna of the Serra de Baturité is quite diverse: individuals from 72 species, 23 genera, and six subfamilies were collected. The observed patterns of species richness show the same tendencies noted in other tropical regions, particularly the frequency-of-capture distribution with many rare and few abundant species. Differences with the Atlantic and Amazonian forests were also observed, especially in the relative importance of the Ponerinae and Formicinae subfamilies, indicating a possible influence of the surrounding "caatinga" (savanna-like ecosystem) on the myrmecofauna of the moist, montane forest.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lifeng, E-mail: walfe@nuaa.edu.cn; Hu, Haiyan

    The thermal vibration of a rectangular single-layered graphene sheet is investigated by using a rectangular nonlocal elastic plate model with quantum effects taken into account when the law of energy equipartition is unreliable. The relation between the temperature and the root mean squared (RMS) amplitude of vibration at any point of the rectangular single-layered graphene sheet in the simply supported case is derived first from the rectangular nonlocal elastic plate model, with the strain gradient of the second order taken into consideration so as to characterize the effect of the microstructure of the graphene sheet. Then, the RMS amplitude of thermal vibration of a rectangular single-layered graphene sheet simply supported on an elastic foundation is derived. The study shows that the RMS amplitude of the rectangular single-layered graphene sheet predicted from the quantum theory is lower than that predicted from the law of energy equipartition. The maximal relative difference of RMS amplitude of thermal vibration appears at the sheet corners. The microstructure of the graphene sheet has little effect on the thermal vibrations of lower modes, but exhibits an obvious effect on the thermal vibrations of higher modes. The quantum effect is more important for the thermal vibration of higher modes in the case of smaller sides and lower temperature. The relative difference of maximal RMS amplitude of thermal vibration of a rectangular single-layered graphene sheet decreases monotonically with an increase of temperature. The absolute difference of maximal RMS amplitude of thermal vibration of a rectangular single-layered graphene sheet increases slowly with the rising of the Winkler foundation modulus.
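
The classical-versus-quantum gap the abstract describes comes from replacing the equipartition energy k_B·T of each vibration mode with the Planck (Bose-Einstein) mean energy. A sketch of the per-mode energy ratio (zero-point energy omitted for simplicity; this is the textbook relation, not the paper's full nonlocal plate model):

```python
import math

KB = 1.380649e-23       # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def mode_energy_ratio(omega, T):
    """Ratio of the Planck mean thermal energy of a harmonic mode,
    hbar*omega / (exp(hbar*omega/(kB*T)) - 1), to the classical
    equipartition value kB*T. Since RMS amplitude ~ sqrt(<E>)/omega,
    a ratio below 1 means the quantum prediction of the thermal
    vibration amplitude is lower, as the abstract states."""
    x = HBAR * omega / (KB * T)
    return x / math.expm1(x)  # tends to 1 in the classical limit x -> 0
```

The ratio approaches 1 for low-frequency modes at room temperature and collapses toward zero for high-frequency modes at low temperature, matching the abstract's observation that the quantum effect matters most for higher modes, smaller sheets and lower temperatures.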

  10. Measuring respiration rates in marine fish larvae: challenges and advances.

    PubMed

    Peck, M A; Moyano, M

    2016-01-01

    Metabolic costs can be extremely high in marine fish larvae and gaining reliable estimates of the effects of intrinsic and extrinsic factors on those costs is important to understand environmental constraints on early growth and survival. This review provides an historical perspective of measurements of larval marine fish respiration (O2 consumption) including the methods (Winkler, manometric, polarographic, paramagnetic and optodes) and systems (closed system to intermittent-flow) used. This study compares and systematically reviews the results (metabolic rates, ontogenetic changes and taxonomic differences) obtained from 59 studies examining 53 species from 30 families. Standard (anaesthetized or darkness), routine and active respiration rates were reported in 14, 94 and 8% of the studies and much more work has been performed on larvae of temperate (88%) compared with tropical (9%) and polar (3%) species. More than 35% of the studies have been published since 2000 owing to both advances in oxygen sensors and the growing emphasis on understanding physiological effects of environmental change. Common protocols are needed to facilitate cross-taxa comparisons such as the effect of temperature (Q10 : 1·47-3·47), body mass (slope of allometric changes in O2 consumption rate from 0·5 to 1·3) and activity level on metabolic costs as measured via respiration rate. A set of recommendations is provided that will make it easier for researchers to design measurement systems, to judge the reliability of measurements and to make inter-comparisons among studies and species. © 2016 The Fisheries Society of the British Isles.
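
The Q10 values quoted above use the standard temperature-coefficient definition for comparing respiration rates measured at two temperatures; a one-function sketch:

```python
def q10(rate1, t1, rate2, t2):
    """Temperature coefficient Q10 = (rate2/rate1) ** (10 / (t2 - t1)),
    the factor by which a metabolic rate changes over a 10 degree C
    interval (t1, t2 in degrees C, t2 != t1)."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))
```

A rate that doubles between 10 and 20 °C gives Q10 = 2, inside the 1.47-3.47 range reported across the reviewed larval fish studies.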

  11. The diversity of beetle assemblages in different habitat types in Sabah, Malaysia.

    PubMed

    Chung, A Y; Eggleton, P; Speight, M R; Hammond, P M; Chey, V K

    2000-12-01

    The diversity of beetle assemblages in different habitat types (primary forest, logged forest, acacia plantation and oil palm plantation) in Sabah, Malaysia was investigated using three different methods based on habitat levels (Winkler sampling, flight-interception-trapping and mist-blowing). The overall diversity was extremely high, with 1711 species recorded from only 8028 individuals and 81 families (115 family and subfamily groups). Different degrees of environmental changes had varying effects on the beetle species richness and abundance, with oil palm plantation assemblage being most severely affected, followed by acacia plantation and then logged forest. A few species became numerically dominant in the oil palm plantation. In terms of beetle species composition, the acacia fauna showed much similarity with the logged forest fauna, and the oil palm fauna was very different from the rest. The effects of environmental variables (number of plant species, sapling and tree densities, amount of leaf litter, ground cover, canopy cover, soil pH and compaction) on the beetle assemblage were also investigated. Leaf litter correlated with species richness, abundance and composition of subterranean beetles. Plant species richness, tree and sapling densities correlated with species richness, abundance and composition of understorey beetles while ground cover correlated only with the species richness and abundance of these beetles. Canopy cover correlated only with arboreal beetles. In trophic structure, predators represented more than 40% of the species and individuals. Environmental changes affected the trophic structure with proportionally more herbivores (abundance) but fewer predators (species richness and abundance) in the oil palm plantation. Biodiversity, conservation and practical aspects of pest management were also highlighted in this study.
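
Cross-habitat diversity comparisons like those above are often summarized with an index such as Shannon's H'; a generic sketch (one common choice for assemblage data, not necessarily the statistic used in this study):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over species
    abundance counts; higher values indicate richer, more even
    assemblages (e.g. primary forest vs oil palm plantation)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)
```

An assemblage dominated by a single numerically dominant species, as observed in the oil palm plantation, drives H' toward zero even when total abundance is high.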

  12. Boundedness for a 3D chemotaxis-Stokes system with porous medium diffusion and tensor-valued chemotactic sensitivity

    NASA Astrophysics Data System (ADS)

    Wang, Yilong; Li, Xie

    2017-04-01

    This paper deals with the following chemotaxis-Stokes system:

    n_t + u·∇n = Δn^m − ∇·(nS(x,n,c)·∇c),   x ∈ Ω, t > 0,
    c_t + u·∇c = Δc − nf(c),   x ∈ Ω, t > 0,
    u_t = Δu + ∇P + n∇φ,   x ∈ Ω, t > 0,
    ∇·u = 0,   x ∈ Ω, t > 0,

    under no-flux boundary conditions in a bounded domain Ω ⊂ ℝ³ with smooth boundary, where m ≥ 1, φ ∈ W^{1,∞}(Ω), and f and S are given functions with values in [0, ∞) and ℝ^{3×3}, respectively. Here S satisfies |S(x,n,c)| ≤ C_S(1+n)^{−α} for some C_S > 0 and α ≥ 0, and m + α > 7/6 ensures the global existence of a bounded weak solution. Our result covers completely and improves the recent result by Wang and Cao (Discrete Contin Dyn Syst Ser B 20:3235-3254, 2015), which asserts, just in the case m = 1, the global existence of solutions, but without boundedness, and that by Winkler (Calc Var Partial Differ Equ 54:3789-3828, 2015), which only involves the case α = 0 and requires the convexity of the domain.

  13. Studying the influence of surface effects on vibration behavior of size-dependent cracked FG Timoshenko nanobeam considering nonlocal elasticity and elastic foundation

    NASA Astrophysics Data System (ADS)

    Ghadiri, Majid; Soltanpour, Mahdi; Yazdi, Ali; Safi, Mohsen

    2016-05-01

    Free transverse vibration of a size-dependent cracked functionally graded (FG) Timoshenko nanobeam resting on a polymer elastic foundation is investigated in the present study. All of the surface effects (surface density, surface elasticity and residual surface tension) are studied, and satisfaction of the balance condition between the nanobeam and its surfaces is discussed. According to the power-law distribution, the material properties of the FG nanobeam are assumed to vary continuously across the thickness. To consider the small-scale effect, Eringen's nonlocal theory is used; to account for the effect of the polymer elastic foundation, the Winkler model is employed. The equations of motion of the FG Timoshenko nanobeam and the boundary conditions are obtained using Hamilton's principle. To find analytical solutions for the equations of motion of the FG nanobeam, the separation of variables method is employed. Two cases of boundary conditions, i.e., simply supported-simply supported (SS) and clamped-clamped (CC), are investigated in the present work. Numerical results demonstrate good agreement between the results of the present study and some available cases in the literature. The emphasis of the present study is on investigating the effect of various parameters such as crack severity, crack position, gradient index, mode number, nonlocal parameter, elastic foundation parameter and nanobeam length. It is clearly revealed that the vibrational behavior of an FG nanobeam depends significantly on these effects. These numerical results can also serve as benchmarks for future studies of FG nanobeams.
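
As a much simpler baseline for the foundation effect discussed above, the classical (local, uncracked, homogeneous) Euler-Bernoulli frequencies of a simply supported beam on a Winkler foundation have a closed form; a sketch with illustrative properties (not the paper's nonlocal cracked Timoshenko model):

```python
import math

def ss_beam_winkler_freq(n, E, I, rho, A, L, k):
    """n-th natural circular frequency (rad/s) of a simply supported
    Euler-Bernoulli beam on a Winkler foundation of modulus k (N/m^2):
        omega_n = sqrt((E*I*(n*pi/L)**4 + k) / (rho*A)).
    Shown only to illustrate how the foundation parameter stiffens
    every mode; the paper's model adds nonlocal, crack and surface
    effects on top of this picture."""
    beta = n * math.pi / L  # wavenumber of the n-th sine mode
    return math.sqrt((E * I * beta**4 + k) / (rho * A))
```

Increasing k raises every natural frequency, which is the qualitative role the elastic foundation parameter plays in the paper's parametric study.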

  14. Technology Performance Report: Duke Energy Notrees Wind Storage Demonstration Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wehner, Jeff; Mohler, David; Gibson, Stuart

    2015-11-01

    Duke Energy Renewables owns and operates the Notrees Wind Farm in west Texas’s Ector and Winkler counties. The wind farm, which was commissioned in April 2009, has a total capacity of 152.6 MW generated by 55 Vestas V82 turbines, one Vestas V90 experimental turbine, and 40 GE 1.5-MW turbines. The Vestas V82 turbines have a generating capacity of 1.65 MW each, the Vestas V90 turbine has a generating capacity of 1.86 MW, and the GE turbines have a generating capacity of 1.5 MW each. The objective of the Notrees Wind Storage Demonstration Project is to validate that energy storage increases the value and practical application of intermittent wind generation and is commercially viable at utility scale. The project incorporates both new and existing technologies and techniques to evaluate the performance and potential of wind energy storage. In addition, it could serve as a model for others to adopt and replicate. Wind power resources are expected to play a significant part in reducing greenhouse gas emissions from electric power generation by 2030. However, the large variability and intermittent nature of wind presents a barrier to integrating it within electric markets, particularly when competing against conventional generation that is more reliable. In addition, wind power production often peaks at night or at other times when demand and electricity prices are lowest. Energy storage systems can overcome those barriers and enable wind to become a valuable asset and equal competitor to conventional fossil fuel generation.

  15. Clock gene variation in Tachycineta swallows

    PubMed Central

    Dor, Roi; Cooper, Caren B; Lovette, Irby J; Massoni, Viviana; Bulit, Flor; Liljesthrom, Marcela; Winkler, David W

    2012-01-01

    Many animals use photoperiod cues to synchronize reproduction with environmental conditions and thereby improve their reproductive success. The circadian clock, which creates endogenous behavioral and physiological rhythms typically entrained to photoperiod, is well characterized at the molecular level. Recent work provided evidence for an association between Clock poly-Q length polymorphism and latitude and, within a population, an association with the date of laying and the length of the incubation period. Despite relatively high overall breeding synchrony, the timing of clutch initiation has a large impact on the fitness of swallows in the genus Tachycineta. We compared length polymorphism in the Clock poly-Q region among five populations from five different Tachycineta species that breed across a hemisphere-wide latitudinal gradient (Fig. 1). Clock poly-Q variation was not associated with latitude; however, there was an association between Clock poly-Q allele diversity and the degree of clutch size decline within breeding seasons. We did not find evidence for an association between Clock poly-Q variation and date of clutch initiation for any of the five Tachycineta species, nor did we find a relationship between incubation duration and Clock genotype. Thus, there is no general association between latitude, breeding phenology, and Clock polymorphism in this clade of closely related birds. Figure 1 Photos of Tachycineta swallows that were used in this study: A) T. bicolor from Ithaca, New York, B) T. leucorrhoa from Chascomús, Argentina, C) T. albilinea from Hill Bank, Belize, D) T. meyeni from Puerto Varas, Chile, and E) T. thalassina from Mono Lake, California. Photographers: B: Valentina Ferretti; A, C-E: David Winkler. PMID:22408729

  16. Geologic map of the Wrangell-Saint Elias National Park and Reserve, Alaska

    USGS Publications Warehouse

    Richter, Donald H.; Preller, Cindi C.; Labay, Keith A.; Shew, Nora B.

    2006-01-01

    Wrangell-Saint Elias National Park and Preserve, the largest national park within the U.S. National Park Service system, extends from the northern Pacific Ocean to beyond the eastern Alaska Range into interior Alaska. It features impressively spectacular scenery such as high and craggy mountains, active and ancient volcanoes, expansive ice fields, immense tidewater glaciers, and a myriad of alpine glaciers. The park also includes the famous Kennecott Mine, a world-class copper deposit that was mined from 1911 to 1938, and its remnant ghost town, which is now a National Historic Landmark. Geologic investigations encompassing Wrangell-Saint Elias National Park and Preserve began in 1796, with Dmitriv Tarkhanov, a Russian mining engineer, who unsuccessfully ventured up the Copper River in search of rumored copper. Lieutenant H.T. Allen (1897) of the U.S. Army made a successful epic summer journey with a limited military crew up the Copper River in 1885, across the Alaska Range, and down the Tanana and Yukon Rivers. Allen's crew was supported by a prospector named John Bremner and local Eyak and Ahtna native guides whose tribes controlled access into the Copper River basin. Allen witnessed the Ahtnas' many uses of the native copper. His stories about the copper prompted prospectors to return to this area in search of the rich copper ore in the years following his journey. The region boasts a rich mining and exploration history prior to becoming a park in 1980. Several U.S. Geological Survey geologists have conducted reconnaissance surveys in the area since Allen's explorations. This map is the result of their work and is enhanced by more detailed investigations, which began in the late 1950s and are still continuing. For a better understanding of the processes that have shaped the geology of the park and a history of the geologic investigations in the area, we recommend U.S. Geological Survey Professional Paper 1616, "A Geologic Guide to Wrangell-Saint Elias National Park and Preserve, Alaska," an exceptionally well illustrated and informative book by Gary R. Winkler, 2000. Geologically, the park consists of a collage of seven tectonostratigraphic terranes that formed to the south, in the equatorial Pacific Ocean, and rafted northward on oceanic plates, eventually accreting to Alaska and the North American continent. Each terrane features a distinct stratigraphy and is separated from neighboring terranes by major strike-slip or thrust faults.

  17. Monitoring the Diversity of Hunting Ants (Hymenoptera: Formicidae) on a Fragmented and Restored Andean Landscape.

    PubMed

    Herrera-Rangel, J; Jiménez-Carmona, E; Armbrecht, I

    2015-10-01

    Hunting ants are predators of organisms belonging to different trophic levels. Their presence, abundance, and diversity may reflect the diversity of other ants and contribute to evaluating habitat conditions. Between 2003 and 2005 the restoration of seven corridors in an Andean rural landscape of Colombia was performed. The restoration took place on lands that were formerly either forestry plantations or pasturelands. To evaluate restoration progress, hunting ants were intensely sampled for 7 yr, using sifted leaf litter with mini-Winkler extractors, and pitfall traps, in 21 plots classified into five vegetation types: forests, riparian forests, two types of restored corridors, and pasturelands. The ant communities were faithful to their habitat over time, and the main differences in ant composition, abundance, and richness were due to differences among land use types. The forests and riparian forests support 45% of the species in the landscape while the restored corridors contain between 8.3% and 25%. The change from forest to pasturelands represents a loss of 80% of the species. Ant composition in restored corridors was significantly different than in forests, but restored corridors on former forestry plantation soil retained 16.7% more species than restored corridors on former pasturelands. Ubiquitous hunting ants, Hypoponera opacior (Forel) and Gnamptogenys ca. andina, were usually associated with pastures and dominate restored corridors. Other cryptic, small, and specialized hunting ants are not present in the restored corridors. Results suggest that the history of land use is important for the biodiversity of hunting ants but also that corridors have not yet effectively contributed toward conservation goals. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Rock Magnetic Characterization of Fine Particles from Car Engines, Brake Pads and Tobacco: An Environmental Pilot Study

    NASA Astrophysics Data System (ADS)

    Herrero-Bervera, E.; Lopez, V. A.; Gerstnecker, K.; Swilley, B.

    2017-12-01

    Today, it is very well known that small magnetic particles are very harmful to human health. For the first time, we have conducted an environmental pilot study of fine magnetic particles on the island of Oahu, Hawaii, of particulate matter (PM) 60, PM 10, and PM 2.5. For a rock magnetic characterization, we performed low-field susceptibility versus temperature (k-T) experiments to determine the Curie points of small particles collected from exhaust pipes, as well as from brake pads, of four different types of car engines using octane ratings of 85, 87 and 92. The Curie point determinations are very well defined and range from 292 °C through 393 °C to 660 °C. In addition, we have conducted magnetic granulometry experiments on raw tobacco and burnt ashes as well as on the car engines and brake pads in question. The results of the experiments show ferro- and ferrimagnetic hysteresis loops with magnetic grain sizes ranging from superparamagnetic-multidomain (SP-MD), through multi-domain (MD), to pseudo-single domain (PSD), shown on the modified Day et al. diagram of Dunlop (2002). Thus far, the results we have obtained from this pilot study are in agreement with other studies of cigarette ashes from Bulgaria (Jordanova et al., 2005). Our results can be correlated to the traffic-related PM in Rome, Italy, where the SP fraction mainly occurs as a coating of MD particles that originated by localized stress in the oxidized outer shell surrounding the unoxidized core of magnetite-like grains, as published by Sagnotti and Winkler (2012).

  19. Wrinkling of a thin circular sheet bonded to a spherical substrate

    PubMed Central

    Kohn, Robert V.

    2017-01-01

    We consider a disc-shaped thin elastic sheet bonded to a compliant sphere. (Our sheet can slip along the sphere; the bonding controls only its normal displacement.) If the bonding is stiff (but not too stiff), the geometry of the sphere makes the sheet wrinkle to avoid azimuthal compression. The total energy of this system is the elastic energy of the sheet plus a (Winkler-type) substrate energy. Treating the thickness of the sheet h as a small parameter, we determine the leading-order behaviour of the energy as h tends to zero, and we give (almost matching) upper and lower bounds for the next-order correction. Our analysis of the leading-order behaviour determines the macroscopic deformation of the sheet; in particular, it determines the extent of the wrinkled region, and predicts the (non-trivial) radial strain of the sheet. The leading-order behaviour also provides insight about the length scale of the wrinkling, showing that it must be approximately independent of the distance r from the centre of the sheet (so that the number of wrinkles must increase with r). Our results on the next-order correction provide insight about how the wrinkling pattern should vary with r. Roughly speaking, they suggest that the length scale of wrinkling should not be exactly constant; rather, it should vary slightly, so that the number of wrinkles at radius r can be approximately piecewise constant in its dependence on r, taking values that are integer multiples of h^(-a) for some exponent a. This article is part of the themed issue ‘Patterning through instabilities in complex media: theory and applications’. PMID:28373380

  20. The effect of multi-directional nanocomposite materials on the vibrational response of thick shell panels with finite length and rested on two-parameter elastic foundations

    NASA Astrophysics Data System (ADS)

    Tahouneh, Vahid; Naei, Mohammad Hasan

    2016-03-01

    The main purpose of this paper is to investigate the effect of bidirectional continuously graded nanocomposite materials on the free vibration of thick shell panels resting on elastic foundations. The elastic foundation is considered as a Pasternak model, obtained by adding a shear layer to the Winkler model. Panels reinforced by randomly oriented, straight single-walled carbon nanotubes (SWCNTs) are considered. The volume fractions of SWCNTs are assumed to be graded not only in the radial direction, but also in the axial direction of the curved panel. This study presents a 2-D six-parameter power-law distribution for the CNT volume fraction of the 2-D continuously graded nanocomposite that gives designers a powerful tool for flexible design of structures under multi-functional requirements. The benefit of using a generalized power-law distribution is to illustrate and present useful results arising from symmetric, asymmetric and classic profiles. The material properties are determined in terms of local volume fractions and constituent material properties by the Mori-Tanaka scheme. The 2-D differential quadrature method, as an efficient numerical tool, is used to discretize the governing equations and to implement the boundary conditions. The fast rate of convergence of the method is shown, and results are compared against existing results in the literature. Some new results for natural frequencies of the shell are presented, which include the effects of the elastic coefficients of the foundation, boundary conditions, and material and geometrical parameters. The interesting results indicate that a nanocomposite volume fraction graded in two directions has a higher capability to reduce the natural frequency than conventional 1-D functionally graded nanocomposite materials.
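
A volume-fraction profile graded in two directions by power laws can be sketched as follows. This is a hypothetical two-exponent stand-in for the paper's six-parameter distribution, shown only to illustrate the idea of bidirectional grading; the function name and parameters are assumptions, not the authors' formulation:

```python
def cnt_volume_fraction(r, z, v_min, v_max, p_r=2.0, p_z=2.0):
    """Illustrative bidirectional power-law profile for a reinforcement
    volume fraction over normalized thickness (r) and axial (z)
    coordinates, both in [0, 1]:
        V(r, z) = v_min + (v_max - v_min) * r**p_r * z**p_z
    Exponents p_r, p_z control the grading in each direction."""
    return v_min + (v_max - v_min) * (r ** p_r) * (z ** p_z)
```

Varying p_r and p_z independently produces the kinds of symmetric, asymmetric and classic profiles the generalized distribution is meant to cover.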

  1. Neurocinematography in Pre-World War II Netherlands: The Magnus-Rademaker Collection.

    PubMed

    Koehler, Peter J; Lameris, Bregt; Hielscher, Eva

    2016-01-01

    Historical films made by neuroscientists have shown up in several countries during past years. Although originally supposed to have been lost, we recently found a collection of films produced between 1909 and 1940 by Rudolf Magnus (1873-1927), professor of pharmacology (Utrecht), and his student Gysbertus Rademaker (1887-1957), professor of physiology (1928, succeeding Willem Einthoven) and neurology (1945, both in Leiden). Both collections deal with the physiology of body posture by the equilibrium of reflex musculature contractions, for which experimental studies were done with animals (labyrinthectomies, cerebellectomies, and brainstem sections) and observations on patients. The films demonstrate the results of these studies. Moreover, there are films with babies showing tonic neck reflexes and moving images capturing adults with cerebellar symptoms following cerebellectomies for tumors and several other conditions. Magnus' studies resulted in his well-known Körperstellung (1924, "Body Posture") and Rademaker's research in his Das Stehen (1931, "Standing"). The films probably had an educative and scientific purpose. Magnus demonstrated his films at congresses, including the Eighth International Congress of Physiologists (Vienna, 1910), and Rademaker screened his moving images at meetings of the Amsterdam Neurologists Society (on several occasions, as reflected in the Winkler-Monakow correspondence and the Nederlands Tijdschrift voor Geneeskunde). Next to these purposes, the films were used to analyze movement, and series of images from the films were published in articles and books. The films are important historical sources that provide a portrait of the pre-World War II era in neuroscience, partly answering questions on how physicians dealt with patients and researchers with their laboratory animals. Moreover, the films confirm that cinematography was an important scientific tool in neuroscience research.

  2. A study on single lane-change manoeuvres for determining rearward amplification of multi-trailer articulated heavy vehicles with active trailer steering systems

    NASA Astrophysics Data System (ADS)

    Wang, Qiushi; He, Yuping

    2016-01-01

    The Society of Automotive Engineers issued a test procedure, SAE-J2179, to determine the rearward amplification (RA) of multi-trailer articulated heavy vehicles (MTAHVs). Building upon that procedure, the International Organization for Standardization released the test manoeuvres ISO-14791 for evaluating the directional performance of MTAHVs. For RA measures, ISO-14791 recommends two single lane-change manoeuvres: (1) an open-loop procedure with a single sine-wave steering input; and (2) a closed-loop manoeuvre with a single sine-wave lateral acceleration input. For an articulated vehicle with active trailer steering (ATS), the RA measure in lateral acceleration under the open-loop manoeuvre was not in good agreement with that under the closed-loop manoeuvre. This observation motivates research on the applicability of the two manoeuvres for the RA measures of MTAHVs with ATS. It has been reported that the transient response under the open-loop manoeuvre often leads to an asymmetric tractor lateral acceleration curve [Winkler CB, Fancher PS, Bareket Z, Bogard S, Johnson G, Karamihas S, Mink C. Heavy vehicle size and weight - test procedures for minimum safety performance standards. Final technical report, NHTSA, US DOT, contract DTNH22-87-D-17174, University of Michigan Transportation Research Institute, Report No. UMTRI-92-13; 1992]. To explore the effect of the transient response, a multiple-cycle sine-wave steering input (MCSSI) manoeuvre is proposed. Simulation demonstrates that the steady-state RA measures of an MTAHV with and without ATS under the MCSSI manoeuvre are in excellent agreement with those under the closed-loop manoeuvre. This indicates that, of the two manoeuvres specified by ISO-14791, the closed-loop manoeuvre is the more applicable for determining the RA measures of MTAHVs with ATS.
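Rearward amplification itself is a simple ratio of peak responses. A minimal sketch under the common definition as the ratio of peak absolute lateral accelerations (standards also define RA gains for other motion variables such as yaw rate); the function name and data are illustrative:

```python
def rearward_amplification(tractor_lat_acc, trailer_lat_acc):
    """RA: peak absolute lateral acceleration of the rearmost trailer
    divided by the peak absolute lateral acceleration of the tractor,
    both recorded over the same lane-change manoeuvre."""
    return max(abs(a) for a in trailer_lat_acc) / max(abs(a) for a in tractor_lat_acc)

# Example: trailer peaks at 6 m/s^2 while the tractor peaks at 2 m/s^2 -> RA = 3
ra = rearward_amplification([1.0, -2.0, 1.5], [3.0, -6.0, 2.0])
```

An RA above 1 means the rearmost unit swings wider than the tractor, which is precisely the instability the ATS systems discussed here are meant to suppress.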

  3. Heterogeneous nucleation in multi-component vapor on a partially wettable charged conducting particle. II. The generalized Laplace, Gibbs-Kelvin, and Young equations and application to nucleation.

    PubMed

    Noppel, M; Vehkamäki, H; Winkler, P M; Kulmala, M; Wagner, P E

    2013-10-07

    Based on the results of a previous paper [M. Noppel, H. Vehkamäki, P. M. Winkler, M. Kulmala, and P. E. Wagner, J. Chem. Phys. 139, 134107 (2013)], we derive a thermodynamically consistent expression for the reversible, or minimal, work needed to form a dielectric liquid nucleus of a new phase on a charged insoluble conducting sphere within a uniform macroscopic one- or multicomponent mother phase. The currently available model for ion-induced nucleation assumes complete spherical symmetry of the system, implying that the seed ion is immediately surrounded by the condensing liquid from all sides. We take a step further and treat more realistic geometries, where a cap-shaped liquid cluster forms on the surface of the seed particle, and derive the equilibrium conditions for such a cluster. The equality of the chemical potentials of each species between the nucleus and the vapor represents the condition of chemical equilibrium. The generalized Young equation, which relates the contact angle to surface tensions, surface excess polarizations, and line tension, and also contains the electrical contribution from the triple-line excess polarization, expresses the condition of thermodynamic equilibrium at the three-phase contact line. The generalized Laplace equation gives the condition of mechanical equilibrium at the vapor-liquid dividing surface: it relates the generalized pressures in the neighboring bulk phases at an interface, together with the surface tension, excess surface polarization, and the dielectric displacements in the neighboring phases, to the two principal radii of surface curvature and the curvatures of the equipotential surfaces in the neighboring phases at that point. We also re-express the generalized Laplace equation as a partial differential equation, which, along with the electrostatic Laplace equations for the bulk phases, determines the shape of a nucleus. 
We derive expressions that are suitable for calculations of the size and composition of a critical nucleus (generalized version of the classical Kelvin-Thomson equation).
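For orientation, the classical, spherically symmetric Kelvin-Thomson relation that this work generalizes can be written in one common form (neglecting the line-tension and surface-polarization terms treated in the paper; here S is the saturation ratio, v_l the molecular volume of the liquid, sigma the surface tension, q the seed charge, r the droplet radius, and epsilon_r the relative permittivity of the liquid):

```latex
k_B T \ln S \;=\; v_l \left( \frac{2\sigma}{r} \;-\; \frac{q^2}{32\pi^2 \varepsilon_0\, r^4} \left( 1 - \frac{1}{\varepsilon_r} \right) \right)
```

The first term is the ordinary Kelvin curvature correction; the second, charge-dependent term lowers the equilibrium vapor pressure over small droplets and is what makes ions effective nucleation seeds.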

  4. Regional climate change scenarios applied to viticultural zoning in Mendoza, Argentina

    NASA Astrophysics Data System (ADS)

    Cabré, María Fernanda; Quénol, Hervé; Nuñez, Mario

    2016-09-01

    Due to the importance of the winemaking sector in Mendoza, Argentina, the assessment of future scenarios for viticulture is of foremost relevance. In this context, it is important to understand how temperature increase and precipitation changes will impact grapes, because changes in grapevine phenology and in the suitability of wine-growing regions must be understood as indicators of climate change. The general objective is to classify the suitable areas for viticulture in Argentina under the current and future climate using MM5 regional climate change simulations. The spatial distribution of annual mean temperature, annual rainfall, and several bioclimatic indices has been analyzed for the present (1970-1989) and future (2080-2099) climate under the SRES A2 emission scenario. In general, according to the projected average growing season temperature and the Winkler index classification, the regional model estimates (i) a reduction of cool areas, (ii) a westward and southward displacement of intermediate and warm suitability areas, and (iii) the emergence of new suitability regions (hot and very hot areas) over Argentina. In addition, an increase of annual accumulated precipitation is projected over the center-west of Argentina. A similar pattern of change is modeled for the growing season, but with lower intensity. Furthermore, the evaluation of projected seasonal precipitation shows a slight precipitation increase over Cuyo and central Argentina in summer and a slight precipitation decrease over Cuyo and northern Patagonia in winter. Results show that Argentina has great potential for expansion into new suitable vineyard areas by the end of the twenty-first century, particularly due to the projected displacement of most currently suitable wine-growing regions to higher latitudes. 
Even though the main conclusions are based on a single global-regional model downscaling, this approach provides valuable information for implementing proper and diverse adaptation measures in the Argentinean viticultural regions. It is concluded that regional climate change simulations are an adequate methodology and, indeed, that the MM5 regional model is an appropriate tool for viticultural zoning in Mendoza, Argentina.
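The Winkler index used for this classification is a growing-degree-day total accumulated over the growing season. A minimal sketch, assuming the commonly cited 10 °C base temperature and the Celsius conversions of Amerine and Winkler's original Fahrenheit region boundaries (the study's exact thresholds and season definition may differ); function names are illustrative:

```python
def winkler_index(daily_mean_temps_c, base_c=10.0):
    """Sum of daily degree-days above the base temperature over the growing season."""
    return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

def winkler_region(degree_days_c):
    """Classify a Celsius degree-day total into the classical regions I-V
    (upper bounds ~1389, 1667, 1944, 2222 degree-days C, i.e. 2500-4000 F)."""
    for limit, region in [(1389, "I"), (1667, "II"), (1944, "III"), (2222, "IV")]:
        if degree_days_c <= limit:
            return region
    return "V"
```

Warming shifts a site's degree-day total upward, which is how a "cool" (Region I) area in the present climate can reclassify as "warm" or "hot" under the 2080-2099 projections.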

  5. Repeated tracer tests in a karst system with concentrated allogenic recharge (Johnsbachtal, Austria)

    NASA Astrophysics Data System (ADS)

    Birk, Steffen; Wagner, Thomas; Pauritsch, Marcus; Winkler, Gerfried

    2015-04-01

    The Johnsbachtal (Austria) is a high Alpine headwater catchment covering an area of approximately 65 km², which is equipped with a hydrometeorological monitoring network (Strasser et al. 2013). The catchment is composed of carbonate rocks and crystalline rocks belonging to the Northern Calcareous Alps and the Greywacke Zone. The largest spring within the catchment, the Etzbach spring, is bound to karstified carbonate rocks of the Greywacke Zone. A stream sink located at a distance of approximately 1 km from the spring was used as the injection point for repeated tracer tests in the years 2012, 2013, and 2014. In each case the tracer was recovered at the spring, indicating an allogenic recharge component from the crystalline parts of the catchment. The spring discharge at the times of the three tracer tests varied between approximately 0.3 and 0.6 m³/s. Likewise, the tracer travel times, and thus the flow velocities, were found to differ. Surprisingly, the largest tracer travel time (and thus the lowest flow velocity) was obtained in 2013, when the spring discharge was highest (0.6 m³/s). In addition, the flow velocities in 2012 and 2014 were found to be clearly different, although the spring discharge was similar (roughly 0.3 m³/s) in both tests. Thus, the tracer velocity appears not to be correlated with the spring discharge. Field observations indicate that this finding can potentially be attributed to complexities at both the injection location (e.g., plugging of injection points and thus different flow paths) and the sampling point (i.e., the spring, which is composed of several outlet points representing different subcatchments). References: Strasser, U., Marke, T., Sass, O., Birk, S., Winkler, G. (2013): John's creek valley: a mountainous catchment for long-term interdisciplinary human-environment system research in Upper Styria (Austria). Environmental Earth Sciences, doi: 10.1007/s12665-013-2318-y

  6. [Quality of and Attendance at Healthy Child Clinics in Germany].

    PubMed

    Weithase, Alexandra; Vogel, Mandy; Kiep, Henriette; Schwarz, Sarah; Meißner, Laura; Herrmann, Janine; Rieger, Kristin; Koch, Christiane; Schuster, Volker; Kiess, Wieland

    2017-04-01

    Background  For several years the German healthy child clinics program has been a highly appreciated preventive measure and is subject to constant development. However, attendance depends on the families' sociodemographic situation. Findings are documented in a medical checkup booklet (the so-called Gelbes Heft). Currently, there is no procedure to use the collected data for epidemiological purposes or to evaluate pediatric prevention measures in Germany. Methods  Between 2011 and 2016, we recruited 3480 study participants for our population-based cohort study LIFE Child in Leipzig. 90.6 % submitted their check-up booklets, which were subsequently scanned; the data were digitalized and transferred to a computerized form. Furthermore, data on social status (the so-called Winkler Index) were collected for each family using a structured questionnaire. The study population consisted of each family's oldest child for whom both data sets were available. Results  The transmission of data from the check-up booklets was time-consuming and cost-intensive because of the large datasets, uncoded diagnoses, and the need for trained employees to transcribe often illegible handwriting. Early diagnostic tests for children enjoy a high level of acceptance among all social classes. With increasing age, the attendance rate decreases gradually. Only 83 % of the population with a lower social status attend the U9 test. The documentation of diagnoses in the check-up booklets was implausible because the frequency fluctuated heavily between the different check-up time points. At less than 2 %, the documented rate of psychosocial difficulties in a child was particularly surprising. Conclusion  It is not possible to draw conclusions regarding the prevalence of target diseases from the frequency of documented findings in the check-up booklets. In order to make the data both comparable and evaluable, documentation must be digitalized in the future. 
© Georg Thieme Verlag KG Stuttgart · New York.

  7. [Concept of living conditions or social strata?--which approach is more suitable for describing unhealthy living circumstances of mothers?].

    PubMed

    Sperlich, S; Geyer, S

    2010-11-01

    The aim of this study was to identify living conditions associated with elevated health risks for mothers. Following up on the debate about the appropriate characterisation of social structure in modern societies, two different approaches, namely the 'social strata concept' and the 'concept of living conditions', were considered. Of particular interest was the question whether the concept of living conditions, which is based on a broader definition of social status, allowed a more precise description of health-related living circumstances. The study was based on clinical data from 6,094 inpatients in Mother-Child rehabilitation centres in Germany. Taking socioeconomic status, household characteristics and psychosocial stressors into account, seven typical living conditions of mothers could be identified by cluster analysis. Social status was measured by the Winkler Index. A moderate gradient of increasing health risks with decreasing social position could be found for psychological and bodily disabilities. The living-conditions approach revealed two living circumstances of mothers related to extremely poor health: i) dissatisfied single mothers with high degrees of psychosocial distress and a lack of social support, and ii) married mothers with conflicts within the family and a self-perceived lack of appreciation. In contrast to these findings, a pronounced social gradient could be found for overweight and smoking; here the concept of social strata revealed in part higher risks than the concept of living conditions. Overall, the integration of further social determinants allowed a more detailed insight into health-related living conditions, which are not solely determined by socioeconomic position. 
A global answer about the adequacy of the 'social strata concept' versus the 'concept of living conditions' for identifying unhealthy living conditions could not be given, because the relevance of both conceptual frameworks depended on the health outcomes considered. © Georg Thieme Verlag KG Stuttgart · New York.

  8. Regional climate change scenarios applied to viticultural zoning in Mendoza, Argentina.

    PubMed

    Cabré, María Fernanda; Quénol, Hervé; Nuñez, Mario

    2016-09-01

    Due to the importance of the winemaking sector in Mendoza, Argentina, the assessment of future scenarios for viticulture is of foremost relevance. In this context, it is important to understand how temperature increase and precipitation changes will impact grapes, because changes in grapevine phenology and in the suitability of wine-growing regions must be understood as indicators of climate change. The general objective is to classify the suitable areas for viticulture in Argentina under the current and future climate using MM5 regional climate change simulations. The spatial distribution of annual mean temperature, annual rainfall, and several bioclimatic indices has been analyzed for the present (1970-1989) and future (2080-2099) climate under the SRES A2 emission scenario. In general, according to the projected average growing season temperature and the Winkler index classification, the regional model estimates (i) a reduction of cool areas, (ii) a westward and southward displacement of intermediate and warm suitability areas, and (iii) the emergence of new suitability regions (hot and very hot areas) over Argentina. In addition, an increase of annual accumulated precipitation is projected over the center-west of Argentina. A similar pattern of change is modeled for the growing season, but with lower intensity. Furthermore, the evaluation of projected seasonal precipitation shows a slight precipitation increase over Cuyo and central Argentina in summer and a slight precipitation decrease over Cuyo and northern Patagonia in winter. Results show that Argentina has great potential for expansion into new suitable vineyard areas by the end of the twenty-first century, particularly due to the projected displacement of most currently suitable wine-growing regions to higher latitudes. 
Even though the main conclusions are based on a single global-regional model downscaling, this approach provides valuable information for implementing proper and diverse adaptation measures in the Argentinean viticultural regions. It is concluded that regional climate change simulations are an adequate methodology and, indeed, that the MM5 regional model is an appropriate tool for viticultural zoning in Mendoza, Argentina.

  9. Highlights of Astronomy

    NASA Astrophysics Data System (ADS)

    van der Hucht, Karel

    2008-02-01

    Preface Karel A. van der Hucht; Part I. Invited Discourses: Part II. Joint Discussions: 1. Particle acceleration - from Solar System to AGN Marian Karlicky and John C. Brown; 2. Pulsar emission and related phenomena Werner Becker, Janusz A. Gil and Bronislaw Rudak; 3. Solar activity regions and magnetic structure Debi Prasad Choudhary and Michal Sobotka; 4. The ultraviolet universe: Stars from birth to death Ana I. Gomez de Castro and Martin A. Barstow; 5. Calibrating the top of the stellar M-L relationship Claus Leitherer, Anthony F. J. Moat and Joachim Puls; 6. Neutron stars and black holes in star clusters Frederic A. Rasio; 7. The Universe at z > 6 Daniel Schaerer and Andrea Ferrara; 8. Solar and stellar activity cycles Klaus G. Strassmeier and Alexander Kosovichev; 9. Supernovae: One millennium after SN 1006 P. Frank Winkler, Wolfgang Hillebrandt and Brian P. Schmidt; 10. Progress in planetary exploration missions Guy J. Consolmagno; 11. Pre-solar grains as astrophysical tools Anja C. Andersen and John C. Lattanzio; 12. Long wavelength astrophysics T. Joseph W. Lazio and Namir E. Kassim; 13. Exploiting large surveys for galactic astronomy Christopher J. Corbally, Coryn A. L. Bailer-Jones, Sunetra Giridhar and Thomas H. Lloyd Evans; 14. Modeling dense stellar systems Alison I. Sills, Ladislav Subr and Simon F. Portegies Zwart; 15. New cosmology results from the Spitzer Space Telescope George Helou and David T. Frayer; 16. Nomenclature, precession and new models in fundamental astronomy Nicole Capitaine, Jan Vondrak & James L. Hilton; 17. Highlights of recent progress in seismology of the Sun and Sun-like stars John W. Leibacher and Michael J. Thompson; Part III. Special Sessions: SpS 1. Large astronomical facilities of the next decade Gerard F. Gilmore and Richard T. Schilizzi; SpS 2. Innovation in teaching and learning astronomy methods Rosa M. Ros and Jay M. Pasachoff; SpS 3. 
The Virtual Observatory in action: New science, new technology and next-generation facilities Nicholas A. Walton, Andrew Lawrence & Roy Williams; SpS 5. Astronomy for the developing world John B. Hearnshaw and Peter Martinez; SpS 6. Astronomical data management Raymond P. Norris; SpS 7. Astronomy in Antarctica Michael G. Burton; Author index.

  10. Highlights of Astronomy, Volume 14

    NASA Astrophysics Data System (ADS)

    van der Hucht, Karel

    2007-08-01

    Preface Karel A. van der Hucht; Part I. Invited Discourses: Part II. Joint Discussions: 1. Particle acceleration - from Solar System to AGN Marian Karlicky and John C. Brown; 2. Pulsar emission and related phenomena Werner Becker, Janusz A. Gil and Bronislaw Rudak; 3. Solar activity regions and magnetic structure Debi Prasad Choudhary and Michal Sobotka; 4. The ultraviolet universe: Stars from birth to death Ana I. Gomez de Castro and Martin A. Barstow; 5. Calibrating the top of the stellar M-L relationship Claus Leitherer, Anthony F. J. Moat and Joachim Puls; 6. Neutron stars and black holes in star clusters Frederic A. Rasio; 7. The Universe at z > 6 Daniel Schaerer and Andrea Ferrara; 8. Solar and stellar activity cycles Klaus G. Strassmeier and Alexander Kosovichev; 9. Supernovae: One millennium after SN 1006 P. Frank Winkler, Wolfgang Hillebrandt and Brian P. Schmidt; 10. Progress in planetary exploration missions Guy J. Consolmagno; 11. Pre-solar grains as astrophysical tools Anja C. Andersen and John C. Lattanzio; 12. Long wavelength astrophysics T. Joseph W. Lazio and Namir E. Kassim; 13. Exploiting large surveys for galactic astronomy Christopher J. Corbally, Coryn A. L. Bailer-Jones, Sunetra Giridhar and Thomas H. Lloyd Evans; 14. Modeling dense stellar systems Alison I. Sills, Ladislav Subr and Simon F. Portegies Zwart; 15. New cosmology results from the Spitzer Space Telescope George Helou and David T. Frayer; 16. Nomenclature, precession and new models in fundamental astronomy Nicole Capitaine, Jan Vondrak & James L. Hilton; 17. Highlights of recent progress in seismology of the Sun and Sun-like stars John W. Leibacher and Michael J. Thompson; Part III. Special Sessions: SpS 1. Large astronomical facilities of the next decade Gerard F. Gilmore and Richard T. Schilizzi; SpS 2. Innovation in teaching and learning astronomy methods Rosa M. Ros and Jay M. Pasachoff; SpS 3. 
The Virtual Observatory in action: New science, new technology and next-generation facilities Nicholas A. Walton, Andrew Lawrence & Roy Williams; SpS 5. Astronomy for the developing world John B. Hearnshaw and Peter Martinez; SpS 6. Astronomical data management Raymond P. Norris; SpS 7. Astronomy in Antarctica Michael G. Burton; Author index.

  11. Field guide to the Mesozoic accretionary complex along Turnagain Arm and Kachemak Bay, south-central Alaska

    USGS Publications Warehouse

    Bradley, Dwight C.; Kusky, Timothy M.; Karl, Susan M.; Haeussler, Peter J.

    1997-01-01

    Turnagain Arm, just east of Anchorage, provides a readily accessible, world-class cross section through a Mesozoic accretionary wedge. Nearly continuous exposures along the Seward Highway, the Alaska Railroad, and the shoreline of Turnagain Arm display the two main constituent units of the Chugach terrane: the McHugh Complex and Valdez Group. In this paper we describe seven bedrock geology stops along Turnagain Arm, and two others in the Chugach Mountains just to the north (Stops 1-7 and 9), which will be visited as part of the May 1997 field trip of the Alaska Geological Society. Outcrops along Turnagain Arm have already been described in two excellent guidebook articles (Clark, 1981; Winkler and others, 1984), both of which remain as useful and valid today as when first published. Since the early 1980s, studies along Turnagain Arm have addressed radiolarian ages of chert and conodont ages of limestone in the McHugh Complex (Nelson and others, 1986, 1987); geochemistry of basalt in the McHugh Complex (Nelson and Blome, 1991); post-accretion brittle faulting (Bradley and Kusky, 1990; Kusky and others, 1997); and the age and tectonic setting of gold mineralization (Haeussler and others, 1995). Highlights of these newer findings will be described both in the text below and in the stop descriptions. Superb exposures along the southeastern shore of Kachemak Bay show several other features of the McHugh Complex that are either absent or less convincing along Turnagain Arm. While none of these outcrops can be reached via the main road network, they are still reasonably accessible: all are within an hour by motorboat from Homer, seas permitting. Here, we describe seven outcrops along the shore of Kachemak Bay that we studied between 1989 and 1993 during geologic mapping of the Seldovia 1:250,000-scale quadrangle. These outcrops (Stops 61-67) will not be part of the 1997 itinerary, but are included here for the benefit of those who may wish to visit them later.

  12. Comparison of the performance and reliability of 18 lumped hydrological models driven by ECMWF rainfall ensemble forecasts: a case study on 29 French catchments

    NASA Astrophysics Data System (ADS)

    Velázquez, Juan Alberto; Anctil, François; Ramos, Maria-Helena; Perrin, Charles

    2010-05-01

    An ensemble forecasting system seeks to assess and communicate the uncertainty of hydrological predictions by proposing, at each time step, an ensemble of forecasts from which one can estimate the probability distribution of the predictand (the probabilistic forecast), in contrast with a single estimate of the flow, for which no distribution is obtainable (the deterministic forecast). In past years, efforts towards the development of probabilistic hydrological prediction systems were made through the adoption of ensembles of numerical weather predictions (NWPs). The additional information provided by the different available Ensemble Prediction Systems (EPS) was evaluated in a hydrological context in various case studies (see the review by Cloke and Pappenberger, 2009). For example, the European ECMWF-EPS was explored in case studies by Roulin et al. (2005), Bartholmes et al. (2005), Jaun et al. (2008), and Renner et al. (2009). The Canadian EC-EPS was also evaluated by Velázquez et al. (2009). Most of these case studies investigate the ensemble predictions of a given hydrological model, set up over a limited number of catchments. Uncertainty from weather predictions is assessed through the use of meteorological ensembles. However, uncertainty from the tested hydrological model and the statistical robustness of the forecasting system when coping with different hydro-meteorological conditions are less frequently evaluated. The aim of this study is to evaluate and compare the performance and the reliability of 18 lumped hydrological models applied to a large number of catchments in an operational ensemble forecasting context. Some of these models were evaluated in a previous study (Perrin et al. 2001) for their ability to simulate streamflow. Results demonstrated that very simple models can achieve a level of performance almost as high as (sometimes higher than) models with more parameters. 
In the present study, we focus on the ability of the hydrological models to provide reliable probabilistic forecasts of streamflow, based on ensemble weather predictions. The models were therefore adapted to run in a forecasting mode, i.e., to update initial conditions according to the last observed discharge at the time of the forecast, and to cope with ensemble weather scenarios. All models are lumped, i.e., the hydrological behavior is integrated over the spatial scale of the catchment, and run at daily time steps. The complexity of the tested models varies between 3 and 13 parameters. The models are tested on 29 French catchments. Daily streamflow time series extend over 17 months, from March 2005 to July 2006. Catchment areas range between 1470 km2 and 9390 km2, and represent a variety of hydrological and meteorological conditions. The 12 UTC 10-day ECMWF rainfall ensemble (51 members) was used, which led to daily streamflow forecasts for a 9-day lead time. In order to assess the performance and reliability of the hydrological ensemble predictions, we computed the Continuous Ranked Probability Score (CRPS) (Matheson and Winkler, 1976), as well as the reliability diagram (e.g. Wilks, 1995) and the rank histogram (Talagrand et al., 1999). Since the ECMWF deterministic forecasts are also available, the performance of the hydrological forecasting systems was also evaluated by comparing the deterministic score (MAE) with the probabilistic score (CRPS). The results obtained for the 18 hydrological models and the 29 studied catchments are discussed in the perspective of improving the operational use of ensemble forecasting in hydrology. References Bartholmes, J. and Todini, E.: Coupling meteorological and hydrological models for flood forecasting, Hydrol. Earth Syst. Sci., 9, 333-346, 2005. Cloke, H. and Pappenberger, F.: Ensemble Flood Forecasting: A Review. Journal of Hydrology 375 (3-4): 613-626, 2009. 
Jaun, S., Ahrens, B., Walser, A., Ewen, T., and Schär, C.: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Nat. Hazards Earth Syst. Sci., 8, 281-291, 2008. Matheson, J. E. and Winkler, R. L.: Scoring rules for continuous probability distributions, Manage Sci., 22, 1087-1096, 1976. Perrin, C., Michel C. and Andréassian,V. Does a large number of parameters enhance model performance? Comparative assessment of common catchment model structures on 429 catchments, J. Hydrol., 242, 275-301, 2001. Renner, M., Werner, M. G. F., Rademacher, S., and Sprokkereef, E.: Verification of ensemble flow forecast for the River Rhine, J. Hydrol., 376, 463-475, 2009. Roulin, E. and Vannitsem, S.: Skill of medium-range hydrological ensemble predictions, J. Hydrometeorol., 6, 729-744, 2005. Talagrand, O., Vautard, R., and Strauss, B.: Evaluation of the probabilistic prediction systems, in: Proceedings, ECMWF Workshop on Predictability, Shinfield Park, Reading, Berkshire, ECMWF, 1-25, 1999. Velázquez, J.A., Petit, T., Lavoie, A., Boucher M.-A., Turcotte R., Fortin V., and Anctil, F. : An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrol. Earth Syst. Sci., 13, 2221-2231, 2009. Wilks, D. S.: Statistical Methods in the Atmospheric Sciences, Academic Press, San Diego, CA, 465 pp., 1995.
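The CRPS of Matheson and Winkler has a simple empirical estimator when the forecast distribution is represented by a finite ensemble: CRPS = E|X - y| - 0.5 E|X - X'|, with the expectations taken over the ensemble members X, X' and y the observation. A minimal sketch with an illustrative function name:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of one ensemble forecast against one observation:
    mean absolute error of the members minus half the mean pairwise
    member spread. Lower is better; units are those of the variable."""
    x = np.asarray(members, dtype=float)
    spread = np.abs(x[:, None] - x[None, :]).mean()   # E|X - X'| over all pairs
    return np.abs(x - obs).mean() - 0.5 * spread
```

For a one-member "ensemble" the spread term vanishes and the CRPS reduces to the absolute error, which is why the MAE of the deterministic ECMWF forecast is directly comparable with the CRPS of the probabilistic one, as done in this study.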

  13. Water: from the source to the treatment plant

    NASA Astrophysics Data System (ADS)

    Marquet, V.; Baude, I.

    2012-04-01

    As a biology and geology teacher, I have worked on water, from the source to the treatment plant, with pupils between 14 and 15 years old. Lesson 1. Introduction: the water in Vienna. Aim: The pupils have to consider why water is so important in Vienna (history, economy, etc.). Activities: Brainstorming about where and why we use water every day and why the water is different in Vienna. Lesson 2. Soil, rock and water. Aim: Permeability/impermeability of the different layers of earth. Activities: The pupils have to measure the permeability and porosity of different stones: granite, clay, sand, carbonate and basalt. Lesson 3. Relationship between water's ion composition and the stone's mineralogy. Aim: Each water source has the same ion composition as the soil the water comes from. Activities: Comparison between the stone's mineralogy and the ions in water. The pupils had a diagram with the ions of granite, clay, sand, carbonate and basalt and the labels of different waters. They had to make hypotheses about the type of soil the water came from, verify them with a geological map of France and Austria, and make a profile of the area where the water comes from, confirming or rejecting their hypotheses. Lesson 4. Water catchment and reservoir rocks. Aim: Construction of a confined aquifer and artesian well. Activities: With sand, clay and a basin, they have to model a confined aquifer and make an artesian well, using what they learned in lesson 2. Lesson 5. Organic material breakdown and its effect on oxygen levels in an aquatic ecosystem. Aim: Evaluate the relationship between oxygen levels and the amount of organic matter in an aquatic ecosystem; explain the relationship between oxygen levels, bacteria and the breakdown of organic matter using an indicator solution. Activities: Put 5 ml of a different water sample in each tube with 20 drops of methylene blue. Observe the tubes after 1 month. Lesson 6. 
Visit to the biggest water treatment plant in Europe, in Vienna. Lesson 7. Water quality monitoring: biochemical oxygen demand. Aim: Measure the quantity of oxygen used by microorganisms in the oxidation of organic matter in different waters: downstream and upstream of polluting refuse, and after addition of glucose, milk or humus to the water. Activities: After dissolution of the different samples of water, they measure the dissolved oxygen with the Winkler method.
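The biochemical oxygen demand measured in Lesson 7 follows directly from the Winkler dissolved-oxygen readings taken before and after incubation. A minimal sketch, assuming a simple dilution correction (the lesson as described may use undiluted samples); the function name is illustrative:

```python
def bod_mg_per_l(do_initial, do_final, dilution_fraction=1.0):
    """Biochemical oxygen demand (mg/L): dissolved oxygen consumed during
    incubation, scaled up when the sample was diluted before incubation.
    do_initial, do_final: Winkler titration results in mg/L;
    dilution_fraction: sample volume / total bottle volume (1.0 = undiluted)."""
    return (do_initial - do_final) / dilution_fraction
```

A sample taken downstream of polluting refuse, or spiked with glucose or milk, consumes more oxygen during incubation and therefore shows a larger BOD than the upstream control.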

  14. Long-Term Air Pollution and Traffic Noise Exposures and Mild Cognitive Impairment in Older Adults: A Cross-Sectional Analysis of the Heinz Nixdorf Recall Study.

    PubMed

    Tzivian, Lilian; Dlugaj, Martha; Winkler, Angela; Weinmayr, Gudrun; Hennig, Frauke; Fuks, Kateryna B; Vossoughi, Mohammad; Schikowski, Tamara; Weimar, Christian; Erbel, Raimund; Jöckel, Karl-Heinz; Moebus, Susanne; Hoffmann, Barbara

    2016-09-01

    Mild cognitive impairment (MCI) describes the intermediate state between normal cognitive aging and dementia. Adverse effects of air pollution (AP) on cognitive functions have been proposed, but investigations of simultaneous exposure to noise are scarce. We analyzed the cross-sectional associations of long-term exposure to AP and traffic noise with overall MCI and amnestic (aMCI) and nonamnestic (naMCI) MCI. At the second examination of the population-based Heinz Nixdorf Recall study, cognitive assessment was completed in 4,086 participants who were 50-80 years old. Of these, 592 participants were diagnosed as having MCI (aMCI, n = 309; naMCI, n = 283) according to previously published criteria using five neuropsychological subtests. We assessed long-term residential concentrations for size-fractionated particulate matter (PM) and nitrogen oxides with land use regression, and for traffic noise [weighted 24-hr (LDEN) and night-time (LNIGHT) means]. Logistic regression models adjusted for individual risk factors were calculated to estimate the association of environmental exposures with MCI in single- and two-exposure models. Most air pollutants and traffic noise were associated with overall MCI and aMCI. For example, an interquartile range increase in PM2.5 and a 10 A-weighted decibel [dB(A)] increase in LDEN were associated with overall MCI as follows [odds ratio (95% confidence interval)]: 1.16 (1.05, 1.27) and 1.40 (1.03, 1.91), respectively, and with aMCI as follows: 1.22 (1.08, 1.38) and 1.53 (1.05, 2.24), respectively. In two-exposure models, AP and noise associations were attenuated [e.g., for aMCI, PM2.5 1.13 (0.98, 1.30) and LDEN 1.46 (1.11, 1.92)]. Long-term exposures to air pollution and traffic noise were positively associated with MCI, mainly with the amnestic subtype. 
Tzivian L, Dlugaj M, Winkler A, Weinmayr G, Hennig F, Fuks KB, Vossoughi M, Schikowski T, Weimar C, Erbel R, Jöckel KH, Moebus S, Hoffmann B, on behalf of the Heinz Nixdorf Recall study Investigative Group. 2016. Long-term air pollution and traffic noise exposures and mild cognitive impairment in older adults: a cross-sectional analysis of the Heinz Nixdorf Recall Study. Environ Health Perspect 124:1361-1368; http://dx.doi.org/10.1289/ehp.1509824.

  15. Environmental magnetism and magnetic mapping of urban metallic pollution (Paris, France)

    NASA Astrophysics Data System (ADS)

    Isambert, Aude; Franke, Christine; Macouin, Mélina; Rousse, Sonia; Philip, Aurélio; de Villeneuve, Sybille Henry

    2017-04-01

    Airborne pollution in dense urban areas is nowadays a subject of major concern. Fine particulate pollution events are ever more frequent and represent not only an environmental and health issue but also a real economic one. In the urban atmosphere, the so-called PM2.5 fraction (particulate matter < 2.5 μm in diameter) and the ultrafine fraction (< 100 nm), largely produced by combustion, cause many adverse health effects. Environmental magnetic studies of airborne PM collected on air filters or plants have demonstrated their potential to trace metallic pollution and determine its sources (Sagnotti and Winkler, 2012). In this study, we report on magnetic measurements of traffic-related airborne PM in the city of Paris, France. Two distinct environments were sampled and analyzed along the Seine River: the aquatic environment, by studying fluvial bank and river bed sediments, and the atmospheric environment, by examining magnetic particles trapped in adjacent tree barks (Platanus hispanica). About 50 sediment samples and 350 bark samples were collected and analyzed to determine their magnetic properties (susceptibility, hysteresis parameters, IRM, frequency-dependent susceptibility) and to estimate the presence and spatial concentration of superparamagnetic or multi-domain particles for each sample type. The bark results allow us to propose a high-spatial-resolution map (< 50 m) of magnetic susceptibility and frequency-dependent susceptibility along a 30 km profile following the river. Variations in that profile may be linked to atmospheric metallic pollution. In addition, the sampling of banks and riverbed sediments of the Seine allows a global estimate of the anthropogenic versus detrital and biogenic input in the city of Paris. The first results presented here show a general increase in the concentration of magnetic particles from upstream to downstream of Paris, probably linked to urban pollution, as previously observed for suspended particulate matter (Franke et al., 
2009; Kayvantash, 2016). Sagnotti, L., & Winkler, A. (2012). On the magnetic characterization and quantification of the superparamagnetic fraction of traffic-related urban airborne PM in Rome, Italy. Atmospheric Environment, 59, 131-140. Franke, C., Kissel, C., Robin, E., Bonté, P., & Lagroix, F. (2009). Magnetic particle characterization in the Seine river system: Implications for the determination of natural versus anthropogenic input. Geochemistry, Geophysics, Geosystems, 10(8). Kayvantash, D. (2016). Characterization of ferruginous particles in the Seine River using environmental magnetism. Ph.D. thesis, MINES ParisTech/LSCE, France.

  16. OXYGEN-RICH SUPERNOVA REMNANT IN THE LARGE MAGELLANIC CLOUD

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This is a NASA Hubble Space Telescope image of the tattered debris of a star that exploded 3,000 years ago as a supernova. This supernova remnant, called N132D, lies 169,000 light-years away in the satellite galaxy, the Large Magellanic Cloud. A Hubble Wide Field Planetary Camera 2 image of the inner regions of the supernova remnant shows the complex collisions that take place as fast moving ejecta slam into cool, dense interstellar clouds. This level of detail in the expanding filaments could only be seen previously in much closer supernova remnants. Now, Hubble's capabilities extend the detailed study of supernovae out to the distance of a neighboring galaxy. Material thrown out from the interior of the exploded star at velocities of more than four million miles per hour (2,000 kilometers per second) plows into neighboring clouds to create luminescent shock fronts. The blue-green filaments in the image correspond to oxygen-rich gas ejected from the core of the star. The oxygen-rich filaments glow as they pass through a network of shock fronts reflected off dense interstellar clouds that surrounded the exploded star. These dense clouds, which appear as reddish filaments, also glow as the shock wave from the supernova crushes and heats the clouds. Supernova remnants provide a rare opportunity to observe directly the interiors of stars far more massive than our Sun. The precursor star to this remnant, which was located slightly below and left of center in the image, is estimated to have been 25 times the mass of our Sun. These stars 'cook' heavier elements through nuclear fusion, including oxygen, nitrogen, carbon, iron etc., and the titanic supernova explosions scatter this material back into space where it is used to create new generations of stars. This is the mechanism by which the gas and dust that formed our solar system became enriched with the elements that sustain life on this planet. 
Hubble spectroscopic observations will be used to determine the exact chemical composition of this nuclear-processed material, and thereby test theories of stellar evolution. The image shows a region of the remnant 50 light-years across. The supernova explosion should have been visible from Earth's southern hemisphere around 1,000 B.C., but there are no known historical records that chronicle what would have appeared as a 'new star' in the heavens. This 'true color' picture was made by superposing images taken on 9-10 August 1994 in three of the strongest optical emission lines: singly ionized sulfur (red), doubly ionized oxygen (green), and singly ionized oxygen (blue). Photo credit: Jon A. Morse (STScI) and NASA. Investigating team: William P. Blair (PI; JHU), Michael A. Dopita (MSSSO), Robert P. Kirshner (Harvard), Knox S. Long (STScI), Jon A. Morse (STScI), John C. Raymond (SAO), Ralph S. Sutherland (UC-Boulder), and P. Frank Winkler (Middlebury). Image files in GIF and JPEG format may be accessed via anonymous ftp from oposite.stsci.edu in /pubinfo: GIF: /pubinfo/GIF/N132D.GIF JPEG: /pubinfo/JPEG/N132D.jpg The same images are available via World Wide Web from links in URL http://www.stsci.edu/public.html.

  17. Interannual variability of Dissolved Oxygen values around the Balearic Islands

    NASA Astrophysics Data System (ADS)

    Balbín, R.; Aparicio, A.; López-Jurado, J. L.; Flexas, M. M.

    2012-04-01

    Periodic movements of the trawl fishing fleet at Mallorca Island suggest a seasonal variability of the demersal resources, associated with hydrodynamic variability. The area where these commercial fisheries operate extends from the north to the southeast of the Mallorca channel, between Mallorca and Ibiza Islands. It is thus affected by the different hydrodynamic conditions of the two sub-basins of the western Mediterranean (the Balearic and the Algerian sub-basins), which have different geomorphologic and hydrodynamic characteristics. To characterize this hydrodynamic variability, hydrographic data collected around the Balearic Islands since 2001 with CTDs were analyzed [1]. Hydrographic parameters were processed according to the standard protocols. Dissolved oxygen (DO) was calibrated onboard using the Winkler method. Temperature and salinity were used to characterize the different water masses. In the Western Mediterranean, the maximum values of DO in the water column are observed in the surface waters during winter (> 6.0 ml/l), when these waters, in contact with the atmosphere, absorb large amounts of oxygen, favored by low winter temperatures and notable turbulence. Later in the spring, the gradual increase of temperature and the onset of stratification and biological activity lead to a decrease of oxygen concentration, mainly in surface waters. During summer, these values continue to decrease in the surface mixed layer. Below it, and due to the biological activity, an increase is observed, giving rise to the absolute maximum of this parameter (> 6.5 ml/l). During autumn, the atmospheric forcing breaks the stratification, producing a homogenization of surface water; at this moment, DO shows intermediate values. Below the surface waters, at about 200 m, a relative maximum corresponding to the seasonal Winter Intermediate Waters (WIW) can be observed.
Intermediate waters, between 400 and 600 m, reveal an oxygen minimum (4.0 ml/l) associated with the Levantine Intermediate Waters (LIW); underneath, the Western Mediterranean Deep Waters (WMDW) show a slight increase of these values (> 4.5 ml/l). The interannual variability of DO in the Balearic and Algerian sub-basins and in the different water masses will be presented. A systematic difference (> 0.10 ml/l) is observed at intermediate and deep waters between the oxygen content in the Balearic and Algerian sub-basins. This could be explained in terms of the longer path these water masses have to cover around the Mallorca and Menorca Islands, which implies a longer residence time and more consumption as a result of respiration and decay of organic matter. During some campaigns, minimum DO values (≈ 3.8 ml/l) were found in this area, which are smaller than the values usually reported for the Mediterranean [2, 3, 4]. Different possible causes, such as the influence of the Eastern Mediterranean Transient, the reported increase of surface temperature, or simply interannual variability, will be discussed. [1] J. L. López-Jurado, J. M. García-Lafuente, L. Cano, et al., Oceanologica Acta, vol. 18, no. 2, 1995. [2] T. Packard, H. Minas, B. Coste, R. Martinez, M. Bonin, J. Gostan, P. Garfield, J. Christensen, Q. Dortch, M. Minas, et al., Deep Sea Research Part A. Oceanographic Research Papers, vol. 35, no. 7, 1988. [3] B. Manca, M. Burca, A. Giorgetti, C. Coatanoan, M. Garcia, and A. Iona, Journal of Marine Systems, vol. 48, no. 1-4, 2004. [4] A. Miller, "Mediterranean Sea atlas of temperature, salinity, and oxygen: profiles and data from cruises of RV Atlantis and RV Chain," tech. rep., Woods Hole Oceanographic Institution, Massachusetts, 1970.
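    The Winkler method referenced above determines DO by titrating the iodine liberated in the fixed sample with thiosulfate; converting the titration volume to an oxygen concentration is a one-line formula. A minimal sketch of that conversion (the thiosulfate normality and volumes below are illustrative assumptions, not values from this campaign):

```python
def winkler_do_mg_per_l(v_thio_ml, n_thio, v_sample_ml):
    """Dissolved oxygen (mg/l) from a Winkler titration.

    8000 = equivalent weight of O2 (8 g/eq) expressed in mg,
    scaled per litre of sample.
    """
    return v_thio_ml * n_thio * 8000.0 / v_sample_ml

def mg_per_l_to_ml_per_l(do_mg_l):
    # 1 ml/l of O2 corresponds to about 1.42903 mg/l
    return do_mg_l / 1.42903

# Illustrative: 5.4 ml of 0.02 N thiosulfate for a 100 ml sample
do_mg = winkler_do_mg_per_l(5.4, 0.02, 100.0)   # 8.64 mg/l
do_ml = mg_per_l_to_ml_per_l(do_mg)             # about 6.05 ml/l
```

Expressed in ml/l, such a titration lands in the > 6.0 ml/l range quoted above for winter surface waters.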

  18. One year of Seaglider dissolved oxygen concentration profiles at the PAP site

    NASA Astrophysics Data System (ADS)

    Binetti, Umberto; Kaiser, Jan; Heywood, Karen; Damerell, Gillian; Rumyantseva, Anna

    2015-04-01

    Oxygen is one of the most important variables measured in oceanography, influenced by both physical and biological factors. During the OSMOSIS project, seven Seagliders were used in three consecutive missions to measure a multidisciplinary suite of parameters at high frequency in the top 1000 m of the water column for one year, from September 2012 to September 2013. The gliders were deployed at the PAP time series station (nominally at 49° N 16.5° W) and surveyed the area following a butterfly-shaped path. Oxygen concentration was measured by Aanderaa optodes and calibrated against ship CTD O2 profiles during five deployment and recovery cruises, which were in turn calibrated by Winkler titration of discrete samples. The oxygen-rich mixed layer deepens in fall and winter and becomes richer in oxygen as the temperature decreases. The spring bloom did not happen as expected; instead, a series of small blooms was measured throughout spring and early summer. During the summer the mixed layer became very shallow and oxygen concentrations decreased. A Deep Oxygen Maximum (DOM) developed along with a deep chlorophyll maximum during the summer and was located just below the mixed layer. At this depth, phytoplankton had favourable light and nutrient conditions to grow and produce oxygen, which was not subject to immediate outgassing. The oxygen concentration in the DOM was not constant: it decreased, then increased again until the end of the mission. Intrusions of oxygen-rich water are also visible throughout the mission. These are probably due to mesoscale events, through the horizontal transport of oxygen and/or nutrients that can enhance productivity, particularly at the edges of fronts. We calculate net community production (NCP) by analysing the variation in oxygen with time. Two methods have been proposed. The classical oxygen budget method assumes that changes in oxygen are due to the sum of air-sea flux, isopycnal advection, diapycnal mixing and NCP.
ERA-Interim provides climatological data to calculate air-sea gas exchange fluxes based on wind-speed parameterisations of the gas exchange coefficient. The second method exploits the high frequency of the measurements to determine the rate of oxygen increase during daylight hours and thus measure NCP. Together with the O2 concentration decrease during the night (due to community respiration), this method also allows us to derive gross oxygen production rates. The results of these two methods are compared.
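    The arithmetic of the second (diel) method can be sketched with synthetic data: fit the daytime oxygen rise to get the net daytime rate, fit the night-time decline to get community respiration, and sum the two for a gross production rate. A toy illustration (the hourly rates and the 06:00-18:00 daylight window are invented for the example, not glider data):

```python
import numpy as np

# Synthetic hourly O2 (mmol m^-3): production by day, respiration by night
hours = np.arange(24)
daylight = (hours >= 6) & (hours < 18)
rate = np.where(daylight, 0.5, -0.3)      # invented hourly rates
o2 = 250.0 + np.cumsum(rate)

# Slopes of linear fits to the day and (late) night segments
day_rate = np.polyfit(hours[daylight], o2[daylight], 1)[0]   # ~ +0.5
night = hours >= 18
night_rate = np.polyfit(hours[night], o2[night], 1)[0]       # ~ -0.3

respiration = -night_rate                  # community respiration rate
gross_production = day_rate + respiration  # gross O2 production ~ 0.8
```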

  19. Long-Term Air Pollution and Traffic Noise Exposures and Mild Cognitive Impairment in Older Adults: A Cross-Sectional Analysis of the Heinz Nixdorf Recall Study

    PubMed Central

    Tzivian, Lilian; Dlugaj, Martha; Winkler, Angela; Weinmayr, Gudrun; Hennig, Frauke; Fuks, Kateryna B.; Vossoughi, Mohammad; Schikowski, Tamara; Weimar, Christian; Erbel, Raimund; Jöckel, Karl-Heinz; Moebus, Susanne; Hoffmann, Barbara

    2016-01-01

    Background: Mild cognitive impairment (MCI) describes the intermediate state between normal cognitive aging and dementia. Adverse effects of air pollution (AP) on cognitive functions have been proposed, but investigations of simultaneous exposure to noise are scarce. Objectives: We analyzed the cross-sectional associations of long-term exposure to AP and traffic noise with overall MCI and amnestic (aMCI) and nonamnestic (naMCI) MCI. Methods: At the second examination of the population-based Heinz Nixdorf Recall study, cognitive assessment was completed in 4,086 participants who were 50–80 years old. Of these, 592 participants were diagnosed as having MCI (aMCI, n = 309; naMCI, n = 283) according to previously published criteria using five neuropsychological subtests. We assessed long-term residential concentrations for size-fractioned particulate matter (PM) and nitrogen oxides with land use regression, and for traffic noise [weighted 24-hr (LDEN) and night-time (LNIGHT) means]. Logistic regression models adjusted for individual risk factors were calculated to estimate the association of environmental exposures with MCI in single- and two-exposure models. Results: Most air pollutants and traffic noise were associated with overall MCI and aMCI. For example, an interquartile range increase in PM2.5 and a 10 A-weighted decibel [dB(A)] increase in LDEN were associated with overall MCI as follows [odds ratio (95% confidence interval)]: 1.16 (1.05, 1.27) and 1.40 (1.03, 1.91), respectively, and with aMCI as follows: 1.22 (1.08, 1.38) and 1.53 (1.05, 2.24), respectively. In two-exposure models, AP and noise associations were attenuated [e.g., for aMCI, PM2.5 1.13 (0.98, 1.30) and LDEN 1.46 (1.11, 1.92)]. Conclusions: Long-term exposures to air pollution and traffic noise were positively associated with MCI, mainly with the amnestic subtype. 
Citation: Tzivian L, Dlugaj M, Winkler A, Weinmayr G, Hennig F, Fuks KB, Vossoughi M, Schikowski T, Weimar C, Erbel R, Jöckel KH, Moebus S, Hoffmann B, on behalf of the Heinz Nixdorf Recall study Investigative Group. 2016. Long-term air pollution and traffic noise exposures and mild cognitive impairment in older adults: a cross-sectional analysis of the Heinz Nixdorf Recall Study. Environ Health Perspect 124:1361–1368; http://dx.doi.org/10.1289/ehp.1509824 PMID:26863687
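    The odds ratios above are reported per interquartile-range (IQR) increase in exposure: given a logistic regression coefficient per unit of exposure, the OR for any increment is obtained by scaling the coefficient before exponentiating, and a Wald confidence interval follows the same way. A sketch of that conversion (the coefficient, standard error, and IQR below are illustrative assumptions chosen to land near the reported PM2.5 estimate, not values published by the study):

```python
import math

def odds_ratio(beta_per_unit, increment):
    """Odds ratio for an `increment`-sized rise in exposure."""
    return math.exp(beta_per_unit * increment)

def odds_ratio_ci(beta_per_unit, se_per_unit, increment, z=1.96):
    """Wald 95% confidence interval for the same increment."""
    return (math.exp((beta_per_unit - z * se_per_unit) * increment),
            math.exp((beta_per_unit + z * se_per_unit) * increment))

# Illustrative: beta = 0.074 per ug/m^3 PM2.5, IQR = 2.0 ug/m^3
print(round(odds_ratio(0.074, 2.0), 2))                           # 1.16
print(tuple(round(x, 2) for x in odds_ratio_ci(0.074, 0.024, 2.0)))
```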

  20. Nutrient dynamics and primary production in a pristine coastal mangrove ecosystem: Andaman Islands, India

    NASA Astrophysics Data System (ADS)

    Jenkins, E. N.; Nickodem, K.; Siemann, A. L.; Hoeher, A.; Sundareshwar, P. V.; Ramesh, R.; Purvaja, R.; Banerjee, K.; Manickam, S.; Haran, H.

    2012-12-01

    Mangrove ecosystems play a key role in supporting coastal food webs and nutrient cycles in the coastal zone. Their strategic position between the land and the sea makes them important sites for land-ocean interaction. As part of an Indo-US summer field course we investigated changes in the water chemistry of a pristine mangrove creek located at Wright Myo in the Andaman Islands, India. This study was conducted during the wet season (June 2012) to evaluate the influence of the coastal mangrove wetlands on the water quality and productivity in adjoining pelagic waters. Over a full tidal cycle spanning approximately 24 hrs, we measured nutrient concentrations and other ancillary parameters (e.g. dissolved oxygen, turbidity, salinity) hourly to evaluate water quality changes in incoming and ebbing tides. Nutrient analyses gave the following concentration ranges (μM): nitrite 0.2-0.9, nitrate 2.0-11.5, ammonium 1.3-7.5, dissolved inorganic phosphate 0.7-2.8. The dissolved inorganic nitrogen to dissolved inorganic phosphate (DIN/DIP) ratio was very low relative to the optimal ratio, suggesting growth is nitrogen limited. In addition, we conducted primary production assays to investigate the factors that controlled primary production in this pristine creek. The experiment was carried out in situ using the Winkler method at low and high tide. Four-hour incubations of light and dark bottles, representing a fixed control, a non-fertilized treatment, and treatments fertilized with nitrate or with phosphate, enabled the measurement of both net oxygen production and dark respiration. The low tide experiment suggests the ecosystem is heterotrophic, because the oxygen measured in the light bottles was consistently less than that in the dark bottles. This result may be an experimental artifact of placing the glass bottles in the sun for too long prior to incubation, potentially leading to photolysis of large organic molecules in the light bottles.
The high tide experiment also displayed counterintuitive results because less oxygen was produced with nutrient addition relative to the unfertilized samples. Furthermore, community respiration increased slightly in the presence of nitrogen (N) but increased more so in the presence of phosphorus (P), indicating P limits respiration. N and P did not stimulate production but did stimulate consumption. Despite the low DIN/DIP ratio suggesting a N limitation in the system, N addition failed to stimulate primary production. Production at Wright Myo creek is therefore not limited by nutrients but is controlled by other conditions, possibly by a rain flushing event that occurred prior to the high tide primary production experiment or by light availability. Because light must be able to penetrate through the water column to drive photosynthesis, low light availability and high turbidity may have limited production.
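    The light/dark-bottle arithmetic behind these assays is compact: the light bottle gives net community production, the dark bottle gives respiration, and their sum gives gross production. A sketch with invented oxygen values (units and numbers are illustrative only, not measurements from this creek):

```python
def bottle_rates(o2_initial, o2_light, o2_dark, hours):
    """Rates (O2 concentration units per hour) from a light/dark bottle pair."""
    ncp = (o2_light - o2_initial) / hours    # net community production
    resp = (o2_initial - o2_dark) / hours    # dark (community) respiration
    gpp = ncp + resp                         # gross primary production
    return ncp, resp, gpp

# Illustrative 4 h incubation, mg O2 / l
ncp, resp, gpp = bottle_rates(6.0, 5.8, 5.2, 4.0)
# ncp = -0.05: net O2 loss even in the light bottle, mirroring the
# heterotrophic signal reported at low tide
```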

  1. EDITORIAL: Colloidal dispersions in external fields Colloidal dispersions in external fields

    NASA Astrophysics Data System (ADS)

    Löwen, Hartmut

    2012-11-01

    Colloidal dispersions have long been proven to be pivotal model systems for equilibrium phase transitions such as crystallization, melting and the liquid-gas transition. The last decades have revealed that this is also true for nonequilibrium phenomena. In fact, the fascinating possibility to track the individual trajectories of colloidal particles has greatly advanced our understanding of collective behaviour in classical many-body systems and has helped to reveal the underlying physical principles of the glass transition, crystal nucleation, and interfacial dynamics (to name just a few typical nonequilibrium effects). External fields can be used to bring colloids out of equilibrium in a controlled way. Different kinds of external fields can be applied to colloidal dispersions, namely shear flow, electric, magnetic and laser-optical fields, and confinement. Typical research areas can be sketched with the by now traditional complexity diagram (figure 1). The complexity of the colloidal system itself, as embodied in its statistical degrees of freedom, is shown on the x-axis, while the complexity of the problem posed, namely bulk, an inhomogeneity in equilibrium, steady-state nonequilibrium and fully time-dependent nonequilibrium, is shown on the y-axis. The different external fields which can be imposed are indicated by the different hatched areas. Figure 1. Diagram of complexity for colloidal dispersions in external fields: while the x-axis shows the complexity of the system, the y-axis shows the complexity of the problem. Regions which can be accessed by different kinds of external fields are indicated. The arrows indicate recent research directions. Active particles are also indicated, with a special complexity of internal degrees of freedom [1]. This collection of papers reflects the scientific programme of the International Conference on Colloidal Dispersions in External Fields III (CODEF III), which took place in Bonn-Bad Godesberg from 20-23 March 2012.
This was the third conference in a series that began in 2004 [2] and was continued in 2008 [3]. The CODEF meeting series is held in conjunction with the German-Dutch Transregional Collaborative Research Centre SFB TR6, entitled Physics of Colloidal Dispersions in External Fields. Papers from scientists working within this network as well as those from further invited contributors are summarized in this issue. They are organized according to the type of field applied, namely: shear flow; electric fields; laser-optical and magnetic fields; confinement; and other fields and active particles. To summarize the highlights of this special issue as regards shear fields, the response of depletion-induced colloidal clusters to shear is explored in [4]. Soft particles deform under shear, and their structural and dynamical behaviour is studied both by experiment [5] and theory [6]. Transient dynamics after switching on shear is described by a joint venture of theory, simulation and experiment in [7]. Colloids provide the fascinating possibility to drag single particles through the suspension, which gives access to microrheology (as opposed to macrorheology, where macroscopic boundaries are moved). Several theoretical aspects of microrheology are discussed in this issue [8-10]. Moreover, a microscopic theory for shear viscosity is presented [11]. Various aspects of colloids in electric fields are also included in this issue. Electrokinetic phenomena in charged suspensions couple flow and electric phenomena in an intricate way and are intensely discussed both by experiment and simulation in contributions [12-14]. Dielectric phenomena are also influenced by electric fields [15]. Electric fields can induce effective dipolar forces between colloids, leading to string formation [16]. Finally, binary mixtures in an electric driving field exhibit laning [17]. Simulation [18] and theoretical [19] studies of this nonequilibrium phenomenon are also discussed in this issue.
Laser-optical fields can be used to tailor a random substrate potential for colloids [20] or to bind colloids optically [21]. External magnetic fields are typically used to create dipolar repulsions of colloids suspended at an air-water interface. This provides an avenue to two-dimensional systems, where the freezing transition [22] and various transport phenomena through channels are the focus of recent research [23, 24]. Confinement typically leads to interfaces. The classical problem of the Tolman length for a fluid-fluid interface is reviewed in detail in [25]. In fact, colloid-polymer mixtures constitute ideal model systems for liquid-gas interfaces in various geometries [26] and are also suitable for measuring the Tolman length experimentally. Crystalline phases in confinement [27] and crystal-fluid interfaces [28] are even more complex due to the inhomogeneity of the solid phase. Also in the confined fluid phase, there are still open issues in slit-pore geometry. These include how to scale the interparticle distance [29] and how to measure hydrodynamic interactions between colloidal particles [30]. Other external fields which can be applied to colloids are gravity [31] and temperature [32]. An important field of recently emerging research is active colloidal particles (so-called microswimmers) which possess fascinating nonequilibrium properties; for recent reviews see [33-35]. Two examples are also included in this issue: an active deformable particle [36] moving in gravity and the collective turbulent swarming behaviour of dense self-propelled colloidal rod suspensions [37]. References [1]Löwen H 2001 J. Phys. Condens. Matter 13 R415 [2]Löwen H and Likos C N (ed) 2004 J. Phys. Condens. Matter 16 (special issue) [3]Löwen H 2008 J. Phys. Condens. Matter 20 404201 [4]Guu D, Dhont J K G, Vliegenthart G A and Lettinga M P 2012 J. Phys. Condens. Matter 24 464101 [5]Gupta S, Kundu S, Stellbrink J, Willner L, Allgaier J and Richter D 2012 J. Phys. Condens. 
Matter 24 464102 [6]Singh S P, Fedosov D A, Chatterji A, Winkler R G, Gompper G 2012 J. Phys. Condens. Matter 24 464103 [7]Laurati M et al 2012 J. Phys. Condens. Matter 24 464104 [8]Harrer C J, Winter D, Horbach J, Fuchs M and Voigtmann T 2012 J. Phys. Condens. Matter 24 464105 [9]De Puit R J and Squires T M 2012 J. Phys. Condens. Matter 24 464106 [10]De Puit R J and Squires T M 2012 J. Phys. Condens. Matter 24 464107 [11]Contreras-Aburto C and Nägele G 2012 J. Phys. Condens. Matter 24 464108 [12]Palberg T, Köller T, Sieber B, Schweinfurth H, Reiber H and Nägele G 2012 J. Phys. Condens. Matter 24 464109 [13]Papadopoulos P, Deng X and Vollmer D 2012 J. Phys. Condens. Matter 24 464110 [14]Schmitz R and Dünweg B 2012 J. Phys. Condens. Matter 24 464111 [15]Zhou J and Schmid F 2012 J. Phys. Condens. Matter 24 464112 [16]Smallenburg F, Vutukuri H R, Imhof A, van Blaaderen A and Dijkstra M 2012 J. Phys. Condens. Matter 24 464113 [17]Vissers T, Wysocki A, Rex M, Löwen H, Royall C P, Imhof A and van Blaaderen A 2011 Soft Matter 7 2352 [18]Glanz T and Löwen H 2012 J. Phys. Condens. Matter 24 464114 [19]Kohl M, Ivlev A, Brand P, Morfill G E and Löwen H 2012 J. Phys. Condens. Matter 24 464115 [20]Hanes R D L and Egelhaaf S U 2012 J. Phys. Condens. Matter 24 464116 [21]Mazilu M, Rudhall A, Wright E M and Dholakia K 2012 J. Phys. Condens. Matter 24 464117 [22]Dillmann P, Maret G and Keim P 2012 J. Phys. Condens. Matter 24 464118 [23]Wilms D et al 2012 J. Phys. Condens. Matter 24 464119 [24]Kreuter C, Siems U, Henseler P, Nielaba P, Leiderer P and Erbe A 2012 J. Phys. Condens. Matter 24 464120 [25]Malijevsky A and Jackson G 2012 J. Phys. Condens. Matter 24 464121 [26]Statt A, Winkler A, Virnau P and Binder K 2012 J. Phys. Condens. Matter 24 464122 [27]Oğuz E C, Löwen H, Reinmüller A, Schöpe H J, Palberg T and Messina R 2012 J. Phys. Condens. Matter 24 464123 [28]Oettel M 2012 J. Phys. Condens. Matter 24 464124 [29]Zeng Y and van Klitzing R 2012 J. Phys. Condens. 
Matter 24 464125 [30]Bonilla-Capilla B, Ramirez-Saito A, Ojeda-Lopez M A and Arauz-Lara J L 2012 J. Phys. Condens. Matter 24 464126 [31]Leferink op Reinink A B G M, van den Pol E, Byelov D V, Petukhov A V and Vroege G J 2012 J. Phys. Condens. Matter 24 464127 [32]Taylor S L, Evans R and Royall C P 2012 J. Phys. Condens. Matter 24 464128 [33]Toner J, Tu Y H and Ramaswamy S 2012 J. Phys. Condens. Matter 24 464110 [34]Schmitz R and Dünweg B 2005 J. Phys. Condens. Matter 318 170 [35]Cates M E 2012 Rep. Prog. Phys. 75 042601 [36]Tarama M and Ohta T 2012 J. Phys. Condens. Matter 24 464129 [37]Wensink H H and Löwen H 2012 J. Phys. Condens. Matter 24 464130 
Contents of 'Colloidal dispersions in external fields': 
Colloidal dispersions in external fields - Hartmut Löwen 
Depletion induced clustering in mixtures of colloidal spheres and fd-virus - D Guu, J K G Dhont, G A Vliegenthart and M P Lettinga 
Advanced rheological characterization of soft colloidal model systems - S Gupta, S K Kundu, J Stellbrink, L Willner, J Allgaier and D Richter 
Conformational and dynamical properties of ultra-soft colloids in semi-dilute solutions under shear flow - Sunil P Singh, Dmitry A Fedosov, Apratim Chatterji, Roland G Winkler and Gerhard Gompper 
Transient dynamics in dense colloidal suspensions under shear: shear rate dependence - M Laurati, K J Mutch, N Koumakis, J Zausch, C P Amann, A B Schofield, G Petekidis, J F Brady, J Horbach, M Fuchs and S U Egelhaaf 
Force-induced diffusion in microrheology - Ch J Harrer, D Winter, J Horbach, M Fuchs and Th Voigtmann 
Micro-macro-discrepancies in nonlinear microrheology: I. Quantifying mechanisms in a suspension of Brownian ellipsoids - Ryan J DePuit and Todd M Squires 
Micro-macro discrepancies in nonlinear microrheology: II. Effect of probe shape - Ryan J DePuit and Todd M Squires 
Viscosity of electrolyte solutions: a mode-coupling theory - Claudio Contreras-Aburto and Gerhard Nägele 
Electro-kinetics of charged-sphere suspensions explored by integral low-angle super-heterodyne laser Doppler velocimetry - Thomas Palberg, Tetyana Köller, Bastian Sieber, Holger Schweinfurth, Holger Reiber and Gerhard Nägele 
Electrokinetics on superhydrophobic surfaces - Periklis Papadopoulos, Xu Deng, Doris Vollmer and Hans-Jürgen Butt 
Numerical electrokinetics - R Schmitz and B Dünweg 
Dielectric response of nanoscopic spherical colloids in alternating electric fields: a dissipative particle dynamics simulation - Jiajia Zhou and Friederike Schmid 
Self-assembly of colloidal particles into strings in a homogeneous external electric or magnetic field - Frank Smallenburg, Hanumantha Rao Vutukuri, Arnout Imhof, Alfons van Blaaderen and Marjolein Dijkstra 
The nature of the laning transition in two dimensions - T Glanz and H Löwen 
Microscopic theory for anisotropic pair correlations in driven binary mixtures - Matthias Kohl, Alexei V Ivlev, Philip Brandt, Gregor E Morfill and Hartmut Löwen 
Dynamics of individual colloidal particles in one-dimensional random potentials: a simulation study - Richard D L Hanes and Stefan U Egelhaaf 
An interacting dipole model to explore broadband transverse optical binding - Michael Mazilu, Andrew Rudhall, Ewan M Wright and Kishan Dholakia 
Comparison of 2D melting criteria in a colloidal system - Patrick Dillmann, Georg Maret and Peter Keim 
Effects of confinement and external fields on structure and transport in colloidal dispersions in reduced dimensionality - D Wilms, S Deutschländer, U Siems, K Franzrahe, P Henseler, P Keim, N Schwierz, P Virnau, K Binder, G Maret and P Nielaba 
Stochastic transport of particles across single barriers - Christian Kreuter, Ullrich Siems, Peter Henseler, Peter Nielaba, Paul Leiderer and Artur Erbe 
A perspective on the interfacial properties of nanoscopic liquid drops - Alexandr Malijevský and George Jackson 
Controlling the wetting properties of the Asakura-Oosawa model and applications to spherical confinement - A Statt, A Winkler, P Virnau and K Binder 
Crystalline multilayers of charged colloids in soft confinement: experiment versus theory - E C Oğuz, A Reinmüller, H J Schöpe, T Palberg, R Messina and H Löwen 
Mode expansion for the density profiles of crystal-fluid interfaces: hard spheres as a test case - M Oettel 
Scaling of layer spacing of charged particles under slit-pore confinement: an effect of concentration or of effective particle diameter? - Yan Zeng and Regine von Klitzing 
Hydrodynamic interactions between colloidal particles in a planar pore - B Bonilla-Capilla, A Ramírez-Saito, M A Ojeda-López and J L Arauz-Lara 
Ageing in a system of polydisperse goethite boardlike particles showing rich phase behaviour - A B G M Leferink op Reinink, E van den Pol, D V Byelov, A V Petukhov and G J Vroege 
Temperature as an external field for colloid-polymer mixtures: 'quenching' by heating and 'melting' by cooling - Shelley L Taylor, Robert Evans and C Patrick Royall 
Spinning motion of a deformable self-propelled particle in two dimensions - Mitsusuke Tarama and Takao Ohta 
Emergent states in dense systems of active rods: from swarming to turbulence - H H Wensink and H Löwen

  2. Stochastic hyperfine interactions modeling library

    NASA Astrophysics Data System (ADS)

    Zacate, Matthew O.; Evenson, William E.

    2011-04-01

    The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized; however, there was a need to develop supplementary code to find an orthonormal set of (left and right) eigenvectors of complex, non-Hermitian matrices. In addition, example code is provided to illustrate the use of SHIML to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A22 can be neglected.
    Program summary
    Program title: SHIML
    Catalogue identifier: AEIF_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIF_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPL 3
    No. of lines in distributed program, including test data, etc.: 8224
    No. of bytes in distributed program, including test data, etc.: 312 348
    Distribution format: tar.gz
    Programming language: C
    Computer: Any
    Operating system: LINUX, OS X
    RAM: Varies
    Classification: 7.4
    External routines: TAPP [1], BLAS [2], a C-interface to BLAS [3], and LAPACK [4]
    Nature of problem: In condensed matter systems, hyperfine methods such as nuclear magnetic resonance (NMR), Mössbauer effect (ME), muon spin rotation (μSR), and perturbed angular correlation spectroscopy (PAC) measure electronic and magnetic structure within Angstroms of nuclear probes through the hyperfine interaction.
    When interactions fluctuate at rates comparable to the time scale of a hyperfine method, there is a loss in signal coherence, and spectra are damped. The degree of damping can be used to determine fluctuation rates, provided that theoretical expressions for spectra can be derived for relevant physical models of the fluctuations. SHIML provides routines to help researchers quickly develop code to incorporate stochastic models of fluctuating hyperfine interactions in calculations of hyperfine spectra.
    Solution method: Calculations are based on the method for modeling stochastic hyperfine interactions for PAC by Winkler and Gerdau [5]. The method is extended to include other hyperfine methods following the work of Dattagupta [6]. The code provides routines for reading model information from text files, allowing researchers to develop new models quickly without the need to modify computer code for each new model to be considered.
    Restrictions: In the present version of the code, only methods that measure the hyperfine interaction on one probe spin state, such as PAC, μSR, and NMR, are supported.
    Running time: Varies
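    As an illustration of the supplementary eigenvector problem the abstract mentions, the sketch below (not part of SHIML; the 2x2 matrix and helper name are invented for illustration) computes the left and right eigenvectors of a small complex non-Hermitian matrix analytically and scales them biorthonormally, the property a solver needs in order to build an evolution operator from a Blume-type matrix.

```python
import cmath

# Hedged sketch, not SHIML code: for a 2x2 complex non-Hermitian matrix
# B = [[a, b], [c, d]] with b != 0 and distinct eigenvalues, return
# (eigenvalue, left eigenvector, right eigenvector) pairs with the
# left vectors rescaled so that l_i . v_j = delta_ij (biorthonormality).
def eig2(B):
    (a, b), (c, d) = B
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    modes = []
    for lam in ((tr + disc) / 2, (tr - disc) / 2):
        v = (b, lam - a)                 # right eigenvector: (B - lam I) v = 0
        l = (c, lam - a)                 # left eigenvector:  l (B - lam I) = 0
        s = l[0] * v[0] + l[1] * v[1]    # rescale so that l . v = 1
        modes.append((lam, (l[0] / s, l[1] / s), v))
    return modes

B = [[1 + 2j, 3j], [2, -1j]]   # arbitrary illustrative non-Hermitian matrix
modes = eig2(B)
```

    For larger matrices one would call LAPACK's general eigensolver (as SHIML does via BLAS/LAPACK) and then apply the same biorthonormal rescaling of the left eigenvectors.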

  3. ESA's Integral solves thirty-year old gamma-ray mystery

    NASA Astrophysics Data System (ADS)

    The central regions of our galaxy, the Milky Way, as seen by Integral in gamma rays (image credit: ESA, F. Lebrun, CEA-Saclay). With its superior ability to see faint details, Integral resolves the individual sources that make up the foggy gamma-ray background seen by previous observatories. The 91 brightest objects in this image were classified by Integral as individual sources, while the others appear too faint to be properly characterized at this stage. During the spring and autumn of 2003, Integral observed the central regions of our Galaxy, collecting some of the perpetual glow of diffuse low-energy gamma rays that bathes the entire Galaxy. These gamma rays were first discovered in the mid-1970s by high-flying balloon-borne experiments. Astronomers refer to them as the 'soft' Galactic gamma-ray background, with energies similar to those used in medical X-ray equipment. Initially, astronomers believed that the glow was caused by interactions involving the atoms of the gas that pervades the Galaxy. Whilst this theory could explain the diffuse nature of the emission, since the gas is ubiquitous, it failed to match the observed power of the gamma rays: the gamma rays produced by the proposed mechanisms would be much weaker than those observed. The mystery remained unanswered for decades. Now Integral's superb gamma-ray telescope IBIS, built for ESA by an international consortium led by Principal Investigator Pietro Ubertini (IAS/CNR, Rome, Italy), has seen clearly that, instead of a fog produced by the interstellar medium, most of the gamma rays come from individual celestial objects. In the view of previous, less sensitive instruments, these objects appeared to merge together.
    In a paper published today in "Nature", Francois Lebrun (CEA Saclay, Gif sur Yvette, France) and his collaborators report the discovery of 91 gamma-ray sources in the direction of the Galactic centre. Lebrun's team includes Ubertini and seventeen other European scientists with long-standing experience in high-energy astrophysics. Much to the team's surprise, almost half of these sources do not fall into any class of known gamma-ray objects. They probably represent a new population of gamma-ray emitters. The first clues about a new class of gamma-ray objects came last October, when Integral discovered an intriguing gamma-ray source, known as IGRJ16318-4848. The data from Integral and ESA's other high-energy observatory XMM-Newton suggested that this object is a binary system, probably including a black hole or neutron star, embedded in a thick cocoon of cold gas and dust. When gas from the companion star is accelerated and swallowed by the black hole, energy is released at all wavelengths, mostly in gamma rays. However, Lebrun is cautious not to draw premature conclusions about the sources detected in the Galactic centre. Other interpretations are also possible that do not involve black holes. For instance, these objects could be the remains of exploded stars that are being energised by rapidly rotating celestial 'powerhouses', known as pulsars. Observations with another Integral instrument (SPI, the Spectrometer on Integral) could provide Lebrun and his team with more information on the nature of these sources. SPI measures the energy of incoming gamma rays with extraordinary accuracy and allows scientists to gain a better understanding of the physical mechanisms that generate them. However, regardless of the precise nature of these gamma-ray sources, Integral's observations have convincingly shown that the energy output from these new objects accounts for almost ninety per cent of the soft gamma-ray background coming from the centre of the Galaxy.
    This result raises the tantalising possibility that objects of this type hide everywhere in the Galaxy, not just in its centre. Again, Lebrun is cautious, saying, "It is tempting to think that we can simply extrapolate our results to the entire Galaxy. However, we have only looked towards its centre and that is a peculiar place compared to the rest." Next on Integral's list of things to do is to extend this work to the rest of the Galaxy. Christoph Winkler, ESA's Integral Project Scientist, says, "We now have to work on the whole disc region of the Galaxy. This will be a tough and long job for Integral. But at the end, the reward will be an exhaustive inventory of the most energetic celestial objects in the Galaxy." Note to editors The paper explaining these results will appear in the 18 March 2004 issue of "Nature". The author list includes F. Lebrun, R. Terrier, A. Bazzano, G. Belanger, A. Bird, L. Bouchet, A. Dean, M. Del Santo, A. Goldwurm, N. Lund, H. Morand, A. Parmar, J. Paul, J.-P. Roques, V. Schoenfelder, A. Strong, P. Ubertini, R. Walter and C. Winkler. For information about the related INTEGRAL and XMM-Newton discovery of IGRJ16318-4848, see: http://www.esa.int/esaSC/Pr_21_2003_s_en.html Integral The International Gamma Ray Astrophysics Laboratory (Integral) is the first space observatory that can simultaneously observe celestial objects in gamma rays, X-rays and visible light. Integral was launched on a Russian Proton rocket on 17 October 2002 into a highly elliptical orbit around Earth. Its principal targets include regions of the Galaxy where chemical elements are being produced, and compact objects such as black holes. IBIS, Imager on Board the Integral Satellite - IBIS provides sharper gamma-ray images than any previous gamma-ray instrument. It can locate sources to a precision of 30 arcseconds, the equivalent of measuring the height of a person standing in a crowd 1.3 kilometres away. The Principal Investigators who built the instrument are P.
    Ubertini (IAS/CNR, Rome, Italy), F. Lebrun (CEA Saclay, Gif sur Yvette, France), G. Di Cocco (ITESRE, Bologna, Italy). IBIS is equipped with the first un-cooled semiconductor gamma-ray camera, called ISGRI, which is responsible for its outstanding sensitivity. ISGRI was developed and built for ESA by CEA Saclay, France. SPI, Spectrometer on Integral - SPI measures the energy of incoming gamma rays with extraordinary accuracy. It is more sensitive to faint radiation than any previous gamma-ray instrument and allows the precise nature of gamma-ray sources to be determined. The Principal Investigators who developed SPI are J.-P. Roques (CESR, Toulouse, France) and V. Schoenfelder (MPE, Garching, Germany). XMM-Newton XMM-Newton can detect more X-ray sources than any previous observatory and is helping to solve many cosmic mysteries of the violent Universe, from black holes to the formation of galaxies. It was launched on 10 December 1999, using an Ariane-5 rocket from French Guiana. Its orbit takes it almost a third of the way to the Moon, so that astronomers can enjoy long, uninterrupted views of celestial objects.
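    The quoted IBIS figure can be checked with the small-angle approximation. At 1.3 km, 30 arcseconds subtends roughly 19 cm, about head size on a person in a crowd (the 1.8 m person height below is my own illustrative figure, not from the press release):

```python
import math

# Small-angle approximation: the linear size subtended by an angle
# theta (in arcseconds) at a given distance is distance * theta_radians.
def arcsec_to_metres(theta_arcsec, distance_m):
    return distance_m * math.radians(theta_arcsec / 3600.0)

precision = arcsec_to_metres(30.0, 1300.0)    # IBIS precision at 1.3 km, ~0.19 m
person = arcsec_to_metres(285.0, 1300.0)      # a ~1.8 m person subtends ~285 arcsec
```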

  4. The Santander Atlantic Time-Series Station (SATS): A Time Series combination of a monthly hydrographic Station and The Biscay AGL Oceanic Observatory.

    NASA Astrophysics Data System (ADS)

    Lavin, Alicia; Somavilla, Raquel; Cano, Daniel; Rodriguez, Carmen; Gonzalez-Pola, Cesar; Viloria, Amaia; Tel, Elena; Ruiz-Villareal, Manuel

    2017-04-01

    Long-term time-series stations have been developed in order to document seasonal- to decadal-scale variations in key physical and biogeochemical parameters. Long-term time-series measurements are crucial for determining the physical and biological mechanisms controlling the system. The Science and Technology Ministers of the G7 in their Tsukuba Communiqué have stated that 'many parts of the ocean interior are not sufficiently observed' and that 'it is crucial to develop far stronger scientific knowledge necessary to assess the ongoing changes in the ocean and their impact on economies.' Time series have classically been obtained by oceanographic ships that regularly cover standard sections and stations. Since 1991, shelf and slope waters of the southern Bay of Biscay have been sampled regularly in a monthly hydrographic line north of Santander, to a depth of 1000 m in the early stages and over the whole water column down to 2580 m in recent times. Nearby, in June 2007, the IEO deployed an oceanic-meteorological buoy (AGL Buoy, 43° 50.67'N, 3° 46.20'W, 40 km offshore, www.boya-agl.st.ieo.es). The Santander Atlantic Time Series Station is integrated in the Spanish Institute of Oceanography Observing System (IEOOS). The long-term hydrographic monitoring has made it possible to characterize the seasonality of the main oceanographic features, such as upwelling, the Iberian Poleward Current and low-salinity incursions, as well as trends and interannual variability in the mixing layer and in the main water masses, North Atlantic Central Water and Mediterranean Water. The relation of these changes to the high-frequency surface conditions recorded by the Biscay AGL has been examined, also using satellite and reanalysis data. During the FIXO3 project (Fixed-point Open Ocean Observatories), and using these combined sources, several products and quality-controlled series of high interest and utility for scientific purposes have been developed.
    Hourly products include sea surface temperature and salinity anomalies, significant wave height relative to the monthly average, and currents relative to seasonal averages. Ocean-atmosphere heat fluxes (latent and sensible) are computed from the buoy's atmospheric and oceanic measurements. Estimates of the mixed layer depth and bulk series at different water levels are provided on a monthly basis. Quality-controlled series are distributed for sea surface salinity, oxygen and chlorophyll data. Some sensors are particularly affected by biofouling, and monthly visits to the buoy make it possible to follow their behaviour. The chlorophyll-fluorescence sensor is the main concern, but the dissolved-oxygen sensor is also problematic: periods of realistically smooth variation show a strong offset, which is corrected on the basis of Winkler analysis of water samples. The buoy's wind, air temperature and humidity sensors are also compared monthly with the research vessel data. The next step will be a more thorough validation of the data, mainly the ten-year record from the Biscay AGL buoy, but also the 25-year record of station 7, close to the buoy. The data will be cleaned and analysed, and the final products will be published and disseminated to improve their use.
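    The Winkler-based offset correction described above can be sketched as follows. This is a minimal illustration, not the IEO's actual processing chain: the function name, data layout and numbers are hypothetical, and a single constant offset per servicing period is assumed.

```python
# Hedged sketch: a drifting dissolved-oxygen sensor series is anchored to
# discrete Winkler-titrated bottle samples by shifting the whole series by
# the mean sensor-vs-bottle offset at the matched sampling times.
def winkler_offset_correction(sensor, bottles):
    """sensor: list of (time, DO) readings; bottles: list of (time, DO)
    Winkler bottle values. Returns the sensor series shifted by the mean
    offset at the nearest-in-time sensor readings."""
    offsets = []
    for t_b, do_b in bottles:
        t_s, do_s = min(sensor, key=lambda p: abs(p[0] - t_b))  # nearest reading
        offsets.append(do_b - do_s)
    bias = sum(offsets) / len(offsets)
    return [(t, do + bias) for t, do in sensor]

sensor = [(0.0, 4.0), (1.0, 4.1), (2.0, 4.2)]   # illustrative readings (h, mg/L)
bottles = [(1.1, 4.6)]                           # one Winkler bottle sample
corrected = winkler_offset_correction(sensor, bottles)
```

    In practice one would fit a time-varying drift between servicing visits rather than a single constant bias, but the anchoring principle is the same.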

  5. Long-term impact of bottom trawling on pelagic-benthic coupling in the southern North Sea (German Bight)

    NASA Astrophysics Data System (ADS)

    Friedrich, Jana; van Beusekom, Justus E. E.; Neumann, Andreas; Naderipour, Celine; Janssen, Felix; Ahmerkamp, Soeren; Holtappels, Moritz; Schueckel, Ulrike

    2016-04-01

    The southern North Sea, including the German Bight, has been systematically bottom-trawled at least since the late 19th century (Christiansen, 2009; Reiss et al., 2009; Kröncke, 2011; Emeis et al., 2015; Neumann et al., 2016). As a result, benthic habitats and benthic biogenic structures created by bivalves, polychaetes and hydroids were destroyed or reduced. The parallel removal of hard substrate (gravel and boulders) prevents the resettlement of hard-substrate-dependent species. For example, the Oyster Ground, a huge oyster bank a hundred years ago (Olsen, 1883), is today a muddy depression. In addition, shallow depths of at most 40 m, strong tidal currents and frequent storms result in a high-energy environment with low sedimentation rates and recurrent sediment resuspension. The decrease in benthic filtering capacity caused by disturbance of epifauna and bottom roughness (Callaway et al., 2007) apparently influences the pelagic-benthic coupling of biogeochemical fluxes. Heip et al. (1995) indicate that benthic respiration at depths prevailing in the German Bight accounts for 10-40% of total respiration, whereas pelagic respiration accounts for 60-90%. Previous estimates are in the middle of this range (Heip et al., 1995). To test these hypotheses and to assess the partitioning of benthic and pelagic processes, and the factors influencing organic matter mineralization, we measured pelagic production and respiration based on Winkler titration, measured in-situ benthic fluxes using chamber landers, performed ex-situ incubations of intact sediment cores, and analysed still images from a towed benthic video sled. In addition, O2 fluxes in permeable sediments were estimated by integrating the volumetric rate measurements of the upper sediment layer over the in-situ microsensor-measured O2 penetration depth. Our current results show significant seasonality in benthic respiration, with the highest rates in summer and the lowest rates in winter.
    No significant differences in total benthic respiration rates were measured between sandy (permeable) and silty (diffusive) sediments, whereas significant differences in microbial O2 uptake were indeed observed between permeable and diffusive sediments. Nevertheless, when considering the multitude of different methods, we found that benthic respiration over the season seemed to be governed mainly by the settling of fresh organic matter during calm periods and its rapid turnover in a region where strong tidal and wind-forced currents distribute suspended particles over large areas. Summer pelagic respiration rates were an order of magnitude higher than benthic rates, and account for 88-93% of total respiration, which represents 79-98% of pelagic primary production. Our measurements of benthic respiration account for 7-12% of the total in the German Bight, which is lower than in earlier studies. Strong tidal and wind-forced currents, along with the lack of complex three-dimensional biogenic structures, seem to prevent the settling of suspended matter and foster resuspension, thereby supporting pelagic turnover processes. Hence, we assume that benthic turnover might have been higher before systematic bottom trawling destroyed the bottom hydrobiological regime. Today, owing to the strong current regime in the German Bight, the pelagic system appears to be a largely closed system of production and respiration, with comparatively little export to the benthic system due to the absence of biogenic structures.
    References
    Callaway R, Engelhard GH, Dann J, Cotter J, Rumohr H (2007) One century of North Sea epibenthos and fishing: comparison between 1902-1912, 1982-1985 and 2000. Marine Ecology Progress Series 346: 27-43.
    Christiansen S (2009) Towards good environmental status - A network of marine protected areas for the North Sea.
    In: Lutter S (ed) WWF Germany, Frankfurt/Main.
    Emeis K-C, van Beusekom J, Callies U, Ebinghaus R, Kannen A, Kraus G, Kröncke I, Lenhart H, Lorkowski I, Matthias V, Möllmann C, Pätsch J, Scharfe M, Thomas H, Weisse R, Zorita E (2015) The North Sea - A shelf sea in the Anthropocene. Journal of Marine Systems 141: 18-33.
    Heip CHR, Goosen NK, Herman PMJ, Kromkamp J, Middelburg JJ, Soetaert K (1995) Production and consumption of biological particles in temperate tidal estuaries. Oceanography and Marine Biology: an Annual Review 33: 1-149.
    Kröncke I (2011) Changes in Dogger Bank macrofauna communities in the 20th century caused by fishing and climate. Estuarine, Coastal and Shelf Science 94 (3): 234-245.
    Neumann H, Diekmann R, Kröncke I (2016) The influence of habitat characteristics and fishing effort on functional composition of epifauna in the south-eastern North Sea. Estuarine, Coastal and Shelf Science 169: 182-194.
    Olsen OT (1883) The Piscatorial Atlas of the North Sea, English and St. George's Channels. Taylor and Francis, London.
    Reiss H, Greenstreet S, Sieben K, Ehrich S, Piet G, Quirijns F, Robinson F, Wolff W, Kröncke I (2009) Effects of fishing disturbance on benthic communities and secondary production within an intensively fished area. Marine Ecology Progress Series 394: 201-213.
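    The two back-of-the-envelope estimates described in the abstract (O2 flux from a volumetric rate integrated over the oxygen penetration depth, and the benthic share of total respiration) reduce to simple arithmetic when the volumetric rate is taken as uniform in the oxic layer. The numbers below are illustrative only, not data from the study:

```python
# Hedged sketch of the flux estimate: areal O2 uptake = volumetric
# respiration rate integrated over the microsensor-measured O2
# penetration depth; a uniform rate turns the integral into a product.
def o2_flux(rate_umol_cm3_d, penetration_cm):
    return rate_umol_cm3_d * penetration_cm   # umol O2 cm^-2 d^-1

# Partitioning of respiration between benthic and pelagic compartments.
def benthic_share(benthic, pelagic):
    return benthic / (benthic + pelagic)

flux = o2_flux(5.0, 0.4)        # e.g. 5 umol cm^-3 d^-1 over a 4 mm oxic layer
share = benthic_share(1.0, 10.0)  # pelagic an order of magnitude higher -> ~9%
```

    With pelagic rates an order of magnitude above benthic rates, the benthic share comes out near 9%, consistent with the 7-12% range reported above.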

  6. Mixing and turbulent mixing in fluids, plasma and materials: summary of works presented at the 3rd International Conference on Turbulent Mixing and Beyond

    NASA Astrophysics Data System (ADS)

    Gauthier, Serge; Keane, Christopher J.; Niemela, Joseph J.; Abarzhi, Snezhana I.

    2013-07-01

    Mixing and turbulent mixing are non-equilibrium processes that occur in a broad variety of settings in fluids, plasmas and materials. The processes can be natural or artificial, their characteristic scales can be astrophysical or atomistic, and energy densities can be low or high. Understanding the fundamental aspects of turbulent mixing is necessary to comprehend the dynamics of supernovae and accretion discs, stellar non-Boussinesq and magneto-convection, mantle-lithosphere tectonics and volcanic eruptions, atmospheric and oceanographic flows in geophysics, and premixed and non-premixed combustion. It is crucial for the development of the methods of control in technological applications, including mixing mitigation in inertial confinement and magnetic fusion, and mixing enhancement in reactive flows, as well as material transformation under the action of high strain rates. It can improve our knowledge of realistic turbulent processes at low energy density involving walls, unsteady transport, interfaces and vortices, as well as high energy density hydrodynamics including strong shocks, explosions, blast waves and supersonic flows. A deep understanding of mixing and turbulent mixing requires one to go above and beyond canonical approaches and demands further enhancements in the quality and information capacity of experimental and numerical data sets, and in the methods of theoretical analysis of continuous dynamics and kinetics. This, in turn, has the potential to bring experiment, numerical modelling, theoretical analysis and data processing to a new level of standards. At the same time, mixing and turbulent mixing, being among the most formidable and multi-faceted problems of modern physics and mathematics, remain wide open to a curious mind.
    In this article we briefly review various aspects of turbulent mixing, and present a summary of over 70 papers that were discussed at the third International Conference on 'Turbulent Mixing and Beyond', TMB-2011, which was held in the summer of 2011 at the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy. The papers are arranged by TMB themes, and within each theme they are ordered alphabetically by the last name of the first author. The collection includes regular research papers, a few research briefs and review papers. The review papers are published as 'Comments' articles in Physica Scripta. Canonical turbulence and turbulent mixing. Six papers are devoted to canonical turbulence and turbulent mixing. Baumert presents a theory of shear-generated turbulence, which is based on a two-fluid concept. Gampert et al investigate the problem of adequate representation of turbulent structures by applying a decomposition of the field of the turbulent kinetic energy into regions of compressive and extensive strain. Paul and Narashima consider the dynamics of a temporal mixing layer using a vortex sheet model. Schaefer et al analyse the joint statistics and conditional mean strain rates of streamline segments in turbulent flows. Sirota and Zybin deepen their discussion of the connection between Lagrangian and Eulerian velocity structure functions in hydrodynamic turbulence. Talbot et al investigate heterogeneous mixing by considering gases of very nearly equal densities and very different viscosities. Wall-bounded flows. Three papers are dedicated to wall-bounded flows. Mok et al use the Bayesian spectral density approach to identify the dominant free surface fluctuation frequency downstream of an oscillating hydraulic jump. Tejada-Martinez et al employ large eddy numerical simulations to study wind-driven shallow water flows with and without full-depth Langmuir circulation (parallel counter rotating vortices).
    Wu et al re-evaluate the Karman constant based on a multi-layer analytical theory of Prandtl's mixing length function. Non-equilibrium processes. This theme is represented by two papers. Chashechkin and Zagumennyi consider non-equilibrium processes in non-homogeneous fluids under the action of external forces. Whitehurst et al report a state-of-the-art study of plasma filaments and geomagnetic field fluctuations that can be produced concomitantly by solar-powered microwave transmissions. Interfacial dynamics. Five works are dedicated to the theme of interfacial dynamics and one of its key topics, interfacial Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instabilities. Gauthier models the evolution of RT instability in stratified fluids by means of numerical simulations employing a self-adaptive multi-domain spectral method. Matsuoka studies three-dimensional (3D) vortex sheet motion with axial symmetry in incompressible RM and RT instabilities and shows that an azimuthal motion exists in 3D inhomogeneous flows (density stratification) with axial symmetry and without swirl. McFarland et al also investigate the influence of the initial perturbation amplitude for the inclined interface RM instability with an arbitrary Lagrangian-Eulerian hydrodynamic code and emphasize nonlinear acoustic effects. Pavlenko et al report experimental studies describing the gas-bubble evolution and stability of a gas-bubble interface under the influence of a variable pressure field. Tritschler et al report state-of-the-art simulations of a single-mode RM instability employing the central-upwind sixth-order weighted essentially non-oscillatory (WENO) scheme. High energy density physics. Two research papers represent this theme.
    The research paper by Fryxell et al reports on an integrated experimental and numerical study of radiative shocks in high energy density plasmas, in which energy transfer by radiation is large enough to modify the structure of the shock. The research paper by Wang et al reports numerical investigations of ablative RT instability in the presence of preheating, which is known to play an important role in inertial confinement fusion. Material science. Three papers are devoted to physical and numerical experiments in material science. Demianov et al carry out turbulent mixing RT simulations with non-Newtonian fluids. Winkler and Abel carry out thermal convection experiments on very thin freestanding films, where turbulent mixing extends nearly to nano-scales. The work of Winkler and Abel was recognized at TMB-2011 with the Best Poster Award issued by Physica Scripta. Ziaei-Rad numerically investigates pressure drop and heat transfer in laminar and turbulent nano-fluid flow consisting of Al2O3 and water. Astrophysics. In their state-of-the-art simulations, Endeve et al present a study of turbulence and magnetic field amplification from spiral stationary accretion shock instability in core-collapse supernovae. Gibson questions whether turbulence and fossil turbulence may lead to life in the Universe and puts forward a set of arguments to support this point of view. Magneto-hydrodynamics. Two research papers are particularly devoted to magneto-hydrodynamics (MHD). Karelsky et al study nonlinear dynamics of MHD in the shallow water approximation over an arbitrary surface within the Riemann invariant form. Kitiashvili et al report on turbulent properties of the 'Quiet Sun' by comparing kinetic energy spectra obtained from infrared TiO observations with the New Solar Telescope against those from 3D radiative MHD numerical simulations employing the state-of-the-art 'SolarBox' code. Canonical plasmas. Four papers are devoted to canonical plasmas.
    Baryshnikov et al investigate the influence of dust concentration on shock wave splitting in discharge plasmas in different gases. Kemel et al use direct numerical simulations and mean-field simulations to investigate the effects of non-uniformity of the magnetic field on the suppression of the turbulent pressure, which tends to make the mean magnetic field more non-uniform. Pradipta and Lee investigate, by means of experiments and theoretical analysis, the acoustic gravity waves created by anomalous heat sources. In a companion paper, Rooker et al provide a very interesting study on the generation and detection of 'whistler waves' in induced space plasma turbulence at Gakona (Alaska). Physics of atmosphere. Five papers are devoted to the physics of atmosphere. Byalko presents the first experimental observation of a new hydrodynamic phenomenon, the underwater tornado. Herring and Kimura provide a review of recent results on homogeneous stably stratified turbulence. Pouquet et al use a high-resolution direct numerical simulation of rotating helical turbulence to obtain new numerical results on the inverse energy cascade in rotating flows. Tailleux discusses in depth the energy conversion and dissipation in mixing flows. Zagumennyi and Chashechkin study the structure of convective flows driven by density variations in a stratified fluid by means of experiments and numerical simulations. Geophysics and Earth science. Three papers are dedicated to geophysics and Earth science. Jinadasa et al investigate small-scale and lateral intermittency of oceanic microstructure in the pycnocline. Shrira and Townsend review a plausible mechanism of deep-ocean mixing caused by near-inertial waves in the abyssal ocean. Using numerical simulations, Imazio and Mininni study how helicity affects the spectrum of a passive scalar in rotating turbulent flows. Combustion. Two papers deal with flows with chemical reactions.
    Meshram uses the Lewis-Kraichnan space-time version of Hopf's functional formalism to investigate turbulence with chemical reaction. Watanabe et al carry out experiments on a turbulent plane liquid jet with a second-order chemical reaction. Theoretical aspects of non-equilibrium dynamics. Six papers are devoted to fundamental aspects of non-equilibrium dynamics. Chen et al present state-of-the-art work on the exact and direct derivation of a macroscopic theoretical description of flow at arbitrary Knudsen number from the Boltzmann-Bhatnagar-Gross-Krook kinetic theory with constant relaxation time. Chernyak et al consider compressible gas flows in a gravity field above a homogeneous surface in a shallow water approximation within the Riemann invariants form. Fukumoto and Mie develop a weakly nonlinear stability theory for a rotating flow confined in a cylinder of elliptic cross-section. Karelsky and Petrosyan further expand the use of the shallow-water approximation and Riemann invariants to study the problem of a steady-state flow over a step. Meshram and Sahu employ the Lewis-Kraichnan space-time version of the Hopf functional formalism to investigate MHD turbulence. Nepomnyashchy and Volpert study particle growth due to sub-diffusion (described by an equation with fractional derivatives) of a dissolved component. Stochastic processes and probabilistic description. Two research papers are dedicated to this theme. Abarzhi et al present a stochastic model of statistically unsteady RT mixing with uniform and non-uniform accelerations. Within the framework of non-equilibrium thermodynamics, Klimenko considers the combustion problem and interprets it as competitive mixing. Advanced numerical methods. Seven research papers are dedicated to advanced numerical methods and numerical simulations.
    Denisenko and Oparina study the stability of the laminar flow between two rotating cylinders (the Taylor-Couette flow) by means of numerical simulations based on the compressible inviscid Euler equations. Fortova investigates spectral characteristics of the vortex cascades in a shear flow. Ghods and Herrmann present a consistent rescaled momentum transport method for simulating large density ratio incompressible multiphase flows using level set methods. Kaman et al provide an overview of recent progress in turbulent mixing. Koppula et al report the development of a universal realizable anisotropic and pre-stress closure model and illustrate the model application in shear flows. Kozlov and Eriklintsev carry out numerical simulation of countercurrent flow and diffusion processes in a separating gas centrifuge. Ničeno et al provide simulations of single-phase mixing in fuel rod bundles using an immersed boundary method. Experiments and experimental diagnostics. Nine papers represent the theme of experimental diagnostics of non-equilibrium dynamics. Bewley and Vollmer outline the use of nano-scale hydrogen particles in super-fluid helium for visualization of the attraction of hydrogen to quantized vortex cores. Bewley's contribution to the field of 'Turbulent Mixing and Beyond' was recognized at TMB-2011 with the Young Scientists Award. Fiabane et al investigate the possible clustering of particles whose diameter is larger than the dissipation scale of the carrier flow. Kuchibhatla and Ranjan investigate the effect of the initial conditions on RT instabilities in fluids with similar densities in experiments conducted at a water channel facility. Meshkov and Sirotkin present a new experimental methodology for controlling the processes of formation and attenuation of a vortex in a tub.
Niemela discusses the advanced diagnostic techniques and the static and dynamic measurements of the Nusselt number in turbulent convection using propagation and detection of heat waves. In a series of two papers, Pavlenko et al report on experiments on gas bubbles. The apparatus is carefully tested and used to obtain experimental data on gas-bubble compression and on gas bubbles floating up in liquid. Suzuki et al investigate high-Schmidt number scalar mixing in fractal-generated turbulence by means of the planar laser-induced fluorescence (PLIF) technique. Zimmermann et al detail a novel measurement technique suitable for opaque or granular flows, which is based on an instrumented particle with wireless data transmission. Zimmermann's contribution to the field of 'Turbulent Mixing and Beyond' was recognized at TMB-2011 with the Young Scientists Award. Seven review papers were published as 'Comments' articles in Physica Scripta. These are the reviews of Beresnyak on 'Universal magnetohydrodynamic turbulence and small-scale dynamo', Blackman on 'Accretion disks and dynamos: toward a unified mean field theory', Frederiksen et al on 'Stochastic subgrid parameterizations for atmospheric and oceanic flows', Grinstein et al 'On coarse-grained simulations of turbulent material mixing', Klimenko on 'Mixing, entropy and competition', Smalyuk on 'Experimental techniques for measuring Rayleigh-Taylor instability in inertial confinement fusion', and Sugiyama on 'Intrinsic stochasticity in fusion plasmas'. In particular, Beresnyak reviews universal magnetohydrodynamic turbulence and the small-scale dynamo (including the measurement of its efficiency), and the Kolmogorov and anisotropy constants in high-resolution direct numerical simulations. Blackman discusses recent developments in the theory of accretion disks and dynamos, and proposes a potential path toward a unified mean field theory of these astrophysical phenomena. 
Frederiksen et al discuss novel approaches to stochastic sub-grid parameterizations for atmospheric and oceanic flows. Grinstein et al discuss numerical approaches for turbulent material mixing that employ coarse-grained simulations. Klimenko presents a new general framework for studies of competitive mixing and non-traditional thermodynamics that can be applied to random behavior associated with turbulence, mixing and competition. Smalyuk discusses the advancements in experimental diagnostics of Rayleigh-Taylor instability in inertial confinement fusion. Sugiyama reviews magnetic fusion and discusses stochastic processes and intrinsic stochasticity in fusion plasmas. Conclusion. In conclusion, the authors hope that this new Topical Issue will continue to expose a broad scientific community to the state of the art in theoretical, experimental and numerical developments in 'Turbulent Mixing and Beyond' phenomena, to integrate our knowledge of the subject, and to further enrich its development.

  7. The compression mechanism of garnets based on in situ observations

    NASA Astrophysics Data System (ADS)

    Dymshits, Anna; Sharygin, Igor; Litasov, Konstantin; Shatskiy, Anton

    2014-05-01

    Previously it was shown that the bulk modulus of garnet is strongly affected by the bulk modulus of the dodecahedra, while the compressibility of the other individual polyhedra displays no correlation with the compressibility of the structure as a whole (Milman et al., 2001). If so, Na-majorite (Na-maj), as a phase with a predicted dodecahedral bulk modulus of approximately 70 GPa (Hazen et al., 1994), would have the smallest bulk modulus of all silicate garnets. In fact, Na-maj has the largest bulk modulus among the silicate garnets. This behavior must reflect the overall framework of Na-maj, with its very small cell volume and silicon in the octahedral position. Thus, we conclude that not only the dodecahedral sites, but also the behavior of the garnet framework and the relative sizes of the 8- and 6-coordinated cations, control garnet compression. The octahedral site in Na-maj is quite small (1.79 Å) and contains only silicon, in comparison to pyrope (1.85 Å) or majorite (1.88 Å). The small, highly charged octahedra share four edges with the dodecahedra and thus restrict the volume of the large, low-charge dodecahedra. Although Na-maj has a large average X-cation radius (RNa = 1.07 Å), its dodecahedral volume is relatively small (V = 21.23 and 21.26 Å3). Pacalo et al. (1992) suggested that the XO8 polyhedra act as braces and control the amount of rotation between tetrahedra and octahedra within the corner-linked chains. In the case of pyrope, the XO8 site is not completely filled, and the polyhedra within the corner-linked chains can rotate freely to accommodate applied stress. In the case of Na-maj, the dodecahedral site is filled and the rotational freedom is minimized. The dodecahedral site in knorringite (Knr) contains a cation with a small radius (Mg-O = 2.22 and 2.34 Å), so the XO8 polyhedra are not filled and can rotate freely to accommodate applied stress. 
In the case of uvarovite, not only the octahedral but also the dodecahedral site is large (Ca-O = 2.35 and 2.51 Å), so the rotational freedom is minimized; such relations between the XO8 and YO6 sites account for a comparatively more rigid structure. Accordingly, the bulk modulus of uvarovite is 162 GPa (Leger et al., 1990), while for Knr we obtain 154 GPa. As a result, Na-maj, with all octahedral sites occupied by silicon, has the largest bulk modulus among garnets. It would be interesting to study the compressibility of the Li-majorite described by Yang et al. (2009). That phase has a smaller cell volume (1430 Å3) and X-O distance (2.26 Å), but the same YO6 polyhedra fully occupied by silicon. The study was supported by the Ministry of Education and Science of the Russian Federation (project Nos. 14.B25.31.0032 and MK-265.2014.5) and by the Russian Foundation for Basic Research (No. 14-05-00957-a). Hazen, R.M., Downs, R.T., Conrad, P.G., Finger, L.W., Gasparik, T. Comparative compressibilities of majorite-type garnets // Physics and Chemistry of Minerals, 1994, v. 21, p. 344-349. Leger, J., Redon, A., Chateau, C. Compressions of synthetic pyrope, spessartine and uvarovite garnets up to 25 GPa // Physics and Chemistry of Minerals, 1990, v. 17, p. 161-167. Milman, V., Akhmatskaya, E., Nobes, R., Winkler, B., Pickard, C., White, J. Systematic ab initio study of the compressibility of silicate garnets // Acta Crystallographica Section B: Structural Science, 2001, v. 57, p. 163-177. Yang, H., Konzett, J., Frost, D.J., Downs, R.T. X-ray diffraction and Raman spectroscopic study of clinopyroxenes with six-coordinated Si in the Na(Mg0.5Si0.5)Si2O6-NaAlSi2O6 system // American Mineralogist, 2009, v. 94, p. 942-949.
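
A note on method: bulk moduli like those quoted above are conventionally obtained by fitting pressure-volume data from in situ compression experiments to an equation of state. As an illustration only (the third-order Birch-Murnaghan form and the helper name below are our assumptions, not details given in the abstract), a minimal sketch in Python:

```python
def birch_murnaghan_pressure(v_ratio, k0, k0_prime=4.0):
    """Pressure in GPa at compression v_ratio = V/V0, from the 3rd-order
    Birch-Murnaghan equation of state with bulk modulus k0 in GPa."""
    eta = v_ratio ** (-1.0 / 3.0)  # eta = (V0/V)^(1/3)
    return 1.5 * k0 * (eta**7 - eta**5) * (
        1.0 + 0.75 * (k0_prime - 4.0) * (eta**2 - 1.0))

# With the knorringite value quoted in the text (K0 = 154 GPa), a 1%
# volume reduction corresponds to a pressure of roughly 1.6 GPa.
print(round(birch_murnaghan_pressure(0.99, 154.0), 2))
```

A stiffer garnet such as uvarovite (162 GPa) requires proportionally higher pressure for the same compression, which is how fitted bulk moduli rank the rigidity of the structures.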

  8. ESA's new view of the Milky Way - in gamma rays!

    NASA Astrophysics Data System (ADS)

    2003-11-01

    A portion of Integral's gamma-ray map of the galaxy (Credits: ESA/SPI team). This false colour picture was taken by the spectrometer on board Integral (SPI) between December 2002 and March 2003. The yellow dots correspond to bright known gamma-ray sources, whilst blue areas indicate regions of low emission. Data similar to these, but in a higher energy range, have been used to study where aluminium and iron are produced in the Galaxy. Since its formation from a cloud of hydrogen and helium gas, around 12 000 million years ago, the Milky Way has gradually been enriched with heavier chemical elements. This has allowed planets and, indeed, life on Earth to form. Today, one of those heavier elements - radioactive aluminium - is spread throughout the Galaxy and, as it decays into magnesium, gives out gamma rays at an energy known as the '1809 keV line'. Integral has been mapping this emission with the aim of understanding exactly what is producing all this aluminium. In particular, Integral is looking at the aluminium 'hot spots' that dot the Galaxy to determine whether these are caused by individual celestial objects or the chance alignment of many objects. Astronomers believe that the most likely sources of the aluminium are supernovae (exploding high-mass stars) and, since the decay time of the aluminium is around one million years, Integral's map shows how many stars have died in recent celestial history. Other possible sources of the aluminium include 'red giant' stars or hot blue stars that give out the element naturally. To decide between these options, Integral is also mapping radioactive iron, which is only produced in supernovae. Theories suggest that, during a supernova blast, aluminium and iron should be produced together in the same region of the exploding star. 
Thus, if the iron's distribution coincides with that of the aluminium, it will prove that the overwhelming majority of aluminium indeed comes from supernovae. These measurements are difficult and have not been possible so far, since the gamma-ray signature of radioactive iron is about six times fainter than that of the aluminium. However, as ESA's powerful Integral observatory accumulates more data in the course of the next year, it will finally be possible to reveal the signature of radioactive iron. This test will tell astronomers whether their theories of how elements form are correct. In addition to these maps, Integral is also looking deeply into the centre of the Galaxy, to make the most detailed map ever of 'antimatter' there. Antimatter is like a mirror image of normal matter and is produced during extremely energetic atomic processes: for example, the radioactive decay of aluminium. Its signature is known as the '511 keV line'. Even though Integral's observations are not yet complete, they show that there is too much antimatter in the centre of the Galaxy to be coming from aluminium decay alone. They also show clearly that there must be many sources of antimatter because it is not concentrated around a single point. There are many possible sources for this antimatter. As well as supernovae, old red stars and hot blue stars, there are jets from neutron stars and black holes, stellar flares, gamma-ray bursts and interaction between cosmic rays and the dusty gas clouds of interstellar space. Chris Winkler, Integral's Project Scientist, says: "We have collected excellent data in the first few months of activity but we can and will do much more in the next year. Integral's accuracy and sensitivity have already exceeded our expectations and, in the months to come, we could get the answers to some of astronomy's most intriguing questions." 
Note to editors: These and other preliminary results, plus a thorough description of the Integral spacecraft and mission are published this month in a dedicated issue of the journal Astronomy and Astrophysics. At its 105th meeting on 6 October 2003, ESA's Science Programme Committee unanimously decided to extend the Integral mission until December 2008. The International Gamma Ray Astrophysics Laboratory (Integral) is the first space observatory that can simultaneously observe celestial objects in gamma rays, X-rays and visible light. Integral was launched on a Russian Proton rocket on 17 October 2002 into a highly elliptical orbit around Earth. Its principal targets include regions of the galaxy where chemical elements are being produced and compact objects, such as black holes. SPI measures the energy of incoming gamma rays with extraordinary accuracy. It is more sensitive to faint radiation than any previous gamma ray instrument and allows the precise nature of gamma ray sources to be determined. SPI's Principal Investigators are: J.-P. Roques, (CESR Toulouse, France), V. Schönfelder (MPE Garching, Germany).
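
The '1809 keV' and '511 keV' labels used above are photon energies; the corresponding wavelength follows from lambda = h*c/E. A small conversion sketch (the helper name is ours; the constants are standard CODATA values):

```python
# Convert a gamma-ray line energy in keV to its wavelength in metres.
H = 6.62607015e-34           # Planck constant, J s
C = 2.99792458e8             # speed of light, m/s
J_PER_KEV = 1.602176634e-16  # joules per keV

def line_wavelength_m(energy_kev):
    """Wavelength of a photon with the given energy, via lambda = h*c/E."""
    return H * C / (energy_kev * J_PER_KEV)

print(line_wavelength_m(1809))  # aluminium-decay line: about 0.69 pm
print(line_wavelength_m(511))   # annihilation line: about 2.43 pm
```

Both wavelengths are far smaller than an atomic diameter, which is one reason gamma-ray telescopes cannot focus with conventional lenses or mirrors.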

  9. Aerodynamics and Ecomorphology of Flexible Feathers and Morphing Bird Wings

    NASA Astrophysics Data System (ADS)

    Klaassen van Oorschot, Brett

    Birds are talented fliers capable of vertical take-off and landing, navigating turbulent air, and flying thousands of miles without rest. How is this possible? What allows birds to exploit the aerial environment with such ease? In part, it may be because bird wings are unlike any engineered wing. They are flexible, strong, lightweight, and dynamically capable of changes in shape on a nearly instantaneous basis (Rayner, 1988; Tobalske, 2007). Moreover, much of this change is passive, modulated only by changes in airflow angle and velocity. Birds actively morph their wings and their feathers morph passively in response to airflow to meet aerodynamic demands. Wings are highly adapted to myriad aeroecological factors and aerodynamic conditions (e.g. Lockwood et al., 1998; Bowlin and Winkler, 2004). This dissertation contains the results of my research on the complexities of morphing avian wings and feathers. I chose to study three related-but-discrete aspects of the avian wing: 1) the aerodynamics of morphing wings during take-off and gliding flight, 2) the presence and significance of wing tip slots across the avian clade, and 3) the aerodynamic role of the emarginate primary feathers that form these wing tip slots. These experiments ask fundamental questions that have intrigued me since childhood: Why do birds have different wing shapes? And why do some birds have slotted wing tips? It's fair to say that you will not find definitive answers here--rather, you will find the methodical, incremental addition of new hypotheses and empirical evidence which will serve future researchers in their own pursuits of these questions. The first chapter explores active wing morphing in two disparate aerodynamic regimes: low-advance ratio flapping (such as during takeoff) and high-advance ratio gliding. This chapter was published in the Journal of Experimental Biology (Klaassen van Oorschot et al., 2016) with the help of an undergraduate researcher, Emily Mistick. 
We found that wing shape affected performance during flapping but not gliding flight. Extended wings outperformed swept wings by about a third in flapping flight. This finding contrasts with previous work that showed wing shape did not affect performance in flapping flight (Usherwood and Ellington, 2002a, 2002b). This work provided key insights that inspired the second and third chapters of my dissertation. The second chapter examines the significance of wing tip slots across 135 avian species, ranging from small passerines to large seabirds. This research was completed with the help of an undergraduate international researcher, Ho Kwan Tang, and is currently in press at the Journal of Morphology (Klaassen van Oorschot, in press). These slots are caused by asymmetric emarginations, in which vane material is missing from the leading and trailing edges of the primary feathers. We used a novel metric of primary feather emargination that allowed us to show that wing tip slots are nearly ubiquitous across the avian clade. We also showed that emargination is segregated according to habitat and behavioral metrics like flight style. Finally, we showed that emargination scaled with mass. These findings illustrated that wing tip slots may be an adaptation for efficacy during vertical takeoff rather than efficiency during gliding flight. In the third chapter, I sought to better understand the function of these slotted primary feathers. In an effort to bridge biology and aeronautics, I collaborated with Richard Choroszucha, an aeronautical engineer from the University of Michigan, on this work. These feathers deflect under aerodynamic load, and it has been hypothesized that they reduce induced drag during gliding flight (Tucker, 1993, 1995). We exposed individual primary feathers to different speeds in the wind tunnel and measured deflection modes such as bend, twist, and sweep. 
We found that feather deflection reoriented force, resulting in increased lateral stability and delayed stall characteristics compared to a rigid airfoil. These findings lay the foundation for future biomimetic applications of passive morphing-wing aircraft. I aim to submit this chapter for publication at Bioinspiration & Biomimetics in the summer of 2017. The following dissertation represents my systematic discovery of avian aerodynamics and follows my progression as a scientist. Combined, the following chapters provide novel insight into the complex nature of morphing avian wings.

  10. ESA's Integral discovers hidden black holes

    NASA Astrophysics Data System (ADS)

    2003-10-01

    An artist's impression of the mechanisms in an interacting binary system. The supermassive companion star (on the right-hand side) ejects a lot of gas in the form of 'stellar wind'. The compact black hole orbits the star and, due to its strong gravitational attraction, collects a lot of the gas. Some of it is funnelled and accelerated into a hot disc. This releases a large amount of energy in all spectral bands, from gamma rays through to visible and infrared. However, the remaining gas surrounding the black hole forms a thick cloud which blocks most of the radiation. Only the very energetic gamma rays can escape and be detected by Integral. XMM-Newton spacecraft (Credits: ESA, illustration by Ducros). Detecting the Universe's hot spots. These are binary systems, probably including a black hole or a neutron star, embedded in a thick cocoon of cold gas. They have remained invisible so far to all other telescopes. Integral was launched one year ago to study the most energetic phenomena in the universe. Integral detected the first of these objects, called IGR J16318-4848, on 29 January 2003. Although astronomers did not know its distance, they were sure it was in our Galaxy. Also, after some analysis, researchers concluded that the new object could be a binary system comprising a compact object, such as a neutron star or a black hole, and a very massive companion star. When gas from the companion star is accelerated and swallowed by the more compact object, energy is released at all wavelengths, from the gamma rays through to visible and infrared light. About 300 binary systems like those are known to exist in our galactic neighbourhood, and IGR J16318-4848 could simply have been one more. 
But something did not fit: why had this particular object not been discovered before? Astronomers, who have been observing the object regularly, suspect that it had remained invisible because a very thick shell of obscuring material must surround it. If that were the case, only the most energetic radiation from the object could get through the shell; less-energetic radiation would be blocked. That could explain why space telescopes that are sensitive only to low-energy radiation had overlooked the object, while Integral, specialised in detecting very energetic emissions, did see it. To test their theory, astronomers turned to ESA's XMM-Newton space observatory, which observes the sky in the X-ray wavelengths. As well as being sensitive to high-energy radiation, XMM-Newton is also able to check for the presence of obscuring material. Indeed, XMM-Newton detected this object last February, as well as the existence of a dense 'cocoon' of cold gas with a diameter similar to that of the Earth's orbit around the Sun. This obscuring material forming the cocoon is probably 'stellar wind', namely gas ejected by the supermassive companion star. Astronomers think that this gas may be accreted by the compact black hole, forming a dense shell around it. This obscuring cloud traps most of the energy produced inside it. The main author of these results, Roland Walter of the Integral Science Data Centre, Switzerland, explained: "Only photons with the highest energies [above 10 keV] could escape from that cocoon. IGR J16318-4848 has therefore not been detected by surveys performed at lower energies, nor by previous gamma-ray missions that were much less sensitive than Integral." The question now is to find out how many of these objects lurk in the Galaxy. XMM-Newton and Integral together are the perfect tools to do the job. They have already discovered two more new sources embedded in obscuring material. Future observations are planned. 
Christoph Winkler, ESA Project Scientist for Integral, said: "These early examples of using two complementary ESA high-energy missions, Integral and XMM-Newton, show the potential for future discoveries in high-energy astrophysics." Notes to Editors: The paper explaining these results will be published in November in a special issue of Astronomy and Astrophysics dedicated to Integral, on the occasion of its first anniversary. Integral The International Gamma Ray Astrophysics Laboratory (Integral) is the first space observatory that can simultaneously observe celestial objects in gamma rays, X-rays and visible light. Integral was launched on a Russian Proton rocket on 17 October 2002 into a highly elliptical orbit around Earth. Its principal targets include regions of the galaxy where chemical elements are being produced and compact objects, such as black holes. XMM-Newton XMM-Newton can detect more X-ray sources than any previous satellite and is helping to solve many cosmic mysteries of the violent Universe, from black holes to the formation of galaxies. It was launched on 10 December 1999, using an Ariane-5 rocket from French Guiana. It is expected to return data for a decade. XMM-Newton's high-tech design uses over 170 wafer-thin cylindrical mirrors spread over three telescopes. Its orbit takes it almost a third of the way to the Moon, so that astronomers can enjoy long, uninterrupted views of celestial objects.

  11. ESA presents INTEGRAL, its space observatory for Gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    1998-09-01

    A unique opportunity for journalists and cameramen to view INTEGRAL will be provided at ESA/ESTEC, Noordwijk, the Netherlands on Tuesday 22 September. On show will be the full-size structural thermal model which is now being examined in ESA's test centre. Following introductions to the project, the INTEGRAL spacecraft can be seen, filmed and photographed in its special clean room environment. Media representatives wishing to participate in the visit to ESA's test centre and the presentation of INTEGRAL are kindly requested to return by fax the attached registration form to ESA Public Relations, Tel. +33 (0) 1.53.69.71.55 - Fax. +33 (0) 1.53.69.76.90. For details please see the attached programme. Gamma-ray astronomy - why? Gamma-rays cannot be detected from the ground since the Earth's atmosphere shields us from highly energetic radiation. Only space technology has made gamma-ray astronomy possible. To avoid background radiation effects, INTEGRAL will spend most of its time in an orbit outside the Earth's radiation belts, above an altitude of 40 000 km. Gamma-rays are the highest energy form of electromagnetic radiation. Therefore gamma-ray astronomy explores the most energetic phenomena occurring in nature and addresses some of the most fundamental problems in physics. We know for instance that most of the chemical elements in our bodies come from long-dead stars. But how were these elements formed? INTEGRAL will register gamma-ray evidence of element-making. Gamma-rays also appear when matter squirms in the intense gravity of collapsed stars or black holes. One of the most important scientific objectives of INTEGRAL is to study such compact objects as neutron stars or black holes. Besides stellar black holes there may exist much bigger specimens of these extremely dense objects. Most astronomers believe that giant black holes may lurk in the heart of our Milky Way, as in the centres of other galaxies. INTEGRAL will have to find evidence of these exotic objects. 
Even more strange than the energetic radiation coming from the centre of distant galaxies are flashes of extremely powerful radiation that suddenly appear somewhere on the gamma-ray sky and disappear again after a short time. These gamma-ray bursts seem to be the biggest observed explosions in the Universe. But nobody knows their source. Integral will help to solve this long-standing mystery. ESA, the pioneer in gamma-ray astronomy. The satellite, as it can now be seen at ESA's test centre, is five metres high and weighs more than four tonnes. Two main instruments observe the gamma-rays. An imager will give the sharpest gamma-ray images. It is provided by a consortium led by an Italian scientist. Gamma-rays ignore lenses and mirrors, so INTEGRAL makes its images with so-called coded masks. A coded-mask telescope is basically a pinhole camera, but with a larger aperture, i.e. many pinholes. A spectrometer will gauge gamma-ray energies extremely precisely. It is developed by a team of scientists under joint French-German leadership and will be 100 times more sensitive than the previous high spectral resolution space instrument. It is made of a high-purity germanium detector that has to be cooled down to minus 188 degrees Celsius. These two gamma-ray instruments are supported by two monitor instruments that play a crucial role in the detection and identification of the gamma-ray sources. An X-ray monitor developed in Denmark will observe X-rays, still powerful but less energetic than gamma-rays. An optical telescope provided by Spain will observe the visible light emitted by the energetic objects. Switzerland will host the Integral Science Data Centre which will preprocess and distribute the scientific data. The mission is conceived as an observatory led by ESA, with Russia contributing the launcher and NASA providing tracking support with its Deep Space Network. Alenia Aerospazio in Turin, Italy, is ESA's prime contractor for building INTEGRAL. 
Launch by a Russian Proton rocket from Baikonur is currently scheduled for 2001. ESA pioneered gamma-ray astronomy in space with its COS-B satellite (1975). Russia's Granat (1989) and NASA's Compton GRO (1991) followed. But INTEGRAL will be better still. With this mission ESA will further strengthen its lead in gamma-ray astronomy. Principal Investigators: Imager: Pietro Ubertini (IAS, Frascati, Italy); Spectrometer: Gilbert Vedrenne (CESR, Toulouse, France) and Volker Schoenfelder (MPE, Garching, Germany); X-ray monitor: Niels Lund (DSRI, Copenhagen, Denmark); Optical Monitoring Camera: Alvaro Gimenez (INTA, Madrid, Spain); Integral Science Data Centre: Thierry Courvoisier (Geneva Observatory, Switzerland). For further information, please contact: ESA Public Relations Division, Tel: +33(0)1.53.69.71.55, Fax: +33(0)1.53.69.76.90. INTEGRAL MEDIA DAY, Tuesday 22 September 1998, Newton Conference Centre, ESTEC, Noordwijk, Keplerlaan 1 (The Netherlands). Programme: 10:30 Arrival and registration in the Newton Conference Centre; 10:45 Welcome and introduction by David Dale, Director of ESTEC; 10:50 The Scientific Challenge: the mission of INTEGRAL, by Christoph Winkler, INTEGRAL Project Scientist; 11:10 The Technical Challenge: the INTEGRAL spacecraft, by Kai Clausen, INTEGRAL Project Manager; 11:30 The Industrial Challenge, by A. Simeone, Programme Director at Alenia Aerospazio; 11:45 Question/answer session; 12:00 Visit to the INTEGRAL spacecraft; photo and film opportunities, incl. interview opportunities with speakers; 13:00 Informal buffet lunch in the foyer of Conference Centre Newton; 14:30 End of event.
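
The coded-mask idea described in this release (a pinhole camera with many pinholes) can be sketched numerically: each point source casts a shifted copy of the mask pattern onto the detector, and the sky is recovered by cross-correlating the detector counts with a balanced decoding array. A 1D toy model in Python with a random cyclic mask (the mask pattern, array sizes and source position are illustrative assumptions, not INTEGRAL's actual design):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 101
mask = (rng.random(N) < 0.5).astype(float)  # 1 = open element, 0 = closed

sky = np.zeros(N)
sky[30] = 1000.0  # a single point source at position 30

# Each detector pixel j sees the sky through a cyclically shifted mask.
detector = np.array([np.dot(sky, np.roll(mask, -j)) for j in range(N)])

# Decode with the balanced mask (open -> +1, closed -> -1); the source
# position appears as the peak of the cross-correlation.
decoder = 2.0 * mask - 1.0
image = np.array([np.dot(detector, np.roll(decoder, -k)) for k in range(N)])

print(int(np.argmax(image)))  # recovers the source position, 30
```

Real coded masks use specially constructed patterns (uniformly redundant arrays) whose cyclic autocorrelation is exactly flat, so the decoded image carries no random sidelobes.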

  12. PREFACE: The Eighth Liquid Matter Conference The Eighth Liquid Matter Conference

    NASA Astrophysics Data System (ADS)

    Dellago, Christoph; Kahl, Gerhard; Likos, Christos N.

    2012-07-01

    The Eighth Liquid Matter Conference (LMC8) was held at the Universität Wien from 6 to 10 September 2011. Initiated in 1990, the conferences of this series cover a broad range of highly interdisciplinary topics, ranging from simple liquids to soft matter and biophysical systems. The vast spectrum of scientific subjects presented and discussed at the LMC8 is reflected in the themes of the ten symposia:
- Ionic and quantum liquids, liquid metals
- Water, solutions and reaction dynamics
- Liquid crystals
- Polymers, polyelectrolytes, biopolymers
- Colloids
- Films, foams, surfactants, emulsions, aerosols
- Confined fluids, interfacial phenomena
- Supercooled liquids, glasses, gels
- Non-equilibrium systems, rheology, nanofluids
- Biofluids, active matter
This special issue contains scientific papers, authored by participants of the LMC8, which provide a cross-section of the scientific activities in current liquid matter science, as discussed at the conference, and demonstrate the scientific as well as methodological progress made in this field over the past couple of years. 
The Eighth Liquid Matter Conference contents:
The Eighth Liquid Matter Conference - Christoph Dellago, Gerhard Kahl and Christos N Likos
Comparing light-induced colloidal quasicrystals with different rotational symmetries - Michael Schmiedeberg and Holger Stark
Hydrogen bond network relaxation in aqueous polyelectrolyte solutions: the effect of temperature - S Sarti, D Truzzolillo and F Bordi
Equilibrium concentration profiles and sedimentation kinetics of colloidal gels under gravitational stress - S Buzzaccaro, E Secchi, G Brambilla, R Piazza and L Cipelletti
The capillary interaction between two vertical cylinders - Himantha Cooray, Pietro Cicuta and Dominic Vella
Hydrodynamic and viscoelastic effects in polymer diffusion - J Farago, H Meyer, J Baschnagel and A N Semenov
A density-functional theory study of microphase formation in binary Gaussian mixtures - M Carta, D Pini, A Parola and L Reatto
Microcanonical determination of the interface tension of flat and curved interfaces from Monte Carlo simulations - A Tröster and K Binder
Phase diagrams of particles with dissimilar patches: X-junctions and Y-junctions - J M Tavares and P I C Teixeira
The unbearable heaviness of colloids: facts, surprises, and puzzles in sedimentation - Roberto Piazza, Stefano Buzzaccaro and Eleonora Secchi
Exploring water and other liquids at negative pressure - Frédéric Caupin, Arnaud Arvengas, Kristina Davitt, Mouna El Mekki Azouzi, Kirill I Shmulovich, Claire Ramboz, David A Sessoms and Abraham D Stroock
The configurational space of colloidal patchy polymers with heterogeneous sequences - Ivan Coluzza and Christoph Dellago
Repeated sorption of water in SBA-15 investigated by means of in situ small-angle x-ray scattering - M Erko, D Wallacher, G H Findenegg and O Paris
Transition of the hydration state of a surfactant accompanying structural transitions of self-assembled aggregates - M Hishida and K Tanaka
The effects of topology on the structural, dynamic and mechanical properties of network-forming materials - Mark Wilson
Surface tension of an electrolyte-air interface: a Monte Carlo study - Alexandre Diehl, Alexandre P dos Santos and Yan Levin
Water and other tetrahedral liquids: order, anomalies and solvation - B Shadrack Jabes, Divya Nayar, Debdas Dhabal, Valeria Molinero and Charusita Chakravarty
Diffusion coefficient and shear viscosity of rigid water models - Sami Tazi, Alexandru Boţan, Mathieu Salanne, Virginie Marry, Pierre Turq and Benjamin Rotenberg
Phase behaviour of colloidal assemblies on 2D corrugated substrates - Samir El Shawish, Emmanuel Trizac and Jure Dobnikar
Structural properties of dendrimer-colloid mixtures - Dominic A Lenz, Ronald Blaak and Christos N Likos
Fluid-fluid demixing of off-critical colloid-polymer systems confined between parallel plates - E A G Jamie, R P A Dullens and D G A L Aarts
Simulations of nematic homopolymer melts using particle-based models with interactions expressed through collective variables - Kostas Ch Daoulas, Victor Rühle and Kurt Kremer
Smectic shells - Teresa Lopez-Leon, Alberto Fernandez-Nieves, Maurizio Nobili and Christophe Blanc
Intrinsic profiles and the structure of liquid surfaces - P Tarazona, E Chacón and F Bresme
Competing ordered structures formed by particles with a regular tetrahedral patch decoration - Günther Doppelbauer, Eva G Noya, Emanuela Bianchi and Gerhard Kahl
Heterogeneous crystallization in colloids and complex plasmas: the role of binary mobilities - H Löwen, E Allahyarov, A Ivlev and G E Morfill
Isotope effects in water as investigated by neutron diffraction and path integral molecular dynamics - Anita Zeidler, Philip S Salmon, Henry E Fischer, Jörg C Neuefeind, J Mike Simonson and Thomas E Markland
Confined cubic blue phases under shear - O Henrich, K Stratford, D Marenduzzo, P V Coveney and M E Cates
Depletion-induced biaxial nematic states of boardlike particles - S Belli, M Dijkstra and R van Roij
Active Brownian motion tunable by light - Ivo Buttinoni, Giovanni Volpe, Felix Kümmel, Giorgio Volpe and Clemens Bechinger
Structure and stability of charged clusters - Mark A Miller, David A Bonhommeau, Christopher J Heard, Yuyoung Shin, Riccardo Spezia and Marie-Pierre Gaigeot
Non-equilibrium relaxation and tumbling times of polymers in semidilute solution - Chien-Cheng Huang, Gerhard Gompper and Roland G Winkler
Thermophoresis of colloids by mesoscale simulations - Daniel Lüsebrink, Mingcheng Yang and Marisol Ripoll
Computing the local pressure in molecular dynamics simulations - Thomas W Lion and Rosalind J Allen
Gradient-driven fluctuations in microgravity - A Vailati, R Cerbino, S Mazzoni, M Giglio, C J Takacs and D S Cannell

  13. Spatially explicit exposure assessment for small streams in catchments of the orchard growing region `Lake Constance

    NASA Astrophysics Data System (ADS)

    Golla, B.; Bach, M.; Krumpe, J.

    2009-04-01

    1. Introduction
    Small streams differ greatly from the standardised water body used in aquatic risk assessment for the regulation of plant protection products in Germany. The standard water body is static, with a depth of 0.3 m and a width of 1.0 m. No dilution or water replacement takes place. Spray drift is always assumed to occur in the direction of the water body, with no variability in the drift deposition rate (90th percentile spray drift deposition values [2]) and no spray drift filtering by vegetation. The application takes place directly adjacent to the water body. In order to establish a more realistic risk assessment procedure, the Federal Office for Consumer Protection and Food Safety (BVL) and the Federal Environment Agency (UBA) agreed to replace deterministic assumptions with data distributions and spatially explicit data and to introduce probabilistic methods [3, 4, 5]. To capture the spatial and temporal variability in the exposure of small streams, the hydraulic and morphological characteristics of catchments need to be described, as well as the spatial distribution of fields treated with pesticides. As small streams are the dominant type of water body in most German orchard regions, we use the Lake Constance growing region as pilot region.
    2. Materials and methods
    During field surveys we derived basic morphological parameters for small streams in the Lake Constance region. The mean water width/depth ratio is 13, with a mean depth of 0.12 m; the average residence time is 5.6 s/m (n=87) [1]. Orchards are mostly located in the upper parts of the catchments. Based on an authoritative dataset on rivers and streams of Germany (ATKIS DLM25), we constructed a directed network topology for the Lake Constance region. The gradient of the riverbed is calculated for river stretches of > 500 m length. The network for the pilot region comprises 2000 km of rivers and streams, of which 500 km of stream length lie within 150 m of orchards. Within this distance, spray drift exposure with adverse effects is theoretically possible [6]. The network is segmented into approx. 80,000 segments of 25 m length; one segment is the basic element of the exposure assessment. Based on the Manning-Strickler formula and empirically determined relations, two equations were developed to express the width and depth of the streams and the flow velocity [7]. Using Java programming and spatial network analysis within an Oracle 10g/Spatial DBMS, we developed a tool to simulate concentration over time for every single 25 m segment of the stream network. The analysis considers the spatially explicit upstream exposure situations due to the locations of orchards and recovery areas in the catchments. Application on a specific orchard is simulated either according to realistic application patterns or under the simplistic assumption that all orchards are sprayed on the same day.
    3. Results
    The results of the analysis are distributions of time-averaged concentrations (mPEC) for all single stream segments of the stream network. The averaging time window can be defined flexibly between 1 h (mPEC1h) and 24 h (mPEC24h). Spatial network analysis based on georeferenced hydraulic and morphological parameters proved to be a suitable approach for analysing the exposure of streams under more realistic assumptions. The time-varying concentration of single stream segments can be analysed over a vegetation period or a single day. Stream segments which exceed a trigger concentration, or segments with a specific pulse concentration pattern in given time windows, can be identified and addressed by, e.g., implementing additional drift mitigation measures.
    References
    [1] Golla, B., Krumpe, J., Strassemeyer, J., and Gutsche, V. (2008): Refined exposure assessment of small streams in German orchard regions. Part 1. Results of a hydromorphological survey. Journal für Kulturpflanzen (submitted).
    [2] Rautmann, D., Streloke, M., and Winkler, R. (1999): New basic drift values in the authorization procedure for plant protection products, pp. 133-141. In: Workshop on risk management and risk mitigation measures in the context of authorization of plant protection products.
    [3] Klein, A. W., Dechet, F., and Streloke, M. (2003): Probabilistic Assessment Method for Risk Analysis in the Framework of Plant Protection Product Authorisation. Industrieverband Agrar (IVA, 2006), Frankfurt/Main.
    [4] Schulz, R., Stehle, S., Elsaesser, F., Matezki, S., Müller, A., Neumann, M., Ohliger, R., Wogram, J., and Zenker, K. (2008): Geodata-based Probabilistic Risk Assessment and Management of Pesticides in Germany, a Conceptual Framework. IEAM_2008-032R.
    [5] Kubiak, R., Hommen, Bach, M., Classen, G. Fent, H.-G. Frede, A. Gergs, B. Golla, M. Klein, J. Krumpe, S. Matetzki, A. Müller, M. Neumann, T. G. Preuss, H. T. Ratte, M. Roß-Nickoll, S. Reichenberger, C. Schäfers, T. Strauss, A. Toschki, M. Trapp, J. Wogram (2009): A new GIS based approach for the assessment and management of environmental risks of plant protection. SETAC EUROPE, Göteborg.
    [6] Enzian, S., Golla, B. (2006): A method for the identification and classification of "save distance" cropland to the potential drift exposure of pesticides towards surface waters. UBA-Texte.
    [7] Bach, M., Träbing, K., and Frede, H.-G. (2004): Morphological characteristics of small rivers in the context of probabilistic exposure assessment. Nachrichtenblatt des Deutschen Pflanzenschutzdienstes 56.
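As a rough illustration of how the Manning-Strickler formula links the surveyed stream geometry to flow velocity, the sketch below computes a mean velocity for a rectangular cross-section. The Strickler coefficient and the bed gradient are assumed illustrative values, not figures from the study.

```python
import math

def flow_velocity(width_m, depth_m, slope, k_st=30.0):
    """Mean flow velocity via the Manning-Strickler formula,
    v = k_st * R_h**(2/3) * S**(1/2), with hydraulic radius
    R_h = A / P for a rectangular cross-section. The Strickler
    coefficient k_st (m^(1/3)/s) is an assumed value typical of
    natural streams, not a figure from the survey."""
    area = width_m * depth_m                # wetted cross-section A
    perimeter = width_m + 2.0 * depth_m     # wetted perimeter P
    r_h = area / perimeter
    return k_st * r_h ** (2.0 / 3.0) * math.sqrt(slope)

# A stream matching the surveyed means: width/depth ratio of 13 at 0.12 m
# depth, on a hypothetical bed gradient of 0.5 %
v = flow_velocity(width_m=13 * 0.12, depth_m=0.12, slope=0.005)
```

With such equations, width, depth and velocity of every 25 m segment can be derived from the network attributes alone.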

  14. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    NASA Astrophysics Data System (ADS)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.

  15. Extensible packet processing architecture

    DOEpatents

    Robertson, Perry J.; Hamlet, Jason R.; Pierson, Lyndon G.; Olsberg, Ronald R.; Chun, Guy D.

    2013-08-20

    A technique for distributed packet processing includes sequentially passing packets associated with packet flows between a plurality of processing engines along a flow-through data bus linking the plurality of processing engines in series. At least one packet within a given packet flow is marked by a given processing engine to signal to the other processing engines that the given processing engine has claimed the given packet flow for processing. A processing function is applied to each of the packet flows within the processing engines, and the processed packets are output on a time-shared, arbitrated data bus coupled to the plurality of processing engines.
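The flow-claiming scheme above can be sketched in miniature. The engine names, the per-packet mark field and the stand-in processing function are hypothetical; for simplicity the claim mark travels on every packet rather than on one packet per flow, and a real implementation would do this in hardware.

```python
class Engine:
    """One processing engine on the serial flow-through bus (illustrative)."""
    def __init__(self, engine_id, wanted_flows):
        self.engine_id = engine_id
        self.wanted = wanted_flows          # flow ids this engine can process
        self.processed = []

    def handle(self, packet):
        # Claim a still-unclaimed flow of interest by marking the packet.
        if packet["claimed_by"] is None and packet["flow"] in self.wanted:
            packet["claimed_by"] = self.engine_id
        # Apply the processing function only to flows this engine has claimed.
        if packet["claimed_by"] == self.engine_id:
            self.processed.append(packet["payload"].upper())  # stand-in function
        return packet

def run_bus(engines, packets):
    """Pass each packet through every engine in series."""
    for pkt in packets:
        for eng in engines:
            pkt = eng.handle(pkt)

engines = [Engine("A", {1}), Engine("B", {1, 2})]
packets = [{"flow": f, "payload": p, "claimed_by": None}
           for f, p in [(1, "x"), (2, "y"), (1, "z")]]
run_bus(engines, packets)
# Engine A, first on the bus, claims flow 1; engine B is left flow 2.
```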

  16. Processed and ultra-processed foods are associated with lower-quality nutrient profiles in children from Colombia.

    PubMed

    Cornwell, Brittany; Villamor, Eduardo; Mora-Plazas, Mercedes; Marin, Constanza; Monteiro, Carlos A; Baylin, Ana

    2018-01-01

    Objective: To determine if processed and ultra-processed foods consumed by children in Colombia are associated with lower-quality nutrition profiles than less processed foods.
    Design: We obtained information on sociodemographic and anthropometric variables and dietary information through dietary records and 24 h recalls from a convenience sample of the Bogotá School Children Cohort. Foods were classified into three categories: (i) unprocessed and minimally processed foods, (ii) processed culinary ingredients and (iii) processed and ultra-processed foods. We also examined the combination of unprocessed foods and processed culinary ingredients.
    Setting: Representative sample of children from low- to middle-income families in Bogotá, Colombia.
    Subjects: Children aged 5-12 years in the 2011 Bogotá School Children Cohort.
    Results: We found that processed and ultra-processed foods are of lower dietary quality in general. Nutrients that were lower in processed and ultra-processed foods following adjustment for total energy intake included: n-3 PUFA, vitamins A, B12, C and E, Ca and Zn. Nutrients that were higher in energy-adjusted processed and ultra-processed foods compared with unprocessed foods included: Na, sugar and trans-fatty acids, although we also found that some healthy nutrients, including folate and Fe, were higher in processed and ultra-processed foods compared with unprocessed and minimally processed foods.
    Conclusions: Processed and ultra-processed foods generally have unhealthy nutrition profiles. Our findings suggest the categorization of foods based on processing characteristics is promising for understanding the influence of food processing on children's dietary quality. More studies accounting for the type and degree of food processing are needed.

  17. Dynamic control of remelting processes

    DOEpatents

    Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.

    2000-01-01

    An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.
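A toy illustration of combining a model-based estimate with feedback and feedforward control, in the spirit of the apparatus described above. The gains, the first-order process model and the additive disturbance are invented for the sketch and are not taken from the patent.

```python
def control_step(setpoint, estimate, disturbance, integral,
                 kp=2.0, ki=0.5, kf=1.0, dt=1.0):
    """One controller update: PI feedback on the model-estimated process
    variable plus a feedforward term that cancels a measured disturbance.
    All gains are illustrative choices."""
    error = setpoint - estimate
    integral += error * dt
    feedback = kp * error + ki * integral   # feedback on estimated variable
    feedforward = -kf * disturbance         # cancel the measured disturbance
    return feedback + feedforward, integral

# Drive a toy first-order melt-rate model toward a setpoint of 1.0
state, integral = 0.0, 0.0
for _ in range(200):
    u, integral = control_step(1.0, state, disturbance=0.2, integral=integral)
    state += 0.1 * (u + 0.2 - state)   # process: disturbance enters additively
```

The feedforward term removes the disturbance immediately, while the integral action drives the remaining steady-state error of the estimated variable to zero.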

  18. On Intelligent Design and Planning Method of Process Route Based on Gun Breech Machining Process

    NASA Astrophysics Data System (ADS)

    Hongzhi, Zhao; Jian, Zhang

    2018-03-01

    This paper presents an approach to the intelligent design and planning of process routes based on the gun breech machining process, addressing several problems of traditional process routing: the complexity of gun breech machining, the tedium of route design, and the long design cycle of an otherwise unmanageable process route. Based on the gun breech machining process, an intelligent process-route design and planning system is developed using DEST and VC++. The system comprises two functional modules: intelligent process-route design and process-route planning. The intelligent design module analyses the gun breech machining process and summarises breech process knowledge to build the knowledge base and inference engine, from which a gun breech process route is output automatically. Building on the intelligent design module, the final process route is created, edited and managed in the process-route planning module.
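The knowledge-base-plus-inference-engine idea can be illustrated with a deliberately tiny forward-chaining sketch. The features, operations and precedence weights below are hypothetical stand-ins for the breech process knowledge the paper describes.

```python
# Hypothetical knowledge base: each rule maps a part feature to a machining
# operation and a precedence weight used to order the route; none of these
# entries come from the actual breech process knowledge base.
RULES = {
    "bore": ("rough boring", 20),
    "breech_face": ("face milling", 10),
    "locking_thread": ("thread milling", 40),
    "heat_treated": ("finish grinding", 50),
}

def infer_route(features):
    """Tiny inference step: fire every rule whose feature is present in the
    part description, then order the fired operations by precedence."""
    fired = [RULES[f] for f in features if f in RULES]
    return [op for op, _ in sorted(fired, key=lambda rule: rule[1])]

route = infer_route({"bore", "locking_thread", "breech_face"})
# route: ['face milling', 'rough boring', 'thread milling']
```

A planning module would then let an engineer edit and manage the inferred route rather than author it from scratch.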

  19. Gasoline from coal in the state of Illinois: feasibility study. Volume I. Design. [KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-01-01

    Volume 1 describes the proposed plant: KBW gasification process, ICI low-pressure methanol process and Mobil M-gasoline process, and also with ancillary processes, such as oxygen plant, shift process, RECTISOL purification process, sulfur recovery equipment and pollution control equipment. Numerous engineering diagrams are included. (LTN)

  20. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
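A compact sketch of the two steps the claim describes: interleaved chunk-wise copying into shared memory, followed by a local elementwise reduction. Buffer contents, the chunk size and the use of summation as the reduction operation are illustrative choices.

```python
def interleave(buf_a, buf_b, chunk=2):
    """Copy two input buffers into one interleaved shared-memory buffer,
    alternating chunk-sized pieces, as the two reduction cores do."""
    out = []
    for i in range(0, len(buf_a), chunk):
        out += buf_a[i:i + chunk] + buf_b[i:i + chunk]
    return out

def local_reduce(red0, red1, net_write, net_read):
    """Elementwise local reduction (here a sum) over the four cores' input
    buffers. In the patent each reduction core handles every other chunk of
    the interleaved buffer in parallel; this sketch collapses the parallel
    work into a single sequential pass."""
    shared = interleave(red0, red1)         # step 1: interleaved copy
    assert len(shared) == len(red0) + len(red1)
    return [a + b + c + d for a, b, c, d in zip(red0, red1, net_write, net_read)]

result = local_reduce([1, 2, 3, 4], [10, 20, 30, 40],
                      [100, 200, 300, 400], [1000, 2000, 3000, 4000])
# result: [1111, 2222, 3333, 4444]
```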

  1. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  2. Situation awareness acquired from monitoring process plants - the Process Overview concept and measure.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-07-01

    We introduce Process Overview, a situation awareness characterisation of the knowledge derived from monitoring process plants. Process Overview is based on observational studies of process control work in the literature. The characterisation is applied to develop a query-based measure called the Process Overview Measure. The goal of the measure is to improve the coupling between situation and awareness according to process plant properties and operator cognitive work. A companion article presents the empirical evaluation of the Process Overview Measure in a realistic process control setting. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring situation awareness based on data collected by process experts. Practitioner Summary: The Process Overview Measure is a query-based measure for assessing operator situation awareness from monitoring process plants in representative settings.

  3. 43 CFR 2804.19 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How will BLM process my Processing... process my Processing Category 6 application? (a) For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...

  4. 43 CFR 2804.19 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How will BLM process my Processing... process my Processing Category 6 application? (a) For Processing Category 6 applications, you and BLM must enter into a written agreement that describes how BLM will process your application. The final agreement...

  5. Process Correlation Analysis Model for Process Improvement Identification

    PubMed Central

    Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses; based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the basis. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort involved and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170

  6. Process correlation analysis model for process improvement identification.

    PubMed

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment identifying strengths and weaknesses; based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the basis. CMMI defines a set of process areas involved in software development and what is to be carried out in each process area in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort involved and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  7. Cleanliness of Ti-bearing Al-killed ultra-low-carbon steel during different heating processes

    NASA Astrophysics Data System (ADS)

    Guo, Jian-long; Bao, Yan-ping; Wang, Min

    2017-12-01

    During the production of Ti-bearing Al-killed ultra-low-carbon (ULC) steel, two different heating processes were used when the converter tapping temperature or the molten steel temperature in the Ruhrstahl-Heraeus (RH) process was low: heating by Al addition during RH decarburization with final deoxidation at the end of decarburization (process-I), and increasing the oxygen content at the end of RH decarburization with heating and final deoxidation by a one-time Al addition (process-II). Temperature increases of 10°C achieved by the two processes were studied; the results showed that the two heating processes achieve the same heating effect. The T.[O] content in the slab and in the refining process was better controlled by process-I than by process-II. Statistical analysis of inclusions showed that the number of inclusions in the slab obtained by process-I was substantially lower than that in the slab obtained by process-II. For process-I, the Al2O3 inclusions produced by the Al added for heating were substantially removed by the end of decarburization. The amounts of inclusions were substantially greater for process-II than for process-I at the different refining stages because of the higher dissolved oxygen concentration in process-II. Industrial test results showed that process-I was more beneficial for improving the cleanliness of the molten steel.

  8. Application of agent-based system for bioprocess description and process improvement.

    PubMed

    Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J

    2010-01-01

    Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze the bioprocesses based on a whole process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These include the description of the whole process behavior, evaluating process operating conditions, monitoring of the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. 
The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers

  9. Electricity from sunlight. [low cost silicon for solar cells

    NASA Technical Reports Server (NTRS)

    Yaws, C. L.; Miller, J. W.; Lutwack, R.; Hsu, G.

    1978-01-01

    The paper discusses a number of new unconventional processes proposed for the low-cost production of silicon for solar cells. Consideration is given to: (1) the Battelle process (Zn/SiCl4), (2) the Battelle process (SiI4), (3) the Silane process, (4) the Motorola process (SiF4/SiF2), (5) the Westinghouse process (Na/SiCl4), (6) the Dow Corning process (C/SiO2), (7) the AeroChem process (SiCl4/H atom), and (8) the Stanford process (Na/SiF4). Preliminary results indicate that neither the conventional process nor the SiI4 process can meet the project goal of $10/kg by 1986. Preliminary cost evaluation results for the Zn/SiCl4 process are favorable.

  10. Composing Models of Geographic Physical Processes

    NASA Astrophysics Data System (ADS)

    Hofer, Barbara; Frank, Andrew U.

    Processes are central for geographic information science; yet geographic information systems (GIS) lack capabilities to represent process related information. A prerequisite to including processes in GIS software is a general method to describe geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, as shown by means of an example.
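The partial difference form of a physical process mentioned above can be made concrete with a minimal sketch: 1-D diffusion (a prototypical spreading process such as heat or solute transport), written as an explicit difference scheme. The coefficient, grid and boundary treatment are illustrative choices, not taken from the paper.

```python
def diffuse(field, d=0.2, steps=10):
    """Explicit partial difference form of the 1-D diffusion equation,
    u[t+1][i] = u[t][i] + d * (u[t][i-1] - 2*u[t][i] + u[t][i+1]).
    Boundary cells are held fixed; d <= 0.5 is required for stability."""
    u = list(field)
    for _ in range(steps):
        u = [u[0]] + [u[i] + d * (u[i-1] - 2 * u[i] + u[i+1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# A point disturbance spreads symmetrically outward over time
u = diffuse([0.0, 0.0, 100.0, 0.0, 0.0], steps=30)
```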

  11. Process-based tolerance assessment of connecting rod machining process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, P. Srinivasa; Surendra Babu, B.

    2016-06-01

    Process tolerancing based on process capability studies is an optimistic and pragmatic approach to determining manufacturing process tolerances. On adopting the define-measure-analyze-improve-control approach, the process potential capability index (Cp) and the process performance capability index (Cpk) of the identified process characteristics of the connecting rod machining process are found to exceed the industry benchmark of 1.33, i.e., the four sigma level. The tolerance chain diagram methodology is applied to the connecting rod in order to verify the manufacturing process tolerances at the various operations of the connecting rod manufacturing process. This paper bridges the gap between the existing dimensional tolerances obtained via tolerance charting and the process capability studies of the connecting rod component. Finally, a process tolerancing comparison has been made using tolerance capability expert software.
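The two indices can be computed directly from their standard definitions. The specification limits and process statistics below are hypothetical, chosen only to show values clearing the 1.33 benchmark mentioned above.

```python
def process_capability(mean, sigma, lsl, usl):
    """Process potential and performance capability indices:
    Cp  = (USL - LSL) / (6 * sigma)
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
    Cpk <= Cp, with equality only for a perfectly centred process."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical bore diameter: spec 43.00 +/- 0.03 mm, process slightly off-centre
cp, cpk = process_capability(mean=43.005, sigma=0.003, lsl=42.97, usl=43.03)
# cp ~ 3.33, cpk ~ 2.78; both clear the 1.33 (four sigma) benchmark
```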

  12. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2014-01-07

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.
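The pre-established per-process message buffers are what make it possible to send before the receiver exists. A minimal sketch of that protocol, with a Python dict standing in for the shared-memory region and all names invented:

```python
class Node:
    """Sketch of intranode messaging with message buffers pre-established in
    shared memory, so a send needs no check that the receiver is initialized."""
    def __init__(self, n_procs):
        # 'Shared memory' region: one message buffer per process rank,
        # established up front for every process that will be initialized.
        self.buffers = {rank: [] for rank in range(n_procs)}
        self.initialized = set()

    def send(self, dst, message):
        # Store directly in the destination's buffer; no initialization check.
        self.buffers[dst].append(message)

    def init_process(self, rank):
        # On initialization the process retrieves a pointer to its own buffer
        # and can read messages that arrived before it existed.
        self.initialized.add(rank)
        return list(self.buffers[rank])

node = Node(n_procs=2)
node.send(1, "hello before init")   # process 1 has not been initialized yet
pending = node.init_process(1)      # the early message is waiting
```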

  13. Intranode data communications in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2013-07-23

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  14. Canadian Libraries and Mass Deacidification.

    ERIC Educational Resources Information Center

    Pacey, Antony

    1992-01-01

    Considers the advantages and disadvantages of six mass deacidification processes that libraries can use to salvage printed materials: the Wei T'o process, the Diethyl Zinc (DEZ) process, the FMC (Lithco) process, the Book Preservation Associates (BPA) process, the "Bookkeeper" process, and the "Lyophilization" process. The…

  15. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty of process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance; the index includes the variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the recharge and geology processes in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
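A Monte Carlo sketch of such an index: the variance of the conditional mean output over one process (its model choice plus its parameters), divided by total output variance. The two recharge models, two geology models, their equal averaging weights and the toy output function are all invented for illustration and are not the paper's models.

```python
import random

rng = random.Random(7)

# Two alternative models per process (model uncertainty), each drawing its
# own random parameters (parametric uncertainty); all forms are illustrative.
recharge_models = [lambda: 0.8 * rng.uniform(0.5, 1.5),   # fraction of precip
                   lambda: rng.uniform(0.5, 1.5) - 0.2]   # precip minus deficit
geology_models  = [lambda: rng.gauss(2.0, 0.3),           # homogeneous K
                   lambda: rng.gauss(2.0, 0.8)]           # heterogeneous K

def draw(models):
    return rng.choice(models)()   # model averaging with equal weights

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def process_sensitivity(target, other, n=300):
    """Variance of the conditional mean output over the target process,
    divided by total output variance (toy output: recharge * conductivity)."""
    cond_means = []
    for _ in range(n):
        t = draw(target)          # fix one realization of the target process
        cond_means.append(mean([t * draw(other) for _ in range(n)]))
    total = [draw(target) * draw(other) for _ in range(4 * n)]
    return var(cond_means) / var(total)

ps_recharge = process_sensitivity(recharge_models, geology_models)
ps_geology = process_sensitivity(geology_models, recharge_models)
```

Both indices lie between 0 and 1, and together they attribute output variance to the two processes including both sources of uncertainty.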

  16. Depth-of-processing effects on priming in stem completion: tests of the voluntary-contamination, conceptual-processing, and lexical-processing hypotheses.

    PubMed

    Richardson-Klavehn, A; Gardiner, J M

    1998-05-01

    Depth-of-processing effects on incidental perceptual memory tests could reflect (a) contamination by voluntary retrieval, (b) sensitivity of involuntary retrieval to prior conceptual processing, or (c) a deficit in lexical processing during graphemic study tasks that affects involuntary retrieval. The authors devised an extension of incidental test methodology--making conjunctive predictions about response times as well as response proportions--to discriminate among these alternatives. They used graphemic, phonemic, and semantic study tasks, and a word-stem completion test with incidental, intentional, and inclusion instructions. Semantic study processing was superior to phonemic study processing in the intentional and inclusion tests, but semantic and phonemic study processing produced equal priming in the incidental test, showing that priming was uncontaminated by voluntary retrieval--a conclusion reinforced by the response-time data--and that priming was insensitive to prior conceptual processing. The incidental test nevertheless showed a priming deficit following graphemic study processing, supporting the lexical-processing hypothesis. Adding a lexical decision to the 3 study tasks eliminated the priming deficit following graphemic study processing, but did not influence priming following phonemic and semantic processing. The results provide the first clear evidence that depth-of-processing effects on perceptual priming can reflect lexical processes, rather than voluntary contamination or conceptual processes.

  17. Improving operational anodising process performance using simulation approach

    NASA Astrophysics Data System (ADS)

    Liong, Choong-Yeun; Ghazali, Syarah Syahidah

    2015-10-01

    The use of aluminium is widespread, especially in the transportation, electrical and electronics, architectural, automotive, and engineering sectors. The anodizing process is therefore important for making aluminium durable, attractive, and weather resistant. This research focuses on the anodizing process operations in the manufacturing and supply of aluminium extrusion. The data required for the development of the model were collected from observations and interviews conducted in the study. To study the current system, the processes involved in anodizing are modeled using Arena 14.5 simulation software. They comprise five main processes, namely the degreasing, etching, desmut, anodizing, and sealing processes, plus 16 other processes. The results were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on comparisons between the improvement methods, productivity could be increased by reallocating the workers and reducing loading time.

  18. Value-driven process management: using value to improve processes.

    PubMed

    Melnyk, S A; Christensen, R T

    2000-08-01

    Every firm can be viewed as consisting of various processes. These processes affect everything the firm does, from accepting orders and designing products to scheduling production. In many firms, the management of processes reflects considerations of efficiency (cost) rather than effectiveness (value). In this article, we introduce a well-structured process for managing processes that begins not with the process, but with the customer, the product, and the concept of value. This process progresses through a number of steps, including defining value, generating the appropriate metrics, identifying the critical processes, mapping and assessing the performance of these processes, and identifying long- and short-term areas for action. What makes the approach presented in this article so powerful is that it explicitly links the customer to the process and evaluates the process in terms of its ability to effectively serve the customer.

  19. Method for routing events from key strokes in a multi-processing computer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, D.A.; Rustici, E.; Carter, K.H.

    1990-01-23

    The patent describes a method of routing user input in a computer system which concurrently runs a plurality of processes. It comprises: generating keycodes representative of keys typed by a user; distinguishing generated keycodes by looking up each keycode in a routing table which assigns each possible keycode to an individual assigned process of the plurality of processes, one of which is a supervisory process; then, sending each keycode to its assigned process until a keycode assigned to the supervisory process is received; sending keycodes received subsequent to the keycode assigned to the supervisory process to a buffer; next, providing additional keycodes to the supervisory process from the buffer until the supervisory process has completed operation; and sending keycodes stored in the buffer to processes assigned therewith after the supervisory process has completed operation.
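
    A loose sketch of the claimed routing logic follows. The routing table, process names, and keycodes are invented for illustration (real keycodes would be scan codes, not characters); the sketch only mirrors the claimed sequence: route each keycode by table lookup, buffer keycodes once the supervisory process is active, and flush the buffer to the assigned processes after the supervisor completes.

```python
from collections import deque

# Hypothetical routing table: keycode -> assigned process name.
# One assigned process ("supervisor") is the supervisory process.
ROUTING = {'a': 'editor', 'b': 'shell', 'ESC': 'supervisor'}

class Router:
    def __init__(self, routing):
        self.routing = routing
        self.buffer = deque()        # holds keycodes while the supervisor runs
        self.supervisor_busy = False
        self.delivered = []          # (process, keycode) log, for illustration

    def send(self, process, keycode):
        self.delivered.append((process, keycode))
        if process == 'supervisor':
            self.supervisor_busy = True   # supervisor starts operating

    def on_key(self, keycode):
        """Route one generated keycode via the routing table."""
        if self.supervisor_busy:
            self.buffer.append(keycode)   # queue until the supervisor finishes
        else:
            self.send(self.routing[keycode], keycode)

    def supervisor_done(self):
        """Supervisor completed: flush buffered keycodes to their processes."""
        self.supervisor_busy = False
        while self.buffer and not self.supervisor_busy:
            k = self.buffer.popleft()
            self.send(self.routing[k], k)
```

    Keys typed while the supervisory process is operating are queued and delivered only after `supervisor_done()`, matching the claimed buffer-and-flush sequence; a buffered keycode that itself targets the supervisor re-enters supervisory mode and stops the flush.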

  20. Issues Management Process Course # 38401

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binion, Ula Marie

    The purpose of this training is to advise Issues Management Coordinators (IMCs) on the revised Contractor Assurance System (CAS) Issues Management (IM) process. Terminal Objectives: Understand the Laboratory’s IM process; Understand your role in the Laboratory’s IM process. Learning Objectives: Describe the IM process within the context of the CAS; Describe the importance of implementing an institutional IM process at LANL; Describe the process flow for the Laboratory’s IM process; Apply the definition of an issue; Use available resources to determine initial screening risk levels for issues; Describe the required major process steps for each risk level; Describe the personnel responsibilities for IM process implementation; Access available resources to support IM process implementation.

  1. Social network supported process recommender system.

    PubMed

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained increasing attention in the field of intelligent business process modeling, where they assist the modeling task. However, most existing technologies use only process structure analysis and do not take the social features of processes into account, even though process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented, and the system implementation is discussed. Finally, experimental evaluations and future work are introduced.

  2. [Definition and stabilization of processes I. Management processes and support in a Urology Department].

    PubMed

    Pascual, Carlos; Luján, Marcos; Mora, José Ramón; Chiva, Vicente; Gamarra, Manuela

    2015-01-01

    The implementation of total quality management models in clinical departments can adapt best to the 2009 ISO 9004 model. An essential part of implementing these models is the establishment of processes and their stabilization. There are four types of processes: key, management, support, and operative (clinical). Management processes have four parts: a process stabilization form, a process procedures form, a medical activities cost estimation form, and a process flow chart. In this paper we detail the creation of an essential process in a surgical department: the management of the surgery waiting list.

  3. T-Check in Technologies for Interoperability: Business Process Management in a Web Services Context

    DTIC Science & Technology

    2008-09-01

    (Snippet, from the report's list of figures:) UML Sequence Diagram; Figure 3: BPMN Diagram of the Order Processing Business Process; Figure 4: T-Check Process for Technology Evaluation; Figure 5: Notional System Architecture; Figure 6: Flow Chart of the Order Processing Business Process; Figure 7: Order Processing Activities. Figure 3 (created with Intalio BPMS Designer [Intalio 2008]) shows a BPMN view of the Order Processing business process that is used in the…

  4. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory authorities, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to areas where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determining the quality of any production process, and it is new to the pharmaceutical tablet production process, where the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable processes, which are candidates for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
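
    As a hedged illustration of the kind of calculation involved, the sketch below computes Cp, Cpk, a corresponding sigma level, and individuals-chart control limits for a simulated batch of tablet weights. The data, specification limits, and the short-term sigma-level convention (3 × Cpk, without the common 1.5-sigma shift) are assumptions for illustration, not values from the study.

```python
import numpy as np

# Hypothetical tablet-weight data (mg) and specification limits.
rng = np.random.default_rng(1)
weights = rng.normal(250.0, 2.0, 500)   # simulated in-control production sample
LSL, USL = 244.0, 256.0                 # lower/upper specification limits

mu, sigma = weights.mean(), weights.std(ddof=1)

# Capability indices: Cp ignores centering, Cpk penalizes off-center means.
Cp  = (USL - LSL) / (6 * sigma)
Cpk = min(USL - mu, mu - LSL) / (3 * sigma)

# Short-term sigma level: distance from the mean to the nearer spec
# limit, measured in process standard deviations.
sigma_level = 3 * Cpk

# Shewhart individuals-chart control limits (mean +/- 3 sigma).
UCL, LCL = mu + 3 * sigma, mu - 3 * sigma

print(f"Cp={Cp:.2f}  Cpk={Cpk:.2f}  sigma level={sigma_level:.2f}")
print(f"control limits: [{LCL:.1f}, {UCL:.1f}]")
```

    In a study like the one above, the same indices would be computed per critical quality attribute and compared (e.g., via Pareto charts) to find the least capable process step.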

  5. A practical approach for exploration and modeling of the design space of a bacterial vaccine cultivation process.

    PubMed

    Streefland, M; Van Herpen, P F G; Van de Waterbeemd, B; Van der Pol, L A; Beuvery, E C; Tramper, J; Martens, D E; Toft, M

    2009-10-15

    A licensed pharmaceutical process is required to be executed within the validated ranges throughout the lifetime of product manufacturing. Changes to the process, especially for processes involving biological products, usually require the manufacturer to demonstrate by new or additional clinical testing that the safety and efficacy of the product remain unchanged. Recent changes in the regulations for pharmaceutical processing allow broader ranges of process settings to be submitted for regulatory approval, the so-called process design space. This means that a manufacturer can optimize the process within the submitted ranges after the product has entered the market, allowing flexible processes. In this article, the applicability of the process design space concept is investigated for the cultivation step of a vaccine against whooping cough. A design of experiments (DoE) is applied to investigate the ranges of critical process parameters that still result in a product that meets specifications. The on-line process data, including near-infrared spectroscopy, are used to build a descriptive model of the processes used in the experimental design. Finally, the data of all processes are integrated in a multivariate batch monitoring model that represents the investigated process design space. This article demonstrates how the general principles of PAT and process design space can be applied to an undefined biological product such as a whole-cell vaccine. The approach to model development described here allows on-line monitoring and control of cultivation batches, assuring in real time that a process is running within the process design space.

  6. Processing approaches to cognition: the impetus from the levels-of-processing framework.

    PubMed

    Roediger, Henry L; Gallo, David A; Geraci, Lisa

    2002-01-01

    Processing approaches to cognition have a long history, from act psychology to the present, but perhaps their greatest boost was given by the success and dominance of the levels-of-processing framework. We review the history of processing approaches, and explore the influence of the levels-of-processing approach, the procedural approach advocated by Paul Kolers, and the transfer-appropriate processing framework. Processing approaches emphasise the procedures of mind and the idea that memory storage can be usefully conceptualised as residing in the same neural units that originally processed information at the time of encoding. Processing approaches emphasise the unity and interrelatedness of cognitive processes and maintain that they can be dissected into separate faculties only by neglecting the richness of mental life. We end by pointing to future directions for processing approaches.

  7. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, the identification is usually based on a deterministic process conceptualization that uses a single model to represent each process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to biased results, in that processes identified as important may not be so in the real world. This study addresses the problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concepts of Sobol sensitivity analysis and model averaging. Similar to the Sobol analysis used to identify important parameters, our new method evaluates the change in variance when a process is fixed at each of its different conceptualizations. The variance accounts for both parametric and model uncertainty using model averaging. The method is demonstrated in a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using the new method, which is mathematically general and can be applied to a wide range of environmental problems.

  8. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  9. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  10. Social Network Supported Process Recommender System

    PubMed Central

    Ye, Yanming; Yin, Jianwei; Xu, Yueshen

    2014-01-01

    Process recommendation technologies have gained increasing attention in the field of intelligent business process modeling, where they assist the modeling task. However, most existing technologies use only process structure analysis and do not take the social features of processes into account, even though process modeling is complex and comprehensive in most situations. This paper studies the feasibility of applying social network research technologies to process recommendation and builds a social network system of processes based on feature similarities. Then, three process matching degree measurements are presented, and the system implementation is discussed. Finally, experimental evaluations and future work are introduced. PMID:24672309

  11. A model for process representation and synthesis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thomas, R. H.

    1971-01-01

    The problem of representing groups of loosely connected processes is investigated, and a model for process representation useful for synthesizing complex patterns of process behavior is developed. There are three parts. The first part isolates the concepts which form the basis for the process representation model by focusing on questions such as: What is a process? What is an event? Should one process be able to restrict the capabilities of another? The second part develops a model for process representation which captures the concepts and intuitions developed in the first part. The model presented is able to describe both the internal structure of individual processes and the interface structure between interacting processes. Much of the model's descriptive power derives from its use of the notion of process state as a vehicle for relating the internal and external aspects of process behavior. The third part demonstrates by example that the model for process representation is a useful one for synthesizing process behavior patterns. In it the model is used to define a variety of interesting process behavior patterns. The dissertation closes by suggesting how the model could be used as a semantic base for a very potent language extension facility.

  12. Process and Post-Process: A Discursive History.

    ERIC Educational Resources Information Center

    Matsuda, Paul Kei

    2003-01-01

    Examines the history of process and post-process in composition studies, focusing on ways in which terms, such as "current-traditional rhetoric,""process," and "post-process" have contributed to the discursive construction of reality. Argues that use of the term post-process in the context of second language writing needs to be guided by a…

  13. Improving operational anodising process performance using simulation approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liong, Choong-Yeun, E-mail: lg@ukm.edu.my; Ghazali, Syarah Syahidah, E-mail: syarah@gapps.kptm.edu.my

    The use of aluminium is widespread, especially in the transportation, electrical and electronics, architectural, automotive, and engineering sectors. The anodizing process is therefore important for making aluminium durable, attractive, and weather resistant. This research focuses on the anodizing process operations in the manufacturing and supply of aluminium extrusion. The data required for the development of the model were collected from observations and interviews conducted in the study. To study the current system, the processes involved in anodizing are modeled using Arena 14.5 simulation software. They comprise five main processes, namely the degreasing, etching, desmut, anodizing, and sealing processes, plus 16 other processes. The results were analyzed to identify the problems or bottlenecks that occurred and to propose improvement methods that can be implemented on the original model. Based on comparisons between the improvement methods, productivity could be increased by reallocating the workers and reducing loading time.

  14. Feller processes: the next generation in modeling. Brownian motion, Lévy processes and beyond.

    PubMed

    Böttcher, Björn

    2010-12-03

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian Motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian Motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving the very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular our simulation technique allows to apply Monte Carlo methods to Feller processes.
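
    A rough sketch of the idea, with practitioners in mind: simulate a stable-like Feller process by drawing each Euler-type increment from the symmetric α-stable Lévy process "frozen" at the current state, using a state-dependent stability index α(x). The sampler is the standard Chambers-Mallows-Stuck formula; the index function and step sizes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def sym_stable(alpha, size):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def alpha_of(x):
    """Hypothetical state-dependent stability index in (1.1, 1.9): the
    process behaves like a different Lévy process at each point in space."""
    return 1.5 + 0.4 * np.tanh(x)

def simulate(x0=0.0, T=1.0, n=1000):
    """Euler-type scheme: each increment is drawn from the Lévy process
    frozen at the current state (state space dependent mixing)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        a = alpha_of(x[k])
        # alpha-stable scaling: an increment over dt scales like dt**(1/a)
        x[k + 1] = x[k] + dt ** (1 / a) * sym_stable(a, 1)[0]
    return x

path = simulate()
```

    Repeating `simulate()` many times is exactly the kind of Monte Carlo use the paper has in mind; a Brownian path is recovered in the special case of a constant index α = 2.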

  15. Feller Processes: The Next Generation in Modeling. Brownian Motion, Lévy Processes and Beyond

    PubMed Central

    Böttcher, Björn

    2010-01-01

    We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian Motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian Motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving the very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular our simulation technique allows to apply Monte Carlo methods to Feller processes. PMID:21151931

  16. AIRSAR Automated Web-based Data Processing and Distribution System

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen

    2005-01-01

    In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.

  17. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric is designed from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments, comparing with related studies, to show that the proposed method is more effective for streamlining the process.

  18. Electrotechnologies to process foods

    USDA-ARS?s Scientific Manuscript database

    Electrical energy is being used to process foods. In conventional food processing plants, electricity drives mechanical devices and controls the degree of process. In recent years, several processing technologies are being developed to process foods directly with electricity. Electrotechnologies use...

  19. Challenges associated with the implementation of the nursing process: A systematic review.

    PubMed

    Zamanzadeh, Vahid; Valizadeh, Leila; Tabrizi, Faranak Jabbarzadeh; Behshid, Mojghan; Lotfi, Mojghan

    2015-01-01

    Nursing process is a scientific approach in the provision of qualified nursing cares. However, in practice, the implementation of this process is faced with numerous challenges. With the knowledge of the challenges associated with the implementation of the nursing process, the nursing processes can be developed appropriately. Due to the lack of comprehensive information on this subject, the current study was carried out to assess the key challenges associated with the implementation of the nursing process. To achieve and review related studies on this field, databases of Iran medix, SID, Magiran, PUBMED, Google scholar, and Proquest were assessed using the main keywords of nursing process and nursing process systematic review. The articles were retrieved in three steps including searching by keywords, review of the proceedings based on inclusion criteria, and final retrieval and assessment of available full texts. Systematic assessment of the articles showed different challenges in implementation of the nursing process. Intangible understanding of the concept of nursing process, different views of the process, lack of knowledge and awareness among nurses related to the execution of process, supports of managing systems, and problems related to recording the nursing process were the main challenges that were extracted from review of literature. On systematically reviewing the literature, intangible understanding of the concept of nursing process has been identified as the main challenge in nursing process. To achieve the best strategy to minimize the challenge, in addition to preparing facilitators for implementation of nursing process, intangible understanding of the concept of nursing process, different views of the process, and forming teams of experts in nursing education are recommended for internalizing the nursing process among nurses.

  20. Challenges associated with the implementation of the nursing process: A systematic review

    PubMed Central

    Zamanzadeh, Vahid; Valizadeh, Leila; Tabrizi, Faranak Jabbarzadeh; Behshid, Mojghan; Lotfi, Mojghan

    2015-01-01

    Background: Nursing process is a scientific approach in the provision of qualified nursing cares. However, in practice, the implementation of this process is faced with numerous challenges. With the knowledge of the challenges associated with the implementation of the nursing process, the nursing processes can be developed appropriately. Due to the lack of comprehensive information on this subject, the current study was carried out to assess the key challenges associated with the implementation of the nursing process. Materials and Methods: To achieve and review related studies on this field, databases of Iran medix, SID, Magiran, PUBMED, Google scholar, and Proquest were assessed using the main keywords of nursing process and nursing process systematic review. The articles were retrieved in three steps including searching by keywords, review of the proceedings based on inclusion criteria, and final retrieval and assessment of available full texts. Results: Systematic assessment of the articles showed different challenges in implementation of the nursing process. Intangible understanding of the concept of nursing process, different views of the process, lack of knowledge and awareness among nurses related to the execution of process, supports of managing systems, and problems related to recording the nursing process were the main challenges that were extracted from review of literature. Conclusions: On systematically reviewing the literature, intangible understanding of the concept of nursing process has been identified as the main challenge in nursing process. To achieve the best strategy to minimize the challenge, in addition to preparing facilitators for implementation of nursing process, intangible understanding of the concept of nursing process, different views of the process, and forming teams of experts in nursing education are recommended for internalizing the nursing process among nurses. PMID:26257793

  1. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985; Penberthy & Weld, 1992; Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify image processing requirements in terms of the various types of correction required. Given this information, MVP derives the unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an image processing program which can then be executed to fill the processing request.
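The core idea (deriving unspecified intermediate steps from the requested corrections) can be sketched as a small state-space planner. The operator names below are illustrative stand-ins, not MVP's actual VICAR program models:

```python
from collections import deque

# Toy goal-directed planner in the spirit described: each "program"
# consumes and produces image-state facts, and the planner chains
# programs until the requested corrections hold.
OPERATORS = {
    "radiometric_correction": {"needs": {"raw"},
                               "adds": {"radiometrically_corrected"}},
    "geometric_correction":   {"needs": {"radiometrically_corrected"},
                               "adds": {"geometrically_corrected"}},
    "map_projection":         {"needs": {"geometrically_corrected"},
                               "adds": {"projected"}},
}

def plan(state, goals):
    """Breadth-first search over operator applications."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        facts, steps = frontier.popleft()
        if goals <= facts:
            return steps
        for name, op in OPERATORS.items():
            if op["needs"] <= facts:
                nxt = frozenset(facts | op["adds"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no plan satisfies the goals

steps = plan({"raw"}, {"projected"})
```

Given a raw image and the goal of a map-projected product, the search fills in the two intermediate correction steps the user never specified.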

  2. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although the Taguchi method reasonably minimizes defects, a genetic algorithm is then applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.
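The second stage can be sketched as a genetic algorithm refining a parameter pair around a Taguchi-style starting region. The quadratic defect-rate model, the parameter names, and all ranges below are invented for illustration, not taken from the paper:

```python
import random

# Hypothetical defect-rate response for two painting parameters
# (coating thickness in microns, oven temperature in deg C); a stand-in
# for the experimentally derived process response.
def defect_rate(thickness, temperature):
    return (thickness - 120.0) ** 2 / 100.0 + (temperature - 60.0) ** 2 / 25.0

def genetic_search(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    # Initial population scattered over the Taguchi-screened region.
    pop = [(rng.uniform(80, 160), rng.uniform(40, 80)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: defect_rate(*p))
        survivors = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # Arithmetic crossover plus Gaussian mutation.
            t = (a[0] + b[0]) / 2 + rng.gauss(0, 1.0)
            c = (a[1] + b[1]) / 2 + rng.gauss(0, 0.5)
            children.append((t, c))
        pop = survivors + children
    return min(pop, key=lambda p: defect_rate(*p))

best = genetic_search()
```

Because the survivors are carried over unchanged, the best-so-far solution is never lost, and the population contracts toward the low-defect region.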

  3. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOEpatents

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.

  4. SEMICONDUCTOR TECHNOLOGY A signal processing method for the friction-based endpoint detection system of a CMP process

    NASA Astrophysics Data System (ADS)

    Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang

    2010-12-01

A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the features of the Kalman filter innovation sequence during the CMP process. Using this method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can judge the endpoint of the Cu CMP process.
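The innovation-based endpoint test can be sketched with a scalar Kalman filter on a synthetic friction signal. The wavelet denoising stage is omitted here, and all parameters and signal values are invented for illustration:

```python
import random

def kalman_innovations(signal, q=1e-4, r=0.05):
    """Scalar Kalman filter tracking a slowly varying level (random-walk
    state model); returns the innovation (measurement minus prediction)
    sequence."""
    x, p = signal[0], 1.0
    innovations = []
    for z in signal[1:]:
        p += q                    # predict: state unchanged, variance grows
        k = p / (p + r)           # Kalman gain
        innov = z - x             # innovation
        x += k * innov            # update state estimate
        p *= (1 - k)              # update error variance
        innovations.append(innov)
    return innovations

rng = random.Random(0)
# Synthetic friction signal: a flat level, then a step drop when the
# copper layer clears (the "endpoint"), plus measurement noise.
signal = [1.0 + rng.gauss(0, 0.02) for _ in range(200)]
signal += [0.7 + rng.gauss(0, 0.02) for _ in range(100)]
innov = kalman_innovations(signal)
endpoint = max(range(len(innov)), key=lambda i: abs(innov[i]))
```

The innovation stays near zero while the filter tracks the friction level and spikes at the step, so a threshold on its magnitude flags the endpoint.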

  5. Composite faces are not (necessarily) processed coactively: A test using systems factorial technology and logical-rule models.

    PubMed

    Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R

    2018-06-01

Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed through pooling top and bottom face halves into a single processing channel (coactive processing), which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing algorithm with a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.

  7. DESIGNING ENVIRONMENTAL, ECONOMIC AND ENERGY EFFICIENT CHEMICAL PROCESSES

    EPA Science Inventory

    The design and improvement of chemical processes can be very challenging. The earlier energy conservation, process economics and environmental aspects are incorporated into the process development, the easier and less expensive it is to alter the process design. Process emissio...

  8. Reversing the conventional leather processing sequence for cleaner leather production.

    PubMed

    Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari

    2006-02-01

Conventional leather processing generally involves a combination of single and multistep processes that both employ and discharge various biological, inorganic, and organic materials. It involves nearly 14-15 steps and discharges a huge amount of pollutants, primarily because conventional leather processing follows a "do-undo" process logic. In this study, the conventional leather processing steps have been reversed to overcome the problems associated with the conventional method. The charges of the skin matrix and of the chemicals, and the pH profiles of the process, have been judiciously used for reversing the process steps. The reversed process eventually avoids several acidification and basification/neutralization steps used in conventional leather processing. The developed process has been validated through various analyses such as chromium content, shrinkage temperature, softness measurements, scanning electron microscopy, and physical testing of the leathers. Further, the performance of the leathers is shown to be on par with conventionally processed leathers through bulk property evaluation. The process achieves significant reductions in COD and TS of 53% and 79%, respectively. Water consumption and discharge are reduced by 65% and 64%, respectively. The process also benefits from significant reductions in chemicals, time, power, and cost compared to the conventional process.

  9. Group processing in an undergraduate biology course for preservice teachers: Experiences and attitudes

    NASA Astrophysics Data System (ADS)

    Schellenberger, Lauren Brownback

Group processing is a key principle of cooperative learning in which small groups discuss their strengths and weaknesses and set group goals or norms. However, group processing has not been well-studied at the post-secondary level or from a qualitative or mixed methods perspective. This mixed methods study uses a phenomenological framework to examine the experience of group processing for students in an undergraduate biology course for preservice teachers. The effect of group processing on students' attitudes toward future group work and group processing is also examined. Additionally, this research investigated preservice teachers' plans for incorporating group processing into future lessons. Students primarily experienced group processing as a time to reflect on past performance. Also, students experienced group processing as a time to increase communication among group members and become motivated for future group assignments. Three factors directly influenced students' experiences with group processing: (1) previous experience with group work, (2) instructor interaction, and (3) gender. Survey data indicated that group processing had a slight positive effect on students' attitudes toward future group work and group processing. Participants who were interviewed felt that group processing was an important part of group work and that it had increased their group's effectiveness as well as their ability to work effectively with other people. Participants held positive views on group work prior to engaging in group processing, and group processing did not alter their attitudes toward group work. Preservice teachers who were interviewed planned to use group work and a modified group processing protocol in their future classrooms. They also felt that group processing had prepared them for their future professions by modeling effective collaboration and group skills.
Based on this research, a new model for group processing has been created which includes extensive instructor interaction and additional group processing sessions. This study offers a new perspective on the phenomenon of group processing and informs science educators and teacher educators on the effective implementation of this important component of small-group learning.

  10. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    and Lewis (1972) in their Berkeley Symposium paper and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried... Poisson processes . They cannot, however, be independent Poisson processes because their events are associated in pairs by the displace- ment centres...process because its marginal processes for events of each type are themselves (univariate) Poisson processes . Cox and Lewis (1972) assumed a

  11. The Application of Six Sigma Methodologies to University Processes: The Use of Student Teams

    ERIC Educational Resources Information Center

    Pryor, Mildred Golden; Alexander, Christine; Taneja, Sonia; Tirumalasetty, Sowmya; Chadalavada, Deepthi

    2012-01-01

    The first student Six Sigma team (activated under a QEP Process Sub-team) evaluated the course and curriculum approval process. The goal was to streamline the process and thereby shorten process cycle time and reduce confusion about how the process works. Members of this team developed flowcharts on how the process is supposed to work (by…

  12. Impact of Radio Frequency Identification (RFID) on the Marine Corps’ Supply Process

    DTIC Science & Technology

    2006-09-01

    Hypothetical Improvement Using a Real-Time Order Processing System Vice a Batch Order Processing System ................56 3. As-Is: The Current... Processing System Vice a Batch Order Processing System ................58 V. RESULTS ................................................69 A. SIMULATION...Time: Hypothetical Improvement Using a Real-Time Order Processing System Vice a Batch Order Processing System ................71 3. As-Is: The

  13. Global-local processing relates to spatial and verbal processing: implications for sex differences in cognition.

    PubMed

    Pletzer, Belinda; Scheuringer, Andrea; Scherndl, Thomas

    2017-09-05

Sex differences have been reported for a variety of cognitive tasks and related to the use of different cognitive processing styles in men and women. It was recently argued that these processing styles share some characteristics across tasks, i.e., male approaches are oriented towards holistic stimulus aspects and female approaches are oriented towards stimulus details. In that respect, sex-dependent cognitive processing styles share similarities with attentional global-local processing. A direct relationship between cognitive processing and global-local processing has, however, not been previously established. In the present study, 49 men and 44 women completed a Navon paradigm and a Kimchi-Palmer task, as well as a navigation task and a verbal fluency task, with the goal of relating the global advantage (GA) effect, a measure of global processing, to holistic processing styles in both tasks. Indeed, participants with larger GA effects displayed more holistic processing during spatial navigation and phonemic fluency. However, the relationship to cognitive processing styles was modulated by the specific condition of the Navon paradigm, as well as by the sex of participants. Thus, different types of global-local processing play different roles for cognitive processing in men and women.

  14. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  15. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  16. 21 CFR 113.83 - Establishing scheduled processes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... competent processing authorities. If incubation tests are necessary for process confirmation, they shall... instituting the process. The incubation tests for confirmation of the scheduled processes should include the.... Complete records covering all aspects of the establishment of the process and associated incubation tests...

  17. A mathematical study of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
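The first- and second-order properties of such a composite process are easy to check by simulation. A minimal sketch, sampling the process at independent instants with a constant mean-value process and a Gaussian random amplitude (both our simplifications, not the paper's full model):

```python
import random
import statistics

def simulate(n=20000, sigma_g=1.0, sigma_a=0.5, mean=2.0, seed=7):
    # Sample x = a * g + m at independent instants: g is a zero-mean
    # Gaussian process value, a an independent random amplitude, and
    # m a mean-value process (held constant here for simplicity).
    rng = random.Random(seed)
    return [rng.gauss(0, sigma_a) * rng.gauss(0, sigma_g) + mean
            for _ in range(n)]

x = simulate()
m = statistics.fmean(x)
var = statistics.pvariance(x)
# Normalized fourth moment: 3.0 for a Gaussian, larger here because the
# random amplitude makes the product distribution heavy-tailed.
kurt = sum((v - m) ** 4 for v in x) / (len(x) * var ** 2)
```

For these parameters the sample mean is near 2 and the variance near sigma_a^2 * sigma_g^2 = 0.25, while the kurtosis lies well above the Gaussian value of 3, the non-Gaussian "patchiness" that motivates product-form turbulence models.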

  18. Standard services for the capture, processing, and distribution of packetized telemetry data

    NASA Technical Reports Server (NTRS)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  19. Assessment of hospital processes using a process mining technique: Outpatient process analysis at a tertiary hospital.

    PubMed

    Yoo, Sooyoung; Cho, Minsu; Kim, Eunhye; Kim, Seok; Sim, Yerim; Yoo, Donghyun; Hwang, Hee; Song, Minseok

    2016-04-01

    Many hospitals are increasing their efforts to improve processes because processes play an important role in enhancing work efficiency and reducing costs. However, to date, a quantitative tool has not been available to examine the before and after effects of processes and environmental changes, other than the use of indirect indicators, such as mortality rate and readmission rate. This study used process mining technology to analyze process changes based on changes in the hospital environment, such as the construction of a new building, and to measure the effects of environmental changes in terms of consultation wait time, time spent per task, and outpatient care processes. Using process mining technology, electronic health record (EHR) log data of outpatient care before and after constructing a new building were analyzed, and the effectiveness of the technology in terms of the process was evaluated. Using the process mining technique, we found that the total time spent in outpatient care did not increase significantly compared to that before the construction of a new building, considering that the number of outpatients increased, and the consultation wait time decreased. These results suggest that the operation of the outpatient clinic was effective after changes were implemented in the hospital environment. We further identified improvements in processes using the process mining technique, thereby demonstrating the usefulness of this technique for analyzing complex hospital processes at a low cost. This study confirmed the effectiveness of process mining technology at an actual hospital site. In future studies, the use of process mining technology will be expanded by applying this approach to a larger variety of process change situations. Copyright © 2016. Published by Elsevier Ireland Ltd.
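The basic quantities such a study computes (consultation wait times, process maps) come from per-case event sequences. A minimal sketch on a synthetic event log; the activities and timestamps below are invented, not the hospital's EHR data:

```python
from collections import Counter
from datetime import datetime, timedelta

# Tiny synthetic event log of (case_id, activity, timestamp) triples;
# a real study would extract these from EHR audit tables.
t0 = datetime(2016, 4, 1, 9, 0)
log = [
    (1, "registration", t0),
    (1, "consultation", t0 + timedelta(minutes=25)),
    (1, "payment",      t0 + timedelta(minutes=40)),
    (2, "registration", t0 + timedelta(minutes=5)),
    (2, "consultation", t0 + timedelta(minutes=45)),
    (2, "payment",      t0 + timedelta(minutes=55)),
]

def directly_follows(log):
    """Count directly-follows activity pairs per case (the relation most
    discovery algorithms start from) and collect inter-step waits."""
    dfg, waits = Counter(), []
    last = {}  # case_id -> (activity, timestamp) of the previous event
    for case, act, ts in sorted(log, key=lambda e: (e[0], e[2])):
        if case in last:
            prev_act, prev_ts = last[case]
            dfg[(prev_act, act)] += 1
            waits.append((ts - prev_ts).total_seconds() / 60)
        last[case] = (act, ts)
    return dfg, waits

dfg, waits = directly_follows(log)
mean_wait = sum(waits) / len(waits)
```

Comparing the directly-follows counts and mean waits computed on logs from before and after an environmental change gives exactly the kind of before/after comparison the study describes.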

  20. Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    Miller, L.

    1974-01-01

A two-year study of the major process variables associated with the manufacturing process for sealed, nickel-cadmium, aerospace cells is summarized. Effort was directed toward identifying the major process variables associated with the manufacturing process, experimentally assessing each variable's effect, and imposing the necessary changes (optimization) and controls on the critical process variables to improve results and uniformity. A critical process variable in the sintered nickel plaque manufacturing process was identified as the manual forming operation. Critical process variables identified in the positive electrode impregnation/polarization process were impregnation solution temperature, free acid content, vacuum impregnation, and sintered plaque strength. The positive and negative electrodes were identified as a major source of carbonate contamination in sealed cells.

  1. Monitoring autocorrelated process: A geometric Brownian motion process approach

    NASA Astrophysics Data System (ADS)

    Li, Lee Siaw; Djauhari, Maman A.

    2013-09-01

Autocorrelated process control is common in today's industrial process control practice. The current approach is to eliminate the autocorrelation by fitting an appropriate model, such as a Box-Jenkins model, and then to conduct process control based on the residuals. In this paper we show that many time series are governed by a geometric Brownian motion (GBM) process. In this case, by using the properties of a GBM process, we only need an appropriate transformation, after which the transformed data can be modeled to meet the conditions required in traditional process control. An industrial example of a cocoa powder production process in a Malaysian company is presented and discussed to illustrate the advantages of the GBM approach.
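The transformation the authors exploit can be sketched as follows: if a series follows a GBM, its log-returns are i.i.d. normal, so a standard 3-sigma individuals chart applies directly to the transformed series. The drift, volatility, and limits below are illustrative, not those of the cocoa powder process:

```python
import math
import random
import statistics

def gbm_path(n=300, mu=0.001, sigma=0.02, s0=100.0, seed=3):
    """Simulate a geometric Brownian motion sampled at unit intervals."""
    rng = random.Random(seed)
    s = [s0]
    for _ in range(n):
        # Exact GBM update: log-return is N(mu - sigma^2/2, sigma^2).
        s.append(s[-1] * math.exp(mu - 0.5 * sigma ** 2
                                  + sigma * rng.gauss(0, 1)))
    return s

series = gbm_path()
# The GBM transformation: log-returns of a GBM are i.i.d. normal,
# so conventional Shewhart-style limits apply to them.
returns = [math.log(b / a) for a, b in zip(series, series[1:])]
center = statistics.fmean(returns)
sd = statistics.stdev(returns)
ucl, lcl = center + 3 * sd, center - 3 * sd
out_of_control = [i for i, r in enumerate(returns) if not lcl <= r <= ucl]
```

For an in-control GBM, only the expected handful of chance exceedances appear; a genuine process shift would show up as a cluster of points outside the limits on the transformed scale.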

  2. Meta-control of combustion performance with a data mining approach

    NASA Astrophysics Data System (ADS)

    Song, Zhe

Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermodynamics have limitations in finding optimal operational regions due to the time-varying nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science that finds patterns or models in large data sets. It has found many successful applications in business marketing, medical, and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance, although the philosophy, methods, and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process poses two major challenges. One is that the underlying process model changes over time, so obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation is set to solve these two major challenges. The major contribution of this four-year research is a data-driven solution for optimizing the combustion process, in which a process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms that search for optimal operating regions.

  3. 5 CFR 1653.13 - Processing legal processes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 3 2014-01-01 2014-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...

  4. 5 CFR 1653.13 - Processing legal processes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Processing legal processes. 1653.13 Section 1653.13 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...

  5. A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji

The capabilities and complexity of manufacturing systems are increasing, driving the move toward an integrated manufacturing environment. The availability of alternative process plans is a key factor for the integration of design, process planning, and scheduling. This paper describes an algorithm for the generation of alternative process plans that extends the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search starts, so it is applicable to large process plan networks and can search wide areas of the network based on user requirements. The algorithm can generate alternative process plans and select a suitable one based on the objective functions.

  6. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools.

    PubMed

    O'Callaghan, Sean; De Souza, David P; Isaac, Andrew; Wang, Qiao; Hodkinson, Luke; Olshansky, Moshe; Erwin, Tim; Appelbe, Bill; Tull, Dedreia L; Roessner, Ute; Bacic, Antony; McConville, Malcolm J; Likić, Vladimir A

    2012-05-30

Gas chromatography-mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface, facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools suitable for scripting of high-throughput, customized processing pipelines. PyMS comprises a library of functions for processing of instrument GC-MS data, developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common-ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale across multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and for interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs as well as or better than leading software packages.
We demonstrate data processing scenarios that are simple to implement in PyMS, yet difficult to achieve with much conventional GC-MS data processing software. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface.
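A scripted pipeline in the same spirit (smoothing, baseline correction, peak detection) can be sketched generically. The function names and the synthetic chromatogram below are ours, not the PyMS API:

```python
import math
import random

def moving_average(x, w=5):
    """Simple noise smoothing with a centered moving average."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def subtract_baseline(x):
    """Crude baseline correction: shift so the minimum sits at zero."""
    base = min(x)
    return [v - base for v in x]

def detect_peaks(x, threshold):
    """Indices of local maxima above an intensity threshold."""
    return [i for i in range(1, len(x) - 1)
            if x[i] > threshold and x[i] >= x[i - 1] and x[i] > x[i + 1]]

# Synthetic chromatogram: two Gaussian peaks on a flat offset plus noise.
rng = random.Random(4)
signal = [10 + 50 * math.exp(-((t - 30) ** 2) / 8)
             + 80 * math.exp(-((t - 70) ** 2) / 8)
             + rng.gauss(0, 0.5) for t in range(100)]
processed = subtract_baseline(moving_average(signal))
peaks = detect_peaks(processed, threshold=10.0)
```

Chaining such steps as plain function calls, rather than clicks in a GUI, is what makes batch-mode processing of hundreds of runs scriptable.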

  7. Processing mode during repetitive thinking in socially anxious individuals: evidence for a maladaptive experiential mode.

    PubMed

    Wong, Quincy J J; Moulds, Michelle L

    2012-12-01

    Evidence from the depression literature suggests that an analytical processing mode adopted during repetitive thinking leads to maladaptive outcomes relative to an experiential processing mode. To date, in socially anxious individuals, the impact of processing mode during repetitive thinking related to an actual social-evaluative situation has not been investigated. We thus tested whether an analytical processing mode would be maladaptive relative to an experiential processing mode during anticipatory processing and post-event rumination. High and low socially anxious participants were induced to engage in either an analytical or experiential processing mode during: (a) anticipatory processing before performing a speech (Experiment 1; N = 94), or (b) post-event rumination after performing a speech (Experiment 2; N = 74). Mood, cognition, and behavioural measures were employed to examine the effects of processing mode. For high socially anxious participants, the modes had a similar effect on self-reported anxiety during both anticipatory processing and post-event rumination. Unexpectedly, relative to the analytical mode, the experiential mode led to stronger high standard and conditional beliefs during anticipatory processing, and stronger unconditional beliefs during post-event rumination. These experiments are the first to investigate processing mode during anticipatory processing and post-event rumination. Hence, these results are novel and will need to be replicated. These findings suggest that an experiential processing mode is maladaptive relative to an analytical processing mode during repetitive thinking characteristic of socially anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  9. Modeling interdependencies between business and communication processes in hospitals.

    PubMed

    Brigl, Birgit; Wendt, Thomas; Winter, Alfred

    2003-01-01

The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, no tools are available that specialize in modeling information-driven business processes and their consequences for the communication between information processing tools. Therefore, we present an approach that facilitates the representation and analysis of business processes, the resulting communication processes between application components, and their interdependencies. This approach aims not only to visualize those processes, but also to evaluate whether there are weaknesses in the information processing infrastructure that hinder the smooth implementation of the business processes.

  10. Life cycle analysis within pharmaceutical process optimization and intensification: case study of active pharmaceutical ingredient production.

    PubMed

    Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick

    2014-12-01

    As the demand for new drugs is rising, the pharmaceutical industry faces the quest of shortening development time, and thus, reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. SOI-CMOS Process for Monolithic, Radiation-Tolerant, Science-Grade Imagers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, George; Lee, Adam

In Phase I, Voxtel worked with Jazz and Sandia to document and simulate the processes necessary to implement a DH-BSI SOI CMOS imaging process. The development is based upon mature SOI CMOS processes at both fabs, with the addition of only a few custom processing steps for integration and electrical interconnection of the fully-depleted photodetectors. In Phase I, Voxtel also characterized the Sandia process, including the CMOS7 design rules, and developed the outline of a process option that includes a “BOX etch”, which will permit a “detector in handle” SOI CMOS process to be developed. The process flows were developed in cooperation with both Jazz and Sandia process engineers, along with detailed TCAD modeling and testing of the photodiode array architectures. In addition, Voxtel tested the radiation performance of Jazz’s CA18HJ process, using standard and circular-enclosed transistors.

  12. Face to face with emotion: holistic face processing is modulated by emotional state.

    PubMed

    Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa

    2012-01-01

    Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.

  13. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  14. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  15. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  16. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  17. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  18. 20 CFR 405.725 - Effect of expedited appeals process agreement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PROCESS FOR ADJUDICATING INITIAL DISABILITY CLAIMS Expedited Appeals Process for Constitutional Issues § 405.725 Effect of expedited appeals process agreement. After an expedited appeals process agreement is... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Effect of expedited appeals process agreement...

  19. Common and distinct networks for self-referential and social stimulus processing in the human brain.

    PubMed

    Herold, Dorrit; Spengler, Stephanie; Sajonz, Bastian; Usnich, Tatiana; Bermpohl, Felix

    2016-09-01

Self-referential processing is a complex cognitive function, involving a set of implicit and explicit processes, complicating investigation of its distinct neural signature. The present study explores the functional overlap and dissociability of self-referential and social stimulus processing. We combined an established paradigm for explicit self-referential processing with an implicit social stimulus processing paradigm in one fMRI experiment to determine the neural effects of self-relatedness and social processing within one study. Overlapping activations were found in the orbitofrontal cortex and in the intermediate part of the precuneus. Stimuli judged as self-referential specifically activated the posterior cingulate cortex, the ventral medial prefrontal cortex, extending into anterior cingulate cortex and orbitofrontal cortex, the dorsal medial prefrontal cortex, the ventral and dorsal lateral prefrontal cortex, the left inferior temporal gyrus, and occipital cortex. Social processing specifically involved the posterior precuneus and bilateral temporo-parietal junction. Taken together, our data show, first, common networks for both processes in the medial prefrontal and the medial parietal cortex and, second, functional differentiations between self-referential and social processing: an anterior-posterior gradient for social versus self-referential processing within the medial parietal cortex, specific activations for self-referential processing in the medial and lateral prefrontal cortex, and specific activations for social processing in the temporo-parietal junction.

  20. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    PubMed

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding, enable systematic appraisal, and identify areas of improvement in a business process. Unified modelling language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier to efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  1. Use of Analogies in the Study of Diffusion

    ERIC Educational Resources Information Center

    Letic, Milorad

    2014-01-01

    Emergent processes, such as diffusion, are considered more difficult to understand than direct processes. In physiology, most processes are presented as direct processes, so emergent processes, when encountered, are even more difficult to understand. It has been suggested that, when studying diffusion, misconceptions about random processes are the…

  2. Is Analytic Information Processing a Feature of Expertise in Medicine?

    ERIC Educational Resources Information Center

    McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.

    2008-01-01

    Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…

  3. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  4. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  5. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  6. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  7. 5 CFR 582.305 - Honoring legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Honoring legal process. 582.305 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Compliance With Legal Process § 582.305 Honoring legal process. (a) The agency shall comply with legal process, except where the process cannot be complied with because: (1) It...

  8. 5 CFR 581.305 - Honoring legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Honoring legal process. 581.305 Section... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Compliance With Process § 581.305 Honoring legal process. (a) The governmental entity shall comply with legal process, except where the process cannot be...

  9. Articulating the Resources for Business Process Analysis and Design

    ERIC Educational Resources Information Center

    Jin, Yulong

    2012-01-01

    Effective process analysis and modeling are important phases of the business process management lifecycle. When many activities and multiple resources are involved, it is very difficult to build a correct business process specification. This dissertation provides a resource perspective of business processes. It aims at a better process analysis…

  10. An Integrated Model of Emotion Processes and Cognition in Social Information Processing.

    ERIC Educational Resources Information Center

    Lemerise, Elizabeth A.; Arsenio, William F.

    2000-01-01

    Interprets literature on contributions of social cognitive and emotion processes to children's social competence in the context of an integrated model of emotion processes and cognition in social information processing. Provides neurophysiological and functional evidence for the centrality of emotion processes in personal-social decision making.…

  11. Data Processing and First Products from the Hyperspectral Imager for the Coastal Ocean (HICO) on the International Space Station

    DTIC Science & Technology

    2010-04-01

NRL Stennis Space Center (NRL-SSC) for further processing using the NRL SSC Automated Processing System (APS). APS was developed for processing...have not previously developed automated processing for hyperspectral ocean color data. The hyperspectral processing branch includes several

  12. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the
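
    For reference, the geometric Poisson (Pólya-Aeppli) probability mass function can be computed exactly as a compound of a Poisson cluster count with geometric cluster sizes; because a sum of k geometric variables on {1, 2, ...} is negative binomial, a closed-form sum is available. The sketch below is a generic implementation, not the tabulation method of the report:

```python
from math import comb, exp, factorial

def geometric_poisson_pmf(n, lam, p):
    """P(N = n) where N is a Poisson(lam) number of clusters, each cluster
    of geometric(p) size on {1, 2, ...} (the Polya-Aeppli distribution).
    The sum of k such geometrics is negative binomial, giving the
    closed-form compound sum below."""
    if n == 0:
        return exp(-lam)
    q = 1.0 - p
    return sum(
        exp(-lam) * lam**k / factorial(k)
        * comb(n - 1, k - 1) * p**k * q**(n - k)
        for k in range(1, n + 1)
    )
```

    A quick sanity check is that the probabilities sum to 1 and that the mean equals lam/p (Poisson mean times geometric mean).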

  13. Management of processes of electrochemical dimensional processing

    NASA Astrophysics Data System (ADS)

    Akhmetov, I. D.; Zakirova, A. R.; Sadykov, Z. B.

    2017-09-01

In many industries, high-precision parts are produced from scarce, difficult-to-machine materials. Such parts can be formed only by non-contact processing, or with minimal mechanical force, which is achievable, for example, by electrochemical machining. At the present stage of development of metalworking, the management and automation of electrochemical machining are important issues. This article presents some indicators and factors of the electrochemical machining process.

  14. The Hyperspectral Imager for the Coastal Ocean (HICO): Sensor and Data Processing Overview

    DTIC Science & Technology

    2010-01-20

    backscattering coefficients, and others. Several of these software modules will be developed within the Automated Processing System (APS), a data... Automated Processing System (APS) NRL developed APS, which processes satellite data into ocean color data products. APS is a collection of methods...used for ocean color processing which provide the tools for the automated processing of satellite imagery [1]. These tools are in the process of

  15. [Study on culture and philosophy of processing of traditional Chinese medicines].

    PubMed

    Yang, Ming; Zhang, Ding-Kun; Zhong, Ling-Yun; Wang, Fang

    2013-07-01

Based on cultural views and philosophical thought, this paper studies the cultural origin, thinking modes, core principles, and general rules and methods of processing; traces the culture and history of processing, including its generation and deduction, its accumulation of experience and promotion, and its core values; and summarizes the basic principles of processing, which are guided by holistic, objective, dynamic, balanced, and appropriate thinking. The aim is to propagate the cultural characteristics and philosophical wisdom of traditional Chinese medicine processing, to promote the inheritance and development of processing, and to ensure the maximum therapeutic value of Chinese medicines in clinical use.

  16. Containerless automated processing of intermetallic compounds and composites

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Joslin, S. M.; Reviere, R. D.; Oliver, B. F.; Noebe, R. D.

    1993-01-01

    An automated containerless processing system has been developed to directionally solidify high temperature materials, intermetallic compounds, and intermetallic/metallic composites. The system incorporates a wide range of ultra-high purity chemical processing conditions. The utilization of image processing for automated control negates the need for temperature measurements for process control. The list of recent systems that have been processed includes Cr, Mo, Mn, Nb, Ni, Ti, V, and Zr containing aluminides. Possible uses of the system, process control approaches, and properties and structures of recently processed intermetallics are reviewed.

  17. A continuous process for the development of Kodak Aerochrome Infrared Film 2443 as a negative

    NASA Astrophysics Data System (ADS)

    Klimes, D.; Ross, D. I.

    1993-02-01

    A process for the continuous dry-to-dry development of Kodak Aerochrome Infrared Film 2443 as a negative (CIR-neg) is described. The process is well suited for production processing of long film lengths. Chemicals from three commercial film processes are used with modifications. Sensitometric procedures are recommended for the monitoring of processing quality control. Sensitometric data and operational aerial exposures indicate that films developed in this process have approximately the same effective aerial film speed as films processed in the reversal process recommended by the manufacturer (Kodak EA-5). The CIR-neg process is useful when aerial photography is acquired for resources management applications which require print reproductions. Originals can be readily reproduced using conventional production equipment (electronic dodging) in black and white or color (color compensation).

  18. Antibiotics with anaerobic ammonium oxidation in urban wastewater treatment

    NASA Astrophysics Data System (ADS)

    Zhou, Ruipeng; Yang, Yuanming

    2017-05-01

The biofilter process is an aerobic wastewater treatment process, based on biological oxidation and on design ideas introduced from rapid sand filters, that integrates several purification mechanisms: filtration, adsorption, and biological treatment. An engineering example shows that the process is well suited to treating low-concentration sewage and industrial wastewater. The anaerobic ammonium oxidation (anammox) process, owing to its advantages of high efficiency and low energy consumption, has broad application prospects in biological nitrogen removal from wastewater and has become a research hotspot at home and abroad. This paper reviews the habitats and species diversity of anammox bacteria and the diverse configurations of the anammox process, and compares the operating conditions of one-stage and two-stage variants, focusing on laboratory research and engineering applications of anammox technology to various types of wastewater, including sludge digestion filtrate, landfill leachate, aquaculture wastewater, monosodium glutamate wastewater, domestic sewage, fecal sewage, and high-salinity wastewater, along with their characteristics, research progress, and obstacles to application. Finally, we summarize potential problems of the anammox process in treating actual wastewater and suggest that future research focus on in-depth study of the water-quality factors that inhibit anammox and their regulation, and on developing optimized combined processes on this basis.

  19. Understanding scaling through history-dependent processes with collapsing sample space.

    PubMed

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2015-04-28

History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or aging processes such as in fragmentation processes. SSR processes provide a new alternative to understand the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized critical processes.
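
    The SSR mechanism is easy to reproduce numerically. The following sketch (an illustrative simulation, not code from the paper) repeatedly runs a sample-space-reducing process on N = 1000 states and counts visits; the visit frequency of state x should approach the Zipf form 1/x:

```python
import random
from collections import Counter

def ssr_samples(n_states=1000, n_restarts=20000, seed=1):
    """Repeatedly run a sample-space-reducing process: each draw is uniform
    on {1, ..., x-1}, so the sample space shrinks until state 1 is reached,
    then the process restarts at n_states."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_restarts):
        x = n_states
        while x > 1:
            x = rng.randint(1, x - 1)  # next sample space: {1, ..., x-1}
            visits[x] += 1
    return visits

visits = ssr_samples()
# Zipf's law predicts visit frequencies proportional to 1/x, so state 1
# should be visited about twice as often as state 2, three times state 3, ...
```

    Each restart visits state x with probability 1/x, which is exactly the Zipf rank distribution the paper derives.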

  20. Effects of Processing Parameters on the Forming Quality of C-Shaped Thermosetting Composite Laminates in Hot Diaphragm Forming Process

    NASA Astrophysics Data System (ADS)

    Bian, X. X.; Gu, Y. Z.; Sun, J.; Li, M.; Liu, W. P.; Zhang, Z. G.

    2013-10-01

In this study, the effects of processing temperature and vacuum application rate on the forming quality of C-shaped carbon fiber reinforced epoxy resin matrix composite laminates during the hot diaphragm forming process were investigated. C-shaped prepreg preforms were produced using home-made hot diaphragm forming equipment. The thickness variations of the preforms and the manufacturing defects after the diaphragm forming process, including fiber wrinkling and voids, were evaluated to understand the forming mechanism. Furthermore, both the interlaminar slipping friction and the compaction behavior of the prepreg stacks were experimentally analyzed to show the importance of the processing parameters. In addition, autoclave processing was used to cure the C-shaped preforms to investigate the changes in the defects before and after the cure process. The results show that C-shaped prepreg preforms with good forming quality can be achieved by increasing the processing temperature and reducing the vacuum application rate, both of which promote interlaminar slipping of the prepreg. The processing temperature and forming rate in the hot diaphragm forming process strongly influence the prepreg interply frictional force, and the maximum interlaminar frictional force can be taken as a key parameter for processing parameter optimization. The autoclave process is effective in eliminating voids in the preforms and can alleviate fiber wrinkles to a certain extent.

  1. Assessment of Advanced Coal Gasification Processes

    NASA Technical Reports Server (NTRS)

    McCarthy, John; Ferrall, Joseph; Charng, Thomas; Houseman, John

    1981-01-01

This report represents a technical assessment of the following advanced coal gasification processes: AVCO High Throughput Gasification (HTG) Process; Bell Single-Stage High Mass Flux (HMF) Process; Cities Service/Rockwell (CS/R) Hydrogasification Process; Exxon Catalytic Coal Gasification (CCG) Process. Each process is evaluated for its potential to produce SNG from a bituminous coal. In addition to identifying the new technology these processes represent, key similarities/differences, strengths/weaknesses, and potential improvements to each process are identified. The AVCO HTG and the Bell HMF gasifiers share similarities with respect to: short residence time (SRT), high throughput rate, slagging and syngas as the initial raw product gas. The CS/R Hydrogasifier is also SRT but is non-slagging and produces a raw gas high in methane content. The Exxon CCG gasifier is a long residence time, catalytic, fluid-bed reactor producing all of the raw product methane in the gasifier. The report makes the following assessments: 1) while each process has significant potential as a coal gasifier, the CS/R and Exxon processes are better suited for SNG production; 2) the Exxon process is the closest to a commercial level for near-term SNG production; and 3) the SRT processes require significant development including scale-up and turndown demonstration, char processing and/or utilization demonstration, and reactor control and safety features development.

  2. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out of specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
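
    The Monte Carlo idea can be illustrated with a deliberately simplified sketch: two stacked unit operations whose transfer functions and parameter distributions are invented for illustration (they are not from the cited study), used to estimate the fraction of simulated batches whose final CQA meets a specification:

```python
import random

def simulate_process(rng):
    """One Monte Carlo run through a toy two-unit-operation process model.
    All distributions and transfer functions here are illustrative."""
    # Unit operation 1: a hypothetical titer that depends on a controlled pH
    ph = rng.gauss(7.0, 0.05)                  # controlled around 7.0
    titer = 5.0 - 4.0 * abs(ph - 7.0)          # g/L, peaks at pH 7.0
    # Unit operation 2: purification yield varies batch to batch
    yield_frac = rng.gauss(0.85, 0.02)
    purity = 0.90 + 0.02 * titer * yield_frac  # final CQA (fraction)
    return purity

def estimate_capability(spec_lo=0.95, n_runs=100000, seed=7):
    """Estimate the in-specification fraction of the final CQA by
    propagating parameter variation through the stacked operations."""
    rng = random.Random(seed)
    in_spec = sum(simulate_process(rng) >= spec_lo for _ in range(n_runs))
    return in_spec / n_runs

print(f"estimated in-spec fraction: {estimate_capability():.3f}")
```

    The point of the IPM is that the second operation consumes the output of the first, so parameter interactions across unit operations are propagated to the final CQA rather than assessed in isolation.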

  3. Effects of rigor status during high-pressure processing on the physical qualities of farm-raised abalone (Haliotis rufescens).

    PubMed

    Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I

    2015-01-01

    High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®

  4. PROCESSING ALTERNATIVES FOR DESTRUCTION OF TETRAPHENYLBORATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, D; Thomas Peters, T; Samuel Fink, S

Two processes were chosen in the 1980's at the Savannah River Site (SRS) to decontaminate the soluble High Level Waste (HLW). The In Tank Precipitation (ITP) process (1,2) was developed at SRS for the removal of radioactive cesium and actinides from the soluble HLW. Sodium tetraphenylborate was added to the waste to precipitate cesium, and monosodium titanate (MST) was added to adsorb actinides, primarily uranium and plutonium. Two products of this process were a low activity waste stream and a concentrated organic stream containing cesium tetraphenylborate and actinides adsorbed on MST. A copper catalyzed acid hydrolysis process was built to process (3, 4) the Tank 48H cesium tetraphenylborate waste in SRS's Defense Waste Processing Facility (DWPF). Operation of the DWPF would have resulted in the production of benzene for incineration in SRS's Consolidated Incineration Facility. This process was abandoned together with the ITP process in 1998 due to high benzene levels in ITP caused by decomposition of excess sodium tetraphenylborate. Processing in ITP resulted in the production of approximately 1.0 million liters of HLW. SRS has chosen a solvent extraction process combined with adsorption of the actinides to decontaminate the soluble HLW stream (5). However, the waste in Tank 48H is incompatible with existing waste processing facilities. As a result, a processing facility is needed to disposition the HLW in Tank 48H. This paper describes the search for processing options by SRS task teams for the disposition of the waste in Tank 48H. In addition, attempts to develop a caustic hydrolysis process for in-tank destruction of tetraphenylborate are presented. Lastly, the development of both caustic and acidic copper catalyzed peroxide oxidation processes is discussed.

  5. Manufacturing Process Selection of Composite Bicycle’s Crank Arm using Analytical Hierarchy Process (AHP)

    NASA Astrophysics Data System (ADS)

    Luqman, M.; Rosli, M. U.; Khor, C. Y.; Zambree, Shayfull; Jahidi, H.

    2018-03-01

    The crank arm is one of the important parts of a bicycle and is an expensive product due to the high cost of material and of the production process. This research aims to investigate potential manufacturing processes for fabricating a composite bicycle crank arm and to describe an approach based on the analytical hierarchy process (AHP) that assists decision makers or manufacturing engineers in determining the most suitable process for manufacturing a composite bicycle crank arm at the early stage of product development, in order to reduce production cost. Four types of processes were considered, namely resin transfer molding (RTM), compression molding (CM), vacuum bag molding, and filament winding (FW). The analysis ranks these four processes for their suitability in the manufacturing of the bicycle crank arm based on five main selection factors and 10 sub-factors. The most suitable manufacturing process was determined by following the AHP steps, and a consistency test was performed to ensure that the judgements remained consistent during the comparisons. The results indicated that compression molding was the most appropriate manufacturing process because it has the highest priority value (33.6%) among the considered manufacturing processes.
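The priority-vector computation behind such an AHP ranking can be sketched as follows. This is a minimal sketch, assuming a single criterion: the pairwise judgement matrix is invented for illustration and does not reproduce the paper's data (which spans five factors and 10 sub-factors).

```python
import numpy as np

# Sketch of the AHP priority computation; the 4x4 pairwise comparison
# matrix below (Saaty 1-9 scale, rows/columns: CM, RTM, vacuum bag, FW)
# uses invented judgements, not the paper's data.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 2.0, 1/2],
    [1/5, 1/2, 1.0, 1/3],
    [1/2, 2.0, 3.0, 1.0],
])

# The priority weights are the normalized principal eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI; RI = 0.90 is the standard random
# index for n = 4. Judgements are acceptably consistent if CR < 0.10.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90
print(np.round(w, 3), round(cr, 3))
```

In a full AHP study, a weight vector like this is computed per criterion and the per-criterion scores are aggregated by the criterion weights; the consistency test mentioned in the abstract is the CR < 0.10 check above.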

  6. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
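The two-phase scheme (offline backward Dynamic Programming over discretized predecessor end states, then online lookup of the stored optimal control) can be sketched on a toy chain. The dynamics, state grid, and quality measure below are invented for illustration, not taken from the paper.

```python
# Toy sketch of the two-phase scheme: Phase 1 tabulates optimal controls
# by backward Dynamic Programming over a discretized state grid; Phase 2
# applies them at runtime to the observed predecessor end state.
states = [round(0.1 * s, 1) for s in range(11)]   # discretized process states
controls = [-0.2, -0.1, 0.0, 0.1, 0.2]            # admissible control values
target = 0.5                                      # desired final product state
n_processes = 3

def step(x, u):
    """Toy process transformation, clamped to the state grid."""
    return min(1.0, max(0.0, round(x + u, 1)))

# Phase 1 (offline): policy[i][x] is the optimal control for process i
# given that its predecessor ended in state x.
cost = {x: abs(x - target) for x in states}       # terminal quality cost
policy = [{} for _ in range(n_processes)]
for i in reversed(range(n_processes)):
    new_cost = {}
    for x in states:
        best = min(controls, key=lambda u: cost[step(x, u)])
        policy[i][x] = best
        new_cost[x] = cost[step(x, best)]
    cost = new_cost

# Phase 2 (online): look up the control for each detected initial state.
x = 0.0
for i in range(n_processes):
    x = step(x, policy[i][x])
print(x)  # → 0.5
```

The backward sweep is what lets Phase 2 react to any disturbed intermediate state with a table lookup instead of re-solving the optimization at runtime.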

  7. Quantitative analysis of geomorphic processes using satellite image data at different scales

    NASA Technical Reports Server (NTRS)

    Williams, R. S., Jr.

    1985-01-01

    When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or by analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates. Therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial-resolution characteristic of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered or even suspected in the analysis of orbital images. If the geomorphic process, or the landform change caused by the process, is smaller than about 200 m in the x and y dimensions, then it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface also is a consideration in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.
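The spatial-resolution argument reduces to a simple check: a process (or the landform change it causes) is recorded only if it spans at least a few pixels. The 80 m ground sample distance and the 2.5-pixel detectability factor below are illustrative assumptions that happen to reproduce the ~200 m threshold mentioned in the abstract; they are not figures from the text.

```python
# Back-of-the-envelope resolvability check; gsd_m and min_pixels are
# assumed values for illustration, not parameters from the source.
def is_recordable(feature_size_m, gsd_m=80.0, min_pixels=2.5):
    return feature_size_m >= min_pixels * gsd_m

print(is_recordable(150.0))  # → False (below the ~200 m threshold)
print(is_recordable(500.0))  # → True
```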

  8. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models the sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image processing input are produced by this imaging system with those same parameters. The gathered optical sampled images, acquired with the tested imaging parameters, are then subjected to three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is the just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment experimental datasets can be used to validate each other. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with the authors' post-processing method, image quality is better when the camera MTF lies within a small range.

  9. Microstructure and Texture of Al-2.5wt.%Mg Processed by Combining Accumulative Roll Bonding and Conventional Rolling

    NASA Astrophysics Data System (ADS)

    Gatti, J. R.; Bhattacharjee, P. P.

    2014-12-01

    Evolution of microstructure and texture during severe deformation and annealing was studied in an Al-2.5%Mg alloy processed by two different routes, namely monotonic Accumulative Roll Bonding (ARB) and a hybrid route combining ARB and conventional rolling (CR). For this purpose, Al-2.5%Mg sheets were subjected to 5 cycles of monotonic ARB processing (equivalent strain (ɛeq) = 4.0), while in the hybrid route (ARB + CR) 3-cycle ARB-processed sheets were further deformed by conventional rolling to 75% reduction in thickness (ɛeq = 4.0). Although formation of an ultrafine structure was observed in both processing routes, the monotonic ARB-processed material showed a finer microstructure but weaker texture compared to the ARB + CR-processed material. After complete recrystallization, the ARB + CR-processed material showed a weak cube texture ({001}<100>), but the cube component was almost negligible in the monotonic ARB-processed material. However, the ND-rotated cube components were stronger in the monotonic ARB-processed material. The observed differences in microstructure and texture evolution during deformation and annealing could be explained by the characteristic differences between the two processing routes.

  10. Process Materialization Using Templates and Rules to Design Flexible Process Models

    NASA Astrophysics Data System (ADS)

    Kumar, Akhil; Yao, Wen

    The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
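The template-plus-rules materialization idea can be sketched as follows. The paper uses logic-based (Prolog-style) rules and an existing workflow engine; this minimal sketch instead uses plain Python predicates, and the task names and rule are invented for illustration.

```python
# Minimal sketch of materialization: rules applied to case data decide
# which conditional tasks of a generic template survive in the concrete
# process instance.
template = ["receive_order", "check_credit?", "ship", "invoice"]

# A rule maps a conditional task (marked with '?') to a predicate
# over the case data; this rule is invented for the example.
rules = {"check_credit?": lambda case: case["amount"] > 1000}

def materialize(template, rules, case):
    instance = []
    for task in template:
        if task.endswith("?"):
            if rules[task](case):            # rule fires: keep the task
                instance.append(task.rstrip("?"))
        else:
            instance.append(task)            # unconditional task
    return instance

print(materialize(template, rules, {"amount": 250}))
# → ['receive_order', 'ship', 'invoice']
print(materialize(template, rules, {"amount": 5000}))
# → ['receive_order', 'check_credit', 'ship', 'invoice']
```

The customized instance list would then be handed to a workflow engine for execution, which is the division of labor the abstract describes.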

  11. HMI conventions for process control graphics.

    PubMed

    Pikaar, Ruud N

    2012-01-01

    Process operators supervise and control complex processes. To enable the operator to do an adequate job, instrumentation and process control engineers need to address several related topics, such as console design, information design, navigation, and alarm management. In process control upgrade projects, usually a 1:1 conversion of existing graphics is proposed. This paper suggests another approach, efficiently leading to a reduced number of new, powerful process graphics supported by permanent process overview displays. In addition, a road map for structuring content (process information) and conventions for the presentation of objects, symbols, and so on have been developed. The impact of the human factors engineering approach on process control upgrade projects is illustrated by several cases.

  12. A novel processed food classification system applied to Australian food composition databases.

    PubMed

    O'Halloran, S A; Lacy, K E; Grimes, C A; Woods, J; Campbell, K J; Nowson, C A

    2017-08-01

    The extent of food processing can affect the nutritional quality of foodstuffs. Categorising foods by the level of processing emphasises the differences in nutritional quality between foods within the same food group and is likely useful for determining dietary processed food consumption. The present study aimed to categorise foods within Australian food composition databases according to the level of food processing using a processed food classification system, as well as assess the variation in the levels of processing within food groups. A processed food classification system was applied to food and beverage items contained within Australian Food and Nutrient (AUSNUT) 2007 (n = 3874) and AUSNUT 2011-13 (n = 5740). The proportions of Minimally Processed (MP), Processed Culinary Ingredients (PCI), Processed (P) and Ultra Processed (ULP) foods by AUSNUT food group, and the overall proportions of the four processed food categories across AUSNUT 2007 and AUSNUT 2011-13, were calculated. Across the food composition databases, the overall proportions of foods classified as MP, PCI, P and ULP were 27%, 3%, 26% and 44% for AUSNUT 2007 and 38%, 2%, 24% and 36% for AUSNUT 2011-13. Although there was wide variation in the classifications of food processing within the food groups, approximately one-third of foodstuffs were classified as ULP food items across both the 2007 and 2011-13 AUSNUT databases. This Australian processed food classification system will allow researchers to easily quantify the contribution of processed foods within the Australian food supply to assist in assessing the nutritional quality of the dietary intake of population groups. © 2017 The British Dietetic Association Ltd.
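Once each item carries one of the four category labels, the reported proportions are a simple tally. The item labels below are invented for illustration and are not AUSNUT data.

```python
# Toy illustration of tallying the four processing categories over a
# classified database; the label list is invented, not AUSNUT.
from collections import Counter

labels = ["MP", "ULP", "P", "ULP", "MP", "PCI", "ULP", "P"]
counts = Counter(labels)
shares = {cat: round(100 * n / len(labels)) for cat, n in counts.items()}
print(shares)
```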

  13. Process and domain specificity in regions engaged for face processing: an fMRI study of perceptual differentiation.

    PubMed

    Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E

    2012-12-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.

  14. Process- and Domain-Specificity in Regions Engaged for Face Processing: An fMRI Study of Perceptual Differentiation

    PubMed Central

    Collins, Heather R.; Zhu, Xun; Bhatt, Ramesh S.; Clark, Jonathan D.; Joseph, Jane E.

    2015-01-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. The present study parametrically varied demands on featural, first-order configural or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing) or reflected generalized perceptual differentiation (i.e. differentiation that crosses category and processing type boundaries). Regions of interest were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process-specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex, and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain-specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories. PMID:22849402

  15. Achieving Continuous Manufacturing for Final Dosage Formation: Challenges and How to Meet Them May 20-21 2014 Continuous Manufacturing Symposium.

    PubMed

    Byrn, Stephen; Futran, Maricio; Thomas, Hayden; Jayjock, Eric; Maron, Nicola; Meyer, Robert F; Myerson, Allan S; Thien, Michael P; Trout, Bernhardt L

    2015-03-01

    We describe the key issues and possibilities for continuous final dosage formation, otherwise known as downstream processing or drug product manufacturing. A distinction is made between heterogeneous processing and homogeneous processing, the latter of which is expected to add more value to continuous manufacturing. We also give the key motivations for moving to continuous manufacturing, some of the exciting new technologies, and the barriers to implementation of continuous manufacturing. Continuous processing of heterogeneous blends is the natural first step in converting existing batch processes to continuous. In heterogeneous processing, there are discrete particles that can segregate, whereas in homogeneous processing, components are blended and homogenized such that they do not segregate. Heterogeneous processing can incorporate technologies that are closer to existing technologies, whereas homogeneous processing necessitates the development and incorporation of new technologies. Homogeneous processing has the greatest potential for reaping the full rewards of continuous manufacturing, but it takes long-term vision and a more significant change in process development than heterogeneous processing. Heterogeneous processing has the detriment that, as the technologies are adopted rather than developed, there is a strong tendency to incorporate correction steps, what we call below "The Rube Goldberg Problem." Thus, although heterogeneous processing will likely play a major role in the near-term transformation to continuous processing, it is expected that homogeneous processing is the next step that will follow. Specific action items for industry leaders are provided. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  16. An Analysis of the Air Force Government Operated Civil Engineering Supply Store Logistic System: How Can It Be Improved?

    DTIC Science & Technology

    1990-09-01

    Contents include: Logistics Systems; GOCESS Operation; Work Order Processing; Job Order Processing. ...orders and job orders to the Material Control Section will be discussed separately. Work Order Processing: Figure 2 illustrates typical WO processing ...logistics function. The JO processing is similar. Job Order Processing: Figure 3 illustrates typical JO processing in a GOCESS operation. As with WOs, this

  17. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities, and moderate price compared to super computers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability.
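The core computation that such work offloads to the GPU, Fourier-domain OCT reconstruction of an A-scan from a spectral fringe, can be sketched on the CPU. The simulated single-reflector fringe below is an illustrative assumption, not the paper's data or pipeline; a GPU port would swap the array backend (e.g. CuPy for NumPy).

```python
# CPU sketch of a core Fourier-domain OCT step: an A-scan is the
# magnitude of the FFT of the windowed spectral fringe. The fringe is
# simulated for a single reflector; this is an assumed toy example.
import numpy as np

k = np.linspace(0.0, 2.0 * np.pi, 1024)        # wavenumber samples
fringe = 1.0 + 0.5 * np.cos(100.0 * k)         # reflector at depth ~100
ascan = np.abs(np.fft.rfft(fringe * np.hanning(k.size)))
peak = int(np.argmax(ascan[5:])) + 5           # skip the DC region
print(peak)                                    # peak near bin 100
```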

  18. Data processing system for the Sneg-2MP experiment

    NASA Technical Reports Server (NTRS)

    Gavrilova, Y. A.

    1980-01-01

    The data processing system for scientific experiments on stations of the "Prognoz" type provides for the processing sequence to be broken down into a number of consecutive stages: preliminary processing, primary processing, and secondary processing. The tasks of each data processing stage are examined for an experiment designed to study gamma flashes of galactic origin and solar flares lasting from seconds to several minutes in the 20 keV to 1000 keV energy range.

  19. General RMP Guidance - Appendix D: OSHA Guidance on PSM

    EPA Pesticide Factsheets

    OSHA's Process Safety Management (PSM) Guidance on providing complete and accurate written information concerning process chemicals, process technology, and process equipment; including process hazard analysis and material safety data sheets.

  20. Elaboration Likelihood and the Counseling Process: The Role of Affect.

    ERIC Educational Resources Information Center

    Stoltenberg, Cal D.; And Others

    The role of affect in counseling has been examined from several orientations. The depth of processing model views the efficiency of information processing as a function of the extent to which the information is processed. The notion of cognitive processing capacity states that processing information at deeper levels engages more of one's limited…

  1. 5 CFR 582.202 - Service of legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...

  2. 5 CFR 582.202 - Service of legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Service of legal process. 582.202 Section... GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.202 Service of legal process. (a) A person using this part shall serve interrogatories and legal process on the agent to receive process as...

  3. Information Processing Concepts: A Cure for "Technofright." Information Processing in the Electronic Office. Part 1: Concepts.

    ERIC Educational Resources Information Center

    Popyk, Marilyn K.

    1986-01-01

    Discusses the new automated office and its six major technologies (data processing, word processing, graphics, image, voice, and networking), the information processing cycle (input, processing, output, distribution/communication, and storage and retrieval), ergonomics, and ways to expand office education classes (versus class instruction). (CT)

  4. Facial Speech Gestures: The Relation between Visual Speech Processing, Phonological Awareness, and Developmental Dyslexia in 10-Year-Olds

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.

    2016-01-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…

  5. 40 CFR 65.62 - Process vent group determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., or Group 2B) for each process vent. Group 1 process vents require control, and Group 2A and 2B... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Process vent group determination. 65... (CONTINUED) CONSOLIDATED FEDERAL AIR RULE Process Vents § 65.62 Process vent group determination. (a) Group...

  6. 40 CFR 63.138 - Process wastewater provisions-performance standards for treatment processes managing Group 1...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .../or Table 9 compounds are similar and often identical. (3) Biological treatment processes. Biological treatment processes in compliance with this section may be either open or closed biological treatment processes as defined in § 63.111. An open biological treatment process in compliance with this section need...

  7. 5 CFR 581.202 - Service of process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Service of process. 581.202 Section 581... GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Service of Process § 581.202 Service of process. (a) A... facilitate proper service of process on its designated agent(s). If legal process is not directed to any...

  8. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  9. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  10. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  11. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  12. 30 CFR 828.11 - In situ processing: Performance standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false In situ processing: Performance standards. 828... STANDARDS-IN SITU PROCESSING § 828.11 In situ processing: Performance standards. (a) The person who conducts in situ processing activities shall comply with 30 CFR 817 and this section. (b) In situ processing...

  13. Processing Depth, Elaboration of Encoding, Memory Stores, and Expended Processing Capacity.

    ERIC Educational Resources Information Center

    Eysenck, Michael W.; Eysenck, M. Christine

    1979-01-01

    The effects of several factors on expended processing capacity were measured. Expended processing capacity was greater when information was retrieved from secondary memory than from primary memory, when processing was of a deep, semantic nature than when it was shallow and physical, and when processing was more elaborate. (Author/GDC)

  14. Speed isn’t everything: Complex processing speed measures mask individual differences and developmental changes in executive control

    PubMed Central

    Cepeda, Nicholas J.; Blackwell, Katharine A.; Munakata, Yuko

    2012-01-01

    The rate at which people process information appears to influence many aspects of cognition across the lifespan. However, many commonly accepted measures of “processing speed” may require goal maintenance, manipulation of information in working memory, and decision-making, blurring the distinction between processing speed and executive control and resulting in overestimation of processing-speed contributions to cognition. This concern may apply particularly to studies of developmental change, as even seemingly simple processing speed measures may require executive processes to keep children and older adults on task. We report two new studies and a re-analysis of a published study, testing predictions about how different processing speed measures influence conclusions about executive control across the life span. We find that the choice of processing speed measure affects the relationship observed between processing speed and executive control, in a manner that changes with age, and that choice of processing speed measure affects conclusions about development and the relationship among executive control measures. Implications for understanding processing speed, executive control, and their development are discussed. PMID:23432836

  15. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    NASA Astrophysics Data System (ADS)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-05-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  16. A new class of random processes with application to helicopter noise

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.; Miamee, A. G.

    1989-01-01

    The concept of dividing random processes into classes (e.g., stationary, locally stationary, periodically correlated, and harmonizable) has long been employed. A new class of random processes is introduced which includes many of these processes as well as other interesting processes which fall into none of the above classes. Such random processes are denoted as linearly correlated. This class is shown to include the familiar stationary and periodically correlated processes as well as many other, both harmonizable and non-harmonizable, nonstationary processes. When a process is linearly correlated for all t and harmonizable, its two-dimensional power spectral density S_x(omega_1, omega_2) is shown to take a particularly simple form, being non-zero only on lines such that omega_1 - omega_2 = +/- r_k, where the r_k's are (not necessarily equally spaced) roots of a characteristic function. The relationship of such processes to the class of stationary processes is examined. In addition, the application of such processes in the analysis of typical helicopter noise signals is described.
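One of the classes mentioned above, a periodically correlated process, is easy to simulate and check numerically. The construction here (periodic amplitude modulation of white noise) is a standard textbook example, not the paper's, and serves only to show the defining property: the second-order statistics repeat with the modulation period T.

```python
# Numerical check of periodic correlation: build X(t) = c(t) Z(t) with
# c periodic and Z white, then verify R(t + T, t + T) ≈ R(t, t).
import numpy as np

rng = np.random.default_rng(0)
T, n = 8, 64
c = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / T)   # periodic amplitude
X = c * rng.standard_normal((20000, n))                # many realizations
var = (X ** 2).mean(axis=0)                            # estimate of R(t, t)
# The variance profile is genuinely time-varying (nonstationary) but
# repeats with period T, up to Monte Carlo error.
print(float(np.max(np.abs(var[:-T] - var[T:]))))
```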

  17. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    PubMed Central

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-01-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling. PMID:27157858

  18. A new class of random processes with application to helicopter noise

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.; Miamee, A. G.

    1989-01-01

    The concept of dividing random processes into classes (e.g., stationary, locally stationary, periodically correlated, and harmonizable) has long been employed. A new class of random processes is introduced which includes many of these processes as well as other interesting processes which fall into none of the above classes. Such random processes are denoted as linearly correlated. This class is shown to include the familiar stationary and periodically correlated processes as well as many other, both harmonizable and non-harmonizable, nonstationary processes. When a process is linearly correlated for all t and harmonizable, its two-dimensional power spectral density S_x(omega_1, omega_2) is shown to take a particularly simple form, being non-zero only on the lines omega_1 - omega_2 = +/- r_k, where the r_k are (not necessarily equally spaced) roots of a characteristic function. The relationship of such processes to the class of stationary processes is examined. In addition, the application of such processes in the analysis of typical helicopter noise signals is described.
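    In schematic notation (ours, not necessarily the paper's exact normalization), the support condition quoted above says that the spectral mass of a harmonizable, linearly correlated process is concentrated on a family of lines parallel to the diagonal:

    ```latex
    S_x(\omega_1, \omega_2) \;=\; \sum_{k} S_k(\omega_1)\,
      \delta\!\bigl(\omega_1 - \omega_2 - r_k\bigr), \qquad r_{-k} = -r_k .
    ```

    For a stationary process only the diagonal line r_0 = 0 carries mass, while for a periodically correlated process with period T the lines are equally spaced at r_k = 2 pi k / T; the linearly correlated class relaxes the equal-spacing requirement.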

  19. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    PubMed

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from ordering a book online until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  20. Rapid Automatized Naming in Children with Dyslexia: Is Inhibitory Control Involved?

    PubMed

    Bexkens, Anika; van den Wildenberg, Wery P M; Tijms, Jurgen

    2015-08-01

    Rapid automatized naming (RAN) is widely seen as an important indicator of dyslexia. The nature of the cognitive processes involved in rapid naming is, however, still a topic of controversy. We hypothesized that in addition to the involvement of phonological processes and processing speed, RAN is a function of inhibition processes, in particular of interference control. A total of 86 children with dyslexia and 31 normal readers were recruited. Our results revealed that in addition to phonological processing and processing speed, interference control predicts rapid naming in dyslexia; in contrast to these other two cognitive processes, however, inhibition is not significantly associated with their reading and spelling skills. After variance in reading and spelling associated with processing speed, interference control and phonological processing was partialled out, naming speed was no longer consistently associated with the reading and spelling skills of children with dyslexia. Finally, children with dyslexia differed from normal readers on naming speed, literacy skills, phonological processing and processing speed, but not on inhibition processes. Both theoretical and clinical interpretations of these results are discussed. Copyright © 2014 John Wiley & Sons, Ltd.

  1. Feasibility of using continuous chromatography in downstream processing: Comparison of costs and product quality for a hybrid process vs. a conventional batch process.

    PubMed

    Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian

    2017-10-10

    The protein A capture step is the main cost driver in downstream processing, with high attrition costs especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step while leaving the polishing steps unchanged, to minimize the process adaptations required compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in product quality and step yield comparable to the batch process. Productivity for the protein A step could be increased by up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, cost of goods was compared for the protein A step between the hybrid process and the batch process, showing a 300% cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Non-Conscious Perception of Emotions in Psychiatric Disorders: The Unsolved Puzzle of Psychopathology.

    PubMed

    Lee, Seung A; Kim, Chai-Youn; Lee, Seung-Hwan

    2016-03-01

    Psychophysiological and functional neuroimaging studies have frequently and consistently shown that emotional information can be processed outside of the conscious awareness. Non-conscious processing comprises automatic, uncontrolled, and fast processing that occurs without subjective awareness. However, how such non-conscious emotional processing occurs in patients with various psychiatric disorders requires further examination. In this article, we reviewed and discussed previous studies on the non-conscious emotional processing in patients diagnosed with anxiety disorder, schizophrenia, bipolar disorder, and depression, to further understand how non-conscious emotional processing varies across these psychiatric disorders. Although the symptom profile of each disorder does not often overlap with one another, these patients commonly show abnormal emotional processing based on the pathology of their mood and cognitive function. This indicates that the observed abnormalities of emotional processing in certain social interactions may derive from a biased mood or cognition process that precedes consciously controlled and voluntary processes. Since preconscious forms of emotional processing appear to have a major effect on behaviour and cognition in patients with these disorders, further investigation is required to understand these processes and their impact on patient pathology.

  3. Empirical evaluation of the Process Overview Measure for assessing situation awareness in process plants.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-03-01

    The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) from monitoring process plants. A companion paper describes how the measure has been developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback of participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected on process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on monitoring process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) from monitoring process plants in representative settings.

  4. A Framework for Business Process Change Requirements Analysis

    NASA Astrophysics Data System (ADS)

    Grover, Varun; Otim, Samuel

    The ability to quickly and continually adapt business processes to accommodate evolving requirements and opportunities is critical for success in competitive environments. Without appropriate linkage between redesign decisions and strategic inputs, identifying processes that need to be modified will be difficult. In this paper, we draw attention to the analysis of business process change requirements in support of process change initiatives. Business process redesign is a multifaceted phenomenon involving processes, organizational structure, management systems, human resource architecture, and many other aspects of organizational life. To be successful, the business process initiative should focus not only on identifying the processes to be redesigned, but also pay attention to various enablers of change. Above all, a framework is just a blueprint; management must lead change. We hope our modest contribution will draw attention to the broader framing of requirements for business process change.

  5. Process Analytical Technology (PAT): batch-to-batch reproducibility of fermentation processes by robust process operational design and control.

    PubMed

    Gnoth, S; Jenzsch, M; Simutis, R; Lübbert, A

    2007-10-31

    The Process Analytical Technology (PAT) initiative of the FDA is a reaction to the increasing discrepancy between the current possibilities in process supervision and control of pharmaceutical production processes and their current application in industrial manufacturing. With rigid approval practices based on standard operational procedures, adaptation of production reactors to the state of the art was more or less inhibited for many years. Now PAT paves the way for continuous process and product improvements through improved process supervision based on knowledge-based data analysis, "Quality-by-Design" concepts, and, finally, feedback control. Examples of up-to-date implementations of this concept are presented. They are taken from one key group of processes in recombinant pharmaceutical protein manufacturing, the cultivation of genetically modified Escherichia coli bacteria.

  6. When teams shift among processes: insights from simulation and optimization.

    PubMed

    Kennedy, Deanna M; McComb, Sara A

    2014-09-01

    This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zaccaro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.
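    Operationally, a process shift is the point where coded team communication changes from one process category to the next. A minimal sketch of extracting shift points from a coded event sequence (the category labels here are illustrative, not the authors' actual coding scheme):

    ```python
    # Each element codes the team process for one communication event
    # (labels are illustrative placeholders, not the study's coding scheme).
    events = ["mission_analysis", "mission_analysis", "planning",
              "planning", "planning", "action", "action", "planning", "action"]

    def process_shifts(seq):
        """Return (event_index, from_process, to_process) for every shift point."""
        return [(i, seq[i - 1], seq[i])
                for i in range(1, len(seq)) if seq[i] != seq[i - 1]]

    shifts = process_shifts(events)
    print(shifts)  # first shift: at event 2, mission_analysis -> planning
    ```

    Comparing such shift sequences between observed and optimal simulated teams is one way to quantify whether an intervention moved shift timing and order in the desired direction.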

  7. Nitrous oxide and methane emissions from different treatment processes in full-scale municipal wastewater treatment plants.

    PubMed

    Rena, Y G; Wang, J H; Li, H F; Zhang, J; Qi, P Y; Hu, Z

    2013-01-01

    Nitrous oxide (N2O) and methane (CH4) are two important greenhouse gases (GHG) emitted from biological nutrient removal (BNR) processes in municipal wastewater treatment plants (WWTP). In this study, three typical biological wastewater treatment processes were studied in WWTP of Northern China: the pre-anaerobic carrousel oxidation ditch (A+OD) process, the pre-anoxic anaerobic-anoxic-oxic (A-A/A/O) process and the reverse anaerobic-anoxic-oxic (r-A/A/O) process. The N2O and CH4 emissions from these three different processes were measured in every processing unit of each WWTP. Results showed that N2O and CH4 were mainly discharged during the nitrification/denitrification process and the anaerobic/anoxic treatment process, respectively, and that the amounts of their formation and release were significantly influenced by the different BNR processes implemented in these WWTP. The N2O conversion ratio of the r-A/A/O process was the lowest among the three WWTP, being 10.9% and 18.6% lower than that of the A-A/A/O process and the A+OD process, respectively. Similarly, the CH4 conversion ratio of the r-A/A/O process was the lowest among the three WWTP, being 89.1% and 80.8% lower than that of the A-A/A/O process and the A+OD process, respectively. The factors influencing N2O and CH4 formation and emission in the three WWTP were investigated to explain the differences between these processes. The nitrite concentration and the oxidation-reduction potential (ORP) value were found to be the dominant factors affecting N2O and CH4 production, respectively. The flow-based emission factors of N2O and CH4 of the WWTP were determined for better quantification of GHG emissions and further technical assessment of mitigation options.
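    The flow-based emission factor mentioned at the end is simply emitted gas mass normalized by treated wastewater volume. A minimal sketch of the calculation with made-up illustrative numbers (not values from the study):

    ```python
    # Flow-based emission factor: grams of gas emitted per m^3 of wastewater
    # treated. All numbers below are illustrative placeholders, not study data.

    def emission_factor(mass_emitted_g: float, flow_m3: float) -> float:
        """Return the flow-based emission factor in g per m^3 treated."""
        return mass_emitted_g / flow_m3

    n2o_g_per_day = 1250.0     # hypothetical total N2O emitted (g/d)
    flow_m3_per_day = 50000.0  # hypothetical wastewater treated (m^3/d)

    ef = emission_factor(n2o_g_per_day, flow_m3_per_day)
    print(f"N2O emission factor: {ef:.4f} g/m^3")  # 0.0250 g/m^3
    ```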

  8. Effects of children's working memory capacity and processing speed on their sentence imitation performance.

    PubMed

    Poll, Gerard H; Miller, Carol A; Mainela-Arnold, Elina; Adams, Katharine Donnelly; Misra, Maya; Park, Ji Sook

    2013-01-01

    More limited working memory capacity and slower processing for language and cognitive tasks are characteristics of many children with language difficulties. Individual differences in processing speed have not consistently been found to predict language ability or severity of language impairment. There are conflicting views on whether working memory and processing speed are integrated or separable abilities. To evaluate four models for the relations of individual differences in children's processing speed and working memory capacity in sentence imitation. The models considered whether working memory and processing speed are integrated or separable, as well as the effect of the number of operations required per sentence. The role of working memory as a mediator of the effect of processing speed on sentence imitation was also evaluated. Forty-six children with varied language and reading abilities imitated sentences. Working memory was measured with the Competing Language Processing Task (CLPT), and processing speed was measured with a composite of truth-value judgment and rapid automatized naming tasks. Mixed-effects ordinal regression models evaluated the CLPT and processing speed as predictors of sentence imitation item scores. A single mediator model evaluated working memory as a mediator of the effect of processing speed on sentence imitation total scores. Working memory was a reliable predictor of sentence imitation accuracy, but processing speed predicted sentence imitation only as a component of a processing speed by number of operations interaction. Processing speed predicted working memory capacity, and there was evidence that working memory acted as a mediator of the effect of processing speed on sentence imitation accuracy. The findings support a refined view of working memory and processing speed as separable factors in children's sentence imitation performance. 
Processing speed does not independently explain sentence imitation accuracy for all sentence types, but contributes when the task requires more mental operations. Processing speed also has an indirect effect on sentence imitation by contributing to working memory capacity. © 2013 Royal College of Speech and Language Therapists.
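    The mediation result reported above (working memory mediating the effect of processing speed on sentence imitation) is conventionally quantified as an indirect effect: the product of the path from predictor to mediator and the path from mediator to outcome. A minimal sketch of that computation on synthetic data (illustrative only, not the authors' actual mixed-effects analysis):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Synthetic data with a built-in mediation structure (illustrative only):
    speed = rng.normal(size=n)                 # predictor: processing speed
    wm = 0.5 * speed + rng.normal(size=n)      # mediator: working memory
    imitation = 0.6 * wm + rng.normal(size=n)  # outcome: sentence imitation

    # Path a: regress the mediator on the predictor.
    Xa = np.column_stack([np.ones(n), speed])
    a = np.linalg.lstsq(Xa, wm, rcond=None)[0][1]

    # Path b: regress the outcome on the mediator, controlling for the predictor.
    Xb = np.column_stack([np.ones(n), speed, wm])
    b = np.linalg.lstsq(Xb, imitation, rcond=None)[0][2]

    indirect = a * b  # indirect (mediated) effect of speed via working memory
    print(f"a = {a:.2f}, b = {b:.2f}, indirect effect = {indirect:.2f}")
    ```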

  9. Q-marker based strategy for CMC research of Chinese medicine: A case study of Panax Notoginseng saponins.

    PubMed

    Zhong, Yi; Zhu, Jieqiang; Yang, Zhenzhong; Shao, Qing; Fan, Xiaohui; Cheng, Yiyu

    2018-01-31

    To ensure pharmaceutical quality, chemistry, manufacturing and control (CMC) research is essential. However, due to the inherent complexity of Chinese medicine (CM), CMC study of CM remains a great challenge for academia, industry, and regulatory agencies. Recently, the quality-marker (Q-marker) concept was proposed for establishing quality standards and quality analysis approaches for Chinese medicine, which sheds light on its CMC study. Here the manufacturing process of Panax notoginseng saponins (PNS) is taken as a case study to establish a Q-marker based research strategy for CMC of Chinese medicine. The Q-markers of PNS are selected and established by integrating the chemical profile with pharmacological activities. Then, the key processes of PNS manufacturing are identified by material flow analysis. Furthermore, modeling algorithms are employed to explore the relationship between Q-markers and critical process parameters (CPPs) of the key processes. Finally, the CPPs of the key processes are optimized in order to improve process efficiency. Among the 97 identified compounds, notoginsenoside R1 and ginsenosides Rg1, Re, Rb1 and Rd are selected as the Q-markers of PNS. Our analysis of PNS manufacturing shows that the extraction process and the column chromatography process are the key processes. With the CPPs of each process as the inputs and the Q-markers' contents as the outputs, two process prediction models are built separately for the extraction process and the column chromatography process of Panax notoginseng, both of which possess good prediction ability. Based on the efficiency models of the extraction process and the column chromatography process we constructed, the optimal CPPs of both processes are calculated. Our results show that the Q-marker based CMC research strategy can be applied to analyze the manufacturing processes of Chinese medicine, to assure product quality and improve key process efficiency simultaneously. Copyright © 2018 Elsevier GmbH. All rights reserved.
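    The process-prediction models described above map critical process parameters (CPPs) to Q-marker contents. A minimal sketch of such a model using ordinary least squares on synthetic data (the study's actual algorithms, variables, and values are not specified here; all names and numbers are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 60

    # Synthetic CPPs for an extraction step (illustrative placeholders):
    temp = rng.uniform(60, 90, n)   # extraction temperature (deg C)
    time_h = rng.uniform(1, 4, n)   # extraction time (h)
    ratio = rng.uniform(6, 12, n)   # solvent-to-solid ratio

    # Synthetic Q-marker content (e.g., ginsenoside Rg1, mg/g) with a known trend:
    rg1 = 0.05 * temp + 1.2 * time_h + 0.3 * ratio + rng.normal(0, 0.5, n)

    # Fit a linear CPP -> Q-marker model by least squares.
    X = np.column_stack([np.ones(n), temp, time_h, ratio])
    coef, *_ = np.linalg.lstsq(X, rg1, rcond=None)

    pred = X @ coef
    r2 = 1 - ((rg1 - pred) ** 2).sum() / ((rg1 - rg1.mean()) ** 2).sum()
    print(f"fitted coefficients: {np.round(coef, 2)}, R^2 = {r2:.2f}")
    ```

    With a fitted model of this kind, optimal CPPs can then be searched for by maximizing the predicted Q-marker content over the allowed parameter ranges.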

  10. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools

    PubMed Central

    2012-01-01

    Background Gas chromatography–mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface, facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools suitable for scripting of high-throughput, customized processing pipelines. Results PyMS comprises a library of functions for processing of instrument GC-MS data, developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and for interactive/exploratory data analysis. 
    In real-life GC-MS data processing scenarios PyMS performs as well as or better than leading software packages. We demonstrate data processing scenarios that are simple to implement in PyMS, yet difficult to achieve with many conventional GC-MS data processing software packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface. PMID:22647087
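    Two of the pipeline steps listed above, noise smoothing and baseline correction, can be sketched in a few lines of plain NumPy. The function names and the synthetic chromatogram below are illustrative stand-ins, not the real PyMS API:

    ```python
    import numpy as np

    def smooth(signal: np.ndarray, window: int = 5) -> np.ndarray:
        """Moving-average noise smoothing (illustrative, not the PyMS API)."""
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    def baseline_correct(signal: np.ndarray, window: int = 51) -> np.ndarray:
        """Crude rolling-minimum baseline estimation and subtraction."""
        half = window // 2
        padded = np.pad(signal, half, mode="edge")
        baseline = np.array([padded[i:i + window].min()
                             for i in range(len(signal))])
        return signal - baseline

    # Synthetic chromatogram: two Gaussian peaks on a drifting baseline + noise.
    t = np.linspace(0, 10, 500)
    raw = (np.exp(-((t - 3) ** 2) / 0.05)
           + 0.6 * np.exp(-((t - 7) ** 2) / 0.05)
           + 0.1 * t
           + 0.02 * np.random.default_rng(2).normal(size=t.size))

    clean = baseline_correct(smooth(raw))
    print(f"max peak height after correction: {clean.max():.2f}")
    ```

    A scripted batch pipeline would apply such steps file by file, which is exactly the command-line use case the abstract argues for.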

  11. The Research Process on Converter Steelmaking Process by Using Limestone

    NASA Astrophysics Data System (ADS)

    Tang, Biao; Li, Xing-yi; Cheng, Han-chi; Wang, Jing; Zhang, Yun-long

    2017-08-01

    Compared with the traditional converter steelmaking process, the steelmaking process with limestone uses limestone to partly replace lime. Many researchers have studied the new steelmaking process. There is much related research on material balance calculation, the behaviour of limestone in the slag, limestone powder injection in the converter, and the application of limestone in iron and steel enterprises. The results show that the surplus heat of the converter can meet the needs of limestone calcination, and that the new process can reduce energy loss across the whole steelmaking process, reduce carbon dioxide emissions, and improve the quality of the gas.
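    The heat-balance claim (converter surplus heat can cover in-vessel limestone calcination) rests on simple thermochemical arithmetic. A sketch using standard values (CaCO3 -> CaO + CO2, endothermic by roughly 178 kJ/mol; the converter-side heat surplus would come from a plant-specific balance not given here):

    ```python
    # Heat needed to calcine limestone, from standard thermochemistry.
    DELTA_H_KJ_PER_MOL = 178.0      # CaCO3 -> CaO + CO2 (endothermic)
    MOLAR_MASS_G_PER_MOL = 100.09   # CaCO3

    def calcination_heat_gj_per_tonne(purity: float = 1.0) -> float:
        """Heat demand in GJ per tonne of limestone at the given CaCO3 purity."""
        mol_per_tonne = purity * 1e6 / MOLAR_MASS_G_PER_MOL
        return mol_per_tonne * DELTA_H_KJ_PER_MOL * 1e-6  # kJ -> GJ

    print(f"{calcination_heat_gj_per_tonne():.2f} GJ per tonne of pure limestone")
    ```

    Roughly 1.8 GJ/t of limestone must therefore be supplied by the converter's surplus heat for the substitution to balance.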

  12. Working on the Boundaries: Philosophies and Practices of the Design Process

    NASA Technical Reports Server (NTRS)

    Ryan, R.; Blair, J.; Townsend, J.; Verderaime, V.

    1996-01-01

    While the systems engineering process is a formal program management technique and contractually binding, the design process is the informal practice of achieving the design project requirements throughout all design phases of the systems engineering process. The design process and organization are system and component dependent. Informal reviews include technical information meetings and concurrent engineering sessions, while formal technical discipline reviews are conducted through the systems engineering process. This paper discusses and references major philosophical principles in the design process, identifies its role in interacting system and discipline analyses and integrations, and illustrates the process application in experienced aerostructural designs.

  13. Chemical processing of lunar materials

    NASA Technical Reports Server (NTRS)

    Criswell, D. R.; Waldron, R. D.

    1979-01-01

    The paper highlights recent work on the general problem of processing lunar materials. The discussion covers lunar source materials, refined products, motivations for using lunar materials, and general considerations for a lunar or space processing plant. Attention is given to chemical processing through various techniques, including electrolysis of molten silicates, carbothermic/silicothermic reduction, the carbo-chlorination process, the NaOH basic-leach process, and the HF acid-leach process. Several options for chemical processing of lunar materials are well within the state of the art of applied chemistry and chemical engineering, and their development could begin on the basis of the extensive knowledge of lunar materials.

  14. Coordination and organization of security software process for power information application environment

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    2017-09-01

    As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of the security software process are discussed, as are the necessity and present significance of using such a process. In coordination with the functional software, the process for security software and its testing are discussed in depth. The process includes requirement analysis, design, coding, debugging and testing, submission, and maintenance. For each of these phases, the paper proposes subprocesses to support software security. As an example, the paper introduces the above process into a power information platform.

  15. Sensor-based atomic layer deposition for rapid process learning and enhanced manufacturability

    NASA Astrophysics Data System (ADS)

    Lei, Wei

    In the search for a sensor-based atomic layer deposition (ALD) process to accelerate process learning and enhance manufacturability, we have explored new reactor designs and applied in-situ process sensing to W and HfO2 ALD processes. A novel wafer-scale ALD reactor, which features fast gas switching, good process sensing compatibility and significant similarity to the real manufacturing environment, was constructed. The reactor has a unique movable reactor cap design that allows two possible operation modes: (1) steady-state flow with alternating gas species; or (2) fill-and-pump-out cycling of each gas, accelerating the pump-out by lifting the cap to employ the large chamber volume as ballast. Downstream quadrupole mass spectrometry (QMS) sampling is applied for in-situ process sensing of the tungsten ALD process. The QMS reveals essential surface reaction dynamics through real-time signals associated with byproduct generation as well as precursor introduction and depletion for each ALD half cycle, which are then used for process learning and optimization. More subtle interactions such as imperfect surface saturation and reactant dose interaction are also directly observed by QMS, indicating that the ALD process is more complicated than the suggested layer-by-layer growth. By integrating in real time the byproduct QMS signals over each exposure and plotting them against process cycle number, the deposition kinetics on the wafer is directly measured. For continuous ALD runs, the total integrated byproduct QMS signal in each ALD run is also linear in ALD film thickness, and therefore can be used for ALD film thickness metrology. The in-situ process sensing is also applied to a HfO2 ALD process carried out in a furnace-type ALD reactor. Precursor dose end-point control is applied to precisely control the precursor dose in each half cycle. Multiple process sensors, including a quartz crystal microbalance (QCM) and QMS, are used to provide real-time process information. 
    The sensing results confirm the proposed surface reaction path and once again reveal the complexity of ALD processes. The impact of this work includes: (1) it explores new ALD reactor designs which enable the implementation of in-situ process sensors for rapid process learning and enhanced manufacturability; (2) it demonstrates for the first time that in-situ QMS can reveal detailed process dynamics and film growth kinetics in a wafer-scale ALD process, and thus can be used for ALD film thickness metrology; and (3) based on results from two different processes carried out in two different reactors, it is clear that ALD is a more complicated process than normally believed or advertised, but real-time observation of the operational chemistries in ALD by in-situ sensors provides critical insight into the process and the basis for more effective process control for ALD applications.
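    The thickness-metrology idea described above, integrating the byproduct QMS signal over each exposure and accumulating over cycles, can be sketched as follows (a synthetic signal standing in for real QMS traces; all numbers are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_cycles, n_samples = 50, 100
    t = np.linspace(0.0, 1.0, n_samples)  # time within one exposure (arb. units)
    dt = t[1] - t[0]

    # Synthetic per-cycle QMS byproduct traces: a decaying reaction pulse + noise.
    traces = [np.exp(-5.0 * t) + 0.02 * rng.normal(size=n_samples)
              for _ in range(n_cycles)]

    def integrate(y: np.ndarray) -> float:
        """Trapezoidal-rule integral of one trace over the exposure window."""
        return float((y.sum() - 0.5 * (y[0] + y[-1])) * dt)

    per_cycle = np.array([integrate(trace) for trace in traces])
    cumulative = np.cumsum(per_cycle)

    # With constant growth per cycle, the cumulative integral grows linearly in
    # cycle number and serves as an in-situ proxy for deposited film thickness.
    slope = np.polyfit(np.arange(1, n_cycles + 1), cumulative, 1)[0]
    print(f"mean integrated signal per cycle: {per_cycle.mean():.3f}, "
          f"slope of cumulative signal: {slope:.3f}")
    ```

    Deviations of the per-cycle integral from a constant value would flag the saturation imperfections and dose interactions the abstract mentions.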

  16. Implicit Processes, Self-Regulation, and Interventions for Behavior Change.

    PubMed

    St Quinton, Tom; Brunton, Julie A

    2017-01-01

    The ability to regulate and subsequently change behavior is influenced by both reflective and implicit processes. Traditional theories have focused on conscious processes by highlighting the beliefs and intentions that influence decision making. However, their success in changing behavior has been modest, with a gap between intention and behavior apparent. Dual-process models have recently been applied to health psychology, with numerous models incorporating implicit processes that influence behavior as well as the more common conscious processes. Such implicit processes are theorized to govern behavior non-consciously. The article provides a commentary on motivational and volitional processes and how interventions have combined them in attempts to increase positive health behaviors. Following this, non-conscious processes are discussed in terms of their theoretical underpinnings. The article then highlights how these processes have been measured and discusses the different ways in which non-conscious and conscious processes may interact. The development of interventions manipulating both processes may well prove crucial in successfully altering behavior.

  17. All varieties of encoding variability are not created equal: Separating variable processing from variable tasks

    PubMed Central

    Huff, Mark J.; Bodner, Glen E.

    2014-01-01

    Whether encoding variability facilitates memory is shown to depend on whether item-specific and relational processing are both performed across study blocks, and whether study items are weakly versus strongly related. Variable-processing groups studied a word list once using an item-specific task and once using a relational task. Variable-task groups’ two different study tasks recruited the same type of processing each block. Repeated-task groups performed the same study task each block. Recall and recognition were greatest in the variable-processing group, but only with weakly related lists. A variable-processing benefit was also found when task-based processing and list-type processing were complementary (e.g., item-specific processing of a related list) rather than redundant (e.g., relational processing of a related list). That performing both item-specific and relational processing across trials, or within a trial, yields encoding-variability benefits may help reconcile decades of contradictory findings in this area. PMID:25018583

  18. Continuous welding of unidirectional fiber reinforced thermoplastic tape material

    NASA Astrophysics Data System (ADS)

    Schledjewski, Ralf

    2017-10-01

    Continuous welding techniques like thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes an economically viable process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality. The effects of material imperfections or process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging. Solving inverse problems and using methods for automated code generation that allow fast implementation of algorithms on targets are required. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model-based process control system are presented.

  19. Economics of polysilicon process: A view from Japan

    NASA Technical Reports Server (NTRS)

    Shimizu, Y.

    1986-01-01

    The production process of solar grade silicon (SOG-Si) through trichlorosilane (TCS) was researched in a program sponsored by the New Energy Development Organization (NEDO). The NEDO process consists of the following two steps: TCS production from by-product silicon tetrachloride (STC), and SOG-Si formation from TCS using a fluidized bed reactor. Based on the data obtained during the research program, the manufacturing cost of the NEDO process and other polysilicon manufacturing processes were compared. The manufacturing cost was calculated on the basis of 1000 tons/year production. The cost estimate showed that the cost of producing silicon by all of the new processes is less than the cost by the conventional Siemens process. Using a new process, the cost of producing semiconductor grade silicon was found to be virtually the same for any of the TCS, dichlorosilane, and monosilane processes when by-products were recycled. The SOG-Si manufacturing process using the fluidized bed reactor, which needs further development, shows a greater probability of cost reduction than the filament processes.

  20. Autonomous Agents for Dynamic Process Planning in the Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Nik Nejad, Hossein Tehrani; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka

    Rapid changes of market demands and pressures of competition require manufacturers to maintain highly flexible manufacturing systems to cope with a complex manufacturing environment. This paper deals with the development of an agent-based architecture of dynamic systems for incremental process planning in manufacturing systems. In consideration of alternative manufacturing processes and machine tools, the process plans and the schedules of the manufacturing resources are generated incrementally and dynamically. A negotiation protocol is discussed in this paper to generate suitable process plans for the target products in real time and dynamically, based on the alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in the previous paper, and suitable process plans are searched for and generated to cope with both dynamic changes of the product specifications and disturbances of the manufacturing resources. We combine the heuristic search algorithms of the process plan networks with the negotiation protocols in order to generate suitable process plans in the dynamic manufacturing environment.

  1. Implicit Processes, Self-Regulation, and Interventions for Behavior Change

    PubMed Central

    St Quinton, Tom; Brunton, Julie A.

    2017-01-01

    The ability to regulate and subsequently change behavior is influenced by both reflective and implicit processes. Traditional theories have focused on conscious processes by highlighting the beliefs and intentions that influence decision making. However, their success in changing behavior has been modest, with an apparent gap between intention and behavior. Dual-process models have recently been applied to health psychology, with numerous models incorporating implicit processes that influence behavior as well as the more common conscious processes. Such implicit processes are theorized to govern behavior non-consciously. The article provides a commentary on motivational and volitional processes and how interventions have combined them in attempts to increase positive health behaviors. Following this, non-conscious processes are discussed in terms of their theoretical underpinning. The article then highlights how these processes have been measured and discusses the different ways that the non-conscious and conscious may interact. The development of interventions manipulating both processes may well prove crucial in successfully altering behavior. PMID:28337164

  2. Models of recognition: a review of arguments in favor of a dual-process account.

    PubMed

    Diana, Rachel A; Reder, Lynne M; Arndt, Jason; Park, Heekyeong

    2006-02-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models.

  3. Integrating Thermal Tools Into the Mechanical Design Process

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.; Siebes, Georg; Novak, Keith S.; Kinsella, Gary M.

    1999-01-01

    The intent of mechanical design is to deliver a hardware product that meets or exceeds customer expectations, while reducing cycle time and cost. To this end, an integrated mechanical design process enables the idea of parallel development (concurrent engineering). This represents a shift from the traditional mechanical design process. With such a concurrent process, there are significant issues that have to be identified and addressed before re-engineering the mechanical design process to facilitate concurrent engineering. These issues also assist in the integration and re-engineering of the thermal design sub-process since it resides within the entire mechanical design process. With these issues in mind, a thermal design sub-process can be re-defined in a manner that has a higher probability of acceptance, thus enabling an integrated mechanical design process. However, the actual implementation is not always problem-free. Experience in applying the thermal design sub-process to actual situations provides the evidence for improvement, but more importantly, for judging the viability and feasibility of the sub-process.

  4. [Monitoring method of extraction process for Schisandrae Chinensis Fructus based on near infrared spectroscopy and multivariate statistical process control].

    PubMed

    Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li

    2017-10-01

    To establish an on-line monitoring method for the extraction process of Schisandrae Chinensis Fructus, the formula medicinal material of Yiqi Fumai lyophilized injection, near infrared spectroscopy was combined with multivariate data analysis technology. The multivariate statistical process control (MSPC) model was established based on 5 normal batches in production, and 2 test batches were monitored by PC scores, DModX and Hotelling T2 control charts. The results showed that the MSPC model had a good monitoring ability for the extraction process. The application of the MSPC model to the actual production process could effectively achieve on-line monitoring for the extraction process of Schisandrae Chinensis Fructus, and can reflect the change of material properties in the production process in real time. This established process monitoring method could provide a reference for the application of process analysis technology in the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
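
    The Hotelling T2 chart used above flags batches whose multivariate scores drift away from the distribution of the normal training batches. As a minimal sketch of the statistic (not the authors' code; the data and function names here are hypothetical), T2 can be computed from PCA scores like this:

```python
import numpy as np

def hotelling_t2(scores, train_scores):
    """Hotelling T^2 for each new observation's score vector, measured
    against the mean and covariance of the normal training batches."""
    train_scores = np.asarray(train_scores, dtype=float)
    scores = np.asarray(scores, dtype=float)
    cov = np.cov(train_scores, rowvar=False)
    cov_inv = np.linalg.inv(np.atleast_2d(cov))
    centered = scores - train_scores.mean(axis=0)
    # quadratic form (x - mean) @ cov_inv @ (x - mean) per row
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

# hypothetical 2-component PCA scores of the "normal" batches
train = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
t2 = hotelling_t2([[0.0, 0.0], [3.0, 0.0]], train)
```

    Observations far from the training mean get large T2 values and would cross the chart's control limit, signaling an abnormal batch.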

  5. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
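
    The "summation of the compiled process step data" step can be pictured with a small sketch (purely illustrative; the step fields and metric names are assumptions, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    cycle_time: float  # hands-on processing time per part
    queue_time: float  # waiting time before the step

def process_metrics(steps):
    """Compile per-step data and sum it into line-level metrics."""
    total_cycle = sum(s.cycle_time for s in steps)
    lead_time = sum(s.cycle_time + s.queue_time for s in steps)
    return {
        "total_cycle_time": total_cycle,
        "lead_time": lead_time,
        # a classic lean figure of merit: share of time spent adding value
        "value_added_ratio": total_cycle / lead_time,
    }

metrics = process_metrics([
    ProcessStep("stamp", cycle_time=1.0, queue_time=3.0),
    ProcessStep("weld", cycle_time=2.0, queue_time=4.0),
])
```

    Comparing such metrics before and after a hypothetical lean redesign is one way the evaluation engine described above could support the transition decision.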

  6. Process yield improvements with process control terminal for varian serial ion implanters

    NASA Astrophysics Data System (ADS)

    Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken

    Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, i.e. wrong dose or species, double implants and missed implants. Process Control Terminals (PCT) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing steps. This paper describes those misprocessing steps and their subsequent reduction with use of PCTs. Reliable and simple process control with serial process ion implanters has been in increasing demand. A well designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced and those that occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact from the capability of a process control terminal is increased productivity, ergo higher device yield.

  7. An Aspect-Oriented Framework for Business Process Improvement

    NASA Astrophysics Data System (ADS)

    Pourshahid, Alireza; Mussbacher, Gunter; Amyot, Daniel; Weiss, Michael

    Recently, many organizations invested in Business Process Management Systems (BPMSs) in order to automate and monitor their processes. Business Activity Monitoring is one of the essential modules of a BPMS as it provides the core monitoring capabilities. Although the natural step after process monitoring is process improvement, most of the existing systems do not provide the means to help users with the improvement step. In this paper, we address this issue by proposing an aspect-oriented framework that allows the impact of changes to business processes to be explored with what-if scenarios based on the most appropriate process redesign patterns among several possibilities. As the four cornerstones of a BPMS are process, goal, performance and validation views, these views need to be aligned automatically by any approach that intends to support automated improvement of business processes. Our framework therefore provides means to reflect process changes also in the other views of the business process. A health care case study presented as a proof of concept suggests that this novel approach is feasible.

  8. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
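
    For the simpler discrete-time analog of what the paper computes, the entropy rate of a finite Markov chain is h = -Σᵢ πᵢ Σⱼ Pᵢⱼ log₂ Pᵢⱼ, with π the stationary distribution. A sketch (assuming an ordinary discrete-time Markov chain, not the continuous-time semi-Markov models the paper actually treats):

```python
import numpy as np

def entropy_rate(P):
    """Shannon entropy rate (bits per step) of a discrete-time Markov
    chain with row-stochastic transition matrix P."""
    P = np.asarray(P, dtype=float)
    # stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()
    # convention: 0 * log 0 = 0 for impossible transitions
    logs = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-(pi[:, None] * P * logs).sum())

h_fair = entropy_rate([[0.5, 0.5], [0.5, 0.5]])  # i.i.d. fair coin: 1 bit/step
h_det = entropy_rate([[0.0, 1.0], [1.0, 0.0]])   # deterministic cycle: 0 bits
```

    The fully random chain attains the maximal rate while the deterministic one has zero intrinsic randomness, which is the distinction the entropy rate is meant to quantify.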

  9. Combined mesophilic anaerobic and thermophilic aerobic digestion process for high-strength food wastewater to increase removal efficiency and reduce sludge discharge.

    PubMed

    Jang, H M; Park, S K; Ha, J H; Park, J M

    2014-01-01

    In this study, a process that combines the mesophilic anaerobic digestion (MAD) process with thermophilic aerobic digestion (TAD) for high-strength food wastewater (FWW) treatment was developed to examine the removal of organic matter and methane production. All effluent discharged from the MAD process was separated into solid and liquid portions. The liquid part was discarded and the sludge part was passed to the TAD process for further degradation. Then, the digested sludge from the TAD process was recycled back to the MAD unit to achieve low sludge discharge from the combined process. The reactor combination was operated in two phases: during Phase I, 40 d of total hydraulic retention time (HRT) was applied; during Phase II, 20 d was applied. HRT of the TAD process was fixed at 5 d. For a comparison, a control process (single-stage MAD) was operated with the same HRTs of the combined process. Our results indicated that the combined process showed over 90% total solids, volatile solids and chemical oxygen demand removal efficiencies. In addition, the combined process showed a significantly higher methane production rate than that of the control process. Consequently, the experimental data demonstrated that the combined MAD-TAD process was successfully employed for high-strength FWW treatment with highly efficient organic matter reduction and methane production.

  10. Leading processes of patient care and treatment in hierarchical healthcare organizations in Sweden--process managers' experiences.

    PubMed

    Nilsson, Kerstin; Sandoff, Mette

    2015-01-01

    The purpose of this study is to gain a better understanding of the roles and functions of process managers by describing Swedish process managers' experiences of leading processes involving patient care and treatment when working in a hierarchical health-care organization. This study is based on an explorative design. The data were gathered from interviews with 12 process managers at three Swedish hospitals. These data underwent qualitative and interpretative analysis with a modified editing style. The process managers' experiences of leading processes in a hierarchical health-care organization are described under three themes: having or not having a mandate, exposure to conflict situations and leading process development. The results indicate a need for clarity regarding the process manager's responsibility and work content, which needs to be communicated to all managers and staff involved in the patient care and treatment process, irrespective of department. There also needs to be an emphasis on realistic expectations and an orientation toward the goals that are an intrinsic part of the task of being a process manager. Generalizations from the results of qualitative interview studies are limited, but a deeper understanding of the phenomenon was reached, which, in turn, can be transferred to similar settings. This study contributes qualitative descriptions of leading care and treatment processes in a functional, hierarchical health-care organization from process managers' experiences, a subject that has not been investigated earlier.

  11. Information Technology Process Improvement Decision-Making: An Exploratory Study from the Perspective of Process Owners and Process Managers

    ERIC Educational Resources Information Center

    Lamp, Sandra A.

    2012-01-01

    There is information available in the literature that discusses information technology (IT) governance and investment decision making from an executive-level perception, yet there is little information available that offers the perspective of process owners and process managers pertaining to their role in IT process improvement and investment…

  12. 43 CFR 2884.17 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How will BLM process my Processing...-WAY UNDER THE MINERAL LEASING ACT Applying for MLA Grants or TUPs § 2884.17 How will BLM process my... written agreement that describes how BLM will process your application. The final agreement consists of a...

  13. 43 CFR 2884.17 - How will BLM process my Processing Category 6 application?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How will BLM process my Processing...-WAY UNDER THE MINERAL LEASING ACT Applying for MLA Grants or TUPs § 2884.17 How will BLM process my... written agreement that describes how BLM will process your application. The final agreement consists of a...

  14. 15 CFR 15.3 - Acceptance of service of process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Acceptance of service of process. 15.3... Process § 15.3 Acceptance of service of process. (a) Except as otherwise provided in this subpart, any... employee by law is to be served personally with process. Service of process in this case is inadequate when...

  15. Weaknesses in Applying a Process Approach in Industry Enterprises

    NASA Astrophysics Data System (ADS)

    Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena

    2012-12-01

    The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent one of the proven ways to manage an organization. The volume of sales, costs and profit levels are influenced by the quality of processes and efficient process flow. As the results of the research project showed, there are some weaknesses in applying the process approach in industrial practice, and in many organizations in Slovakia it has often been only a formal change from functional management to process management. For efficient process management it is essential that companies pay attention to the way their processes are organized and seek their continuous improvement.

  16. Is Primary-Process Cognition a Feature of Hypnosis?

    PubMed

    Finn, Michael T; Goldman, Jared I; Lyon, Gyrid B; Nash, Michael R

    2017-01-01

    The division of cognition into primary and secondary processes is an important part of contemporary psychoanalytic metapsychology. Whereas primary processes are most characteristic of unconscious thought and loose associations, secondary processes generally govern conscious thought and logical reasoning. It has been theorized that an induction into hypnosis is accompanied by a predomination of primary-process cognition over secondary-process cognition. The authors hypothesized that highly hypnotizable individuals would demonstrate more primary-process cognition as measured by a recently developed cognitive-perceptual task. This hypothesis was not supported. In fact, low hypnotizable participants demonstrated higher levels of primary-process cognition. Exploratory analyses suggested a more specific effect: felt connectedness to the hypnotist seemed to promote secondary-process cognition among low hypnotizable participants.

  17. [Dual process in large number estimation under uncertainty].

    PubMed

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  18. Object-processing neural efficiency differentiates object from spatial visualizers.

    PubMed

    Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria

    2008-11-19

    The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned, while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.

  19. Process simulation for advanced composites production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  20. CDO budgeting

    NASA Astrophysics Data System (ADS)

    Nesladek, Pavel; Wiswesser, Andreas; Sass, Björn; Mauermann, Sebastian

    2008-04-01

    The critical dimension off-target (CDO) is a key parameter for mask house customers, directly affecting the performance of the mask. The CDO is the difference between the feature size target and the measured feature size. The change of CD during the process is compensated either within the process or by data correction. These compensation methods are commonly called process bias and data bias, respectively. A difference between data bias and process bias in manufacturing results in a systematic CDO error; however, this systematic error does not take into account the instability of the process bias. This instability is a result of minor variations - instabilities of manufacturing processes and changes in materials and/or logistics. Using several masks, the CDO of the manufacturing line can be estimated. For systematic investigation of the unit-process contributions to CDO and analysis of the factors influencing them, a solid understanding of each unit process and a huge number of masks are necessary. Rough identification of contributing processes and splitting of the final CDO variation between processes can be done with approx. 50 masks with identical design, material and process. Such an amount of data allows us to identify the main contributors and estimate their effects by means of analysis of variance (ANOVA) combined with multivariate analysis. The analysis does not provide information about the root cause of the variation within a particular unit process, but it does provide a good estimate of the impact of the process on the stability of the manufacturing line. Additionally, this analysis can be used to identify possible interactions between processes, which cannot be investigated if only single processes are considered. The goal of this work is to evaluate the limits for CDO budgeting models given by the precision and the number of measurements, as well as to partition the variation within the manufacturing process. The CDO variation splits, according to the suggested model, into contributions from particular processes or process groups. Last but not least, the power of this method to determine the absolute strength of each parameter will be demonstrated. Identifying the root cause of this variation within the unit process itself is not in the scope of this work.
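
    The variance partitioning behind the ANOVA step can be sketched as follows (illustrative only; the grouping and the numbers are hypothetical, not the paper's data):

```python
import numpy as np

def anova_partition(groups):
    """One-way ANOVA decomposition: SS_total = SS_between + SS_within.
    SS_between estimates the variation attributable to the grouping
    factor (e.g. a unit process); SS_within is the residual scatter."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return ss_between, ss_within

# hypothetical CDO measurements (nm) for masks split by two process tools
ss_b, ss_w = anova_partition([[1.0, 1.0], [3.0, 3.0]])
```

    A large between-group sum of squares relative to the within-group term is what flags a unit process as a main CDO contributor; with identical-design masks, crossed groupings extend this to the interaction terms mentioned above.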

  1. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    PubMed

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system viz. the Advanced Microscale Bioreactor (ambr15(TM) ), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15(TM) system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  2. Consumers' conceptualization of ultra-processed foods.

    PubMed

    Ares, Gastón; Vidal, Leticia; Allegue, Gimena; Giménez, Ana; Bandeira, Elisa; Moratorio, Ximena; Molina, Verónika; Curutchet, María Rosa

    2016-10-01

    Consumption of ultra-processed foods has been associated with low diet quality, obesity and other non-communicable diseases. This situation makes it necessary to develop educational campaigns to discourage consumers from substituting meals based on unprocessed or minimally processed foods by ultra-processed foods. In this context, the aim of the present work was to investigate how consumers conceptualize the term ultra-processed foods and to evaluate if the foods they perceive as ultra-processed are in concordance with the products included in the NOVA classification system. An online study was carried out with 2381 participants. They were asked to explain what they understood by ultra-processed foods and to list foods that can be considered ultra-processed. Responses were analysed using inductive coding. The great majority of the participants was able to provide an explanation of what ultra-processed foods are, which was similar to the definition described in the literature. Most of the participants described ultra-processed foods as highly processed products that usually contain additives and other artificial ingredients, stressing that they have low nutritional quality and are unhealthful. The most relevant products for consumers' conceptualization of the term were in agreement with the NOVA classification system and included processed meats, soft drinks, snacks, burgers, powdered and packaged soups and noodles. However, some of the participants perceived processed foods, culinary ingredients and even some minimally processed foods as ultra-processed. This suggests that in order to accurately convey their message, educational campaigns aimed at discouraging consumers from consuming ultra-processed foods should include a clear definition of the term and describe some of their specific characteristics, such as the type of ingredients included in their formulation and their nutritional composition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Rapid communication: Global-local processing affects recognition of distractor emotional faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2011-03-01

    Recent studies have shown links between happy faces and global, distributed attention as well as sad faces to local, focused attention. Emotions have been shown to affect global-local processing. Given that studies on emotion-cognition interactions have not explored the effect of perceptual processing at different spatial scales on processing stimuli with emotional content, the present study investigated the link between perceptual focus and emotional processing. The study investigated the effects of global-local processing on the recognition of distractor faces with emotional expressions. Participants performed a digit discrimination task with digits at either the global level or the local level presented against a distractor face (happy or sad) as background. The results showed that global processing associated with broad scope of attention facilitates recognition of happy faces, and local processing associated with narrow scope of attention facilitates recognition of sad faces. The novel results of the study provide conclusive evidence for emotion-cognition interactions by demonstrating the effect of perceptual processing on emotional faces. The results along with earlier complementary results on the effect of emotion on global-local processing support a reciprocal relationship between emotional processing and global-local processing. Distractor processing with emotional information also has implications for theories of selective attention.

  4. Tomographical process monitoring of laser transmission welding with OCT

    NASA Astrophysics Data System (ADS)

    Ackermann, Philippe; Schmitt, Robert

    2017-06-01

    Process control of laser processes still encounters many obstacles. Although these processes are stable, narrow process parameter windows and process deviations have led to an increase in the requirements for the process itself and for monitoring devices. Laser transmission welding, as a contactless and locally limited joining technique, is well established in a variety of demanding production areas. For example, sensitive parts demand a particle-free joining technique that does not affect the inner components. Inline-integrated, non-destructive optical measurement systems capable of providing non-invasive tomographical images of the transparent material, the weld seam, and its surrounding areas with micron resolution would improve the overall process. The obtained measurement data enable qualitative feedback into the system to adapt parameters for a more robust process. Within this paper we present the inline monitoring device based on Fourier-domain optical coherence tomography developed within the European-funded research project "Manunet Weldable". This device, after adaptation to the laser transmission welding process, is optically and mechanically integrated into the existing laser system. The main target is inline process control aimed at extracting tomographical geometrical measurement data from the weld-seam forming process. Usage of this technology makes offline destructive testing of produced parts obsolete.

  5. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    To introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and is therefore more lightweight than traditional quality management processes: it focuses on the quality criteria that are important at the given stage of the software life cycle and emphasizes tools that automate aspects of the process. To evaluate the additional effort that comes along with the process, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process was applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools lead to a lightweight quality refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.

  6. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
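    As a concrete illustration of the compound structure described above, a draw from an NB process has a Poisson-distributed number of distinct atoms, each carrying a logarithmic-distributed number of samples. A minimal simulation sketch; the mass parameter `gamma0` and probability `p` are illustrative values, not ones from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def nb_process_draw(gamma0=5.0, p=0.5):
    """One NB-process draw: a Poisson(gamma0) number of distinct atoms,
    each receiving a Logarithmic(p)-distributed number of samples."""
    k = rng.poisson(gamma0)          # number of distinct atoms
    return rng.logseries(p, size=k)  # samples per atom (each count >= 1)

counts = nb_process_draw()
total_samples = int(counts.sum())    # total data samples in this draw
```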

  7. [Process management in the hospital pharmacy for the improvement of the patient safety].

    PubMed

    Govindarajan, R; Perelló-Juncá, A; Parès-Marimòn, R M; Serrais-Benavente, J; Ferrandez-Martí, D; Sala-Robinat, R; Camacho-Calvente, A; Campabanal-Prats, C; Solà-Anderiu, I; Sanchez-Caparrós, S; Gonzalez-Estrada, J; Martinez-Olalla, P; Colomer-Palomo, J; Perez-Mañosas, R; Rodríguez-Gallego, D

    2013-01-01

    To define a process management model for a hospital pharmacy in order to measure, analyse and make continuous improvements in patient safety and healthcare quality. In order to implement process management, Igualada Hospital was divided into different processes, one of which was the Hospital Pharmacy. A multidisciplinary management team was given responsibility for each process. For each sub-process one person was identified to be responsible, and a working group was formed under his/her leadership. With the help of each working group, a risk analysis using failure modes and effects analysis (FMEA) was performed, and the corresponding improvement actions were implemented. Sub-process indicators were also identified, and different process management mechanisms were introduced. The first risk analysis with FMEA produced more than thirty preventive actions to improve patient safety. Later, the weekly analysis of errors, as well as the monthly analysis of key process indicators, permitted us to monitor process results and, as each sub-process manager participated in these meetings, also to assume accountability and responsibility, thus consolidating the culture of excellence. The introduction of different process management mechanisms, with the participation of people responsible for each sub-process, introduces a participative management tool for the continuous improvement of patient safety and healthcare quality. Copyright © 2012 SECA. Published by Elsevier Espana. All rights reserved.
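    The FMEA step described above can be sketched numerically: each failure mode is rated for severity (S), occurrence (O) and detectability (D), and the product, the risk priority number (RPN), ranks which preventive actions to take first. The failure modes and scores below are hypothetical, not taken from the study:

```python
# Minimal FMEA scoring sketch; ratings are on a 1-10 scale and the
# failure modes listed here are invented examples.
failure_modes = [
    {"mode": "wrong dose dispensed", "S": 9, "O": 3, "D": 4},
    {"mode": "label mix-up",         "S": 8, "O": 2, "D": 5},
    {"mode": "stock-out of drug",    "S": 6, "O": 4, "D": 2},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]  # risk priority number

# Address the highest-RPN failure modes first.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
```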

  8. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network is a network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we distributed the processing tasks between nodes. In this method, each sensor node processes part of the interpolation algorithm to generate the interpolated image, with local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods were proposed for distributing the processing tasks while sharing image data locally: Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP). Comparison of the proposed methods with the Centralized Processing (CP) method shows that PIS-DP has the highest processing speed after FIS-DP, and CP has the lowest processing speed. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.

  9. EEG alpha synchronization is related to top-down processing in convergent and divergent thinking

    PubMed Central

    Benedek, Mathias; Bergner, Sabine; Könen, Tanja; Fink, Andreas; Neubauer, Aljoscha C.

    2011-01-01

    Synchronization of EEG alpha activity has been referred to as being indicative of cortical idling, but according to more recent evidence it has also been associated with active internal processing and creative thinking. The main objective of this study was to investigate to what extent EEG alpha synchronization is related to internal processing demands and to the specific cognitive processes involved in creative thinking. To this end, EEG was measured during a convergent and a divergent thinking task (i.e., a creativity-related task), each of which was performed once under low and once under high internal processing demands. High internal processing demands were established by masking the stimulus (after encoding) and thus preventing further bottom-up processing. Frontal alpha synchronization was observed during convergent and divergent thinking only under exclusive top-down control (high internal processing demands), but not when bottom-up processing was allowed (low internal processing demands). We conclude that frontal alpha synchronization is related to top-down control rather than to specific creativity-related cognitive processes. Frontal alpha synchronization, which has been observed in a variety of different creativity tasks, thus may not reflect a brain state that is specific to creative cognition but can probably be attributed to the high internal processing demands that are typically involved in creative thinking. PMID:21925520

  10. Kennedy Space Center Payload Processing

    NASA Technical Reports Server (NTRS)

    Lawson, Ronnie; Engler, Tom; Colloredo, Scott; Zide, Alan

    2011-01-01

    This slide presentation reviews the payload processing functions at Kennedy Space Center. It details some of the payloads processed at KSC, the typical processing tasks, the facilities available for processing payloads, and the capabilities and customer services that are available.

  11. Improving the Document Development Process: Integrating Relational Data and Statistical Process Control.

    ERIC Educational Resources Information Center

    Miller, John

    1994-01-01

    Presents an approach to document numbering, document titling, and process measurement which, when used with fundamental techniques of statistical process control, reveals meaningful process-element variation as well as nominal productivity models. (SR)
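    The statistical-process-control element mentioned above can be sketched with an individuals control chart: measurements outside three standard deviations of the mean signal special-cause variation in the process. The sample values below (e.g. revision cycles per document) are invented for illustration:

```python
import statistics

# Hypothetical per-document process measurements used to set
# 3-sigma control limits for an individuals chart.
samples = [4.1, 3.8, 4.4, 4.0, 3.9, 4.6, 4.2, 3.7, 4.3, 4.0]

mean = statistics.fmean(samples)
sigma = statistics.stdev(samples)   # sample standard deviation

ucl = mean + 3 * sigma              # upper control limit
lcl = mean - 3 * sigma              # lower control limit

# Points outside the limits indicate special-cause process variation.
out_of_control = [x for x in samples if not (lcl <= x <= ucl)]
```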

  12. USE OF INDICATOR ORGANISMS FOR DETERMINING PROCESS EFFECTIVENESS

    EPA Science Inventory

    Wastewaters, process effluents and treatment process residuals contain a variety of microorganisms. Many factors influence their densities as they move through collection systems and process equipment. Biological treatment systems rely on the catabolic processes of such microor...

  13. Food processing by high hydrostatic pressure.

    PubMed

    Yamamoto, Kazutaka

    2017-04-01

    The high hydrostatic pressure (HHP) process, as a nonthermal process, can be used to inactivate microbes while minimizing chemical reactions in food. For this purpose, an HHP level of 100 MPa (986.9 atm / 1019.7 kgf/cm²) or more is applied to food. Conventional thermal processes damage food components related to color, flavor, and nutrition via enhanced chemical reactions. The HHP process, however, minimizes such damage while inactivating microbes, enabling the processing of high-quality, safe foods. The first commercial HHP-processed foods were launched in 1990 as fruit products such as jams, and some other products have since been commercialized: retort rice products (enhanced water impregnation), cooked hams and sausages (shelf-life extension), soy sauce with minimized salt (short-time fermentation owing to enhanced enzymatic reactions), and beverages (shelf-life extension). The characteristics of HHP food processing are reviewed from the viewpoints of nonthermal processing, history, research and development, physical and biochemical changes, and processing equipment.

  14. [Near infrared spectroscopy based process trajectory technology and its application in monitoring and controlling of traditional Chinese medicine manufacturing process].

    PubMed

    Li, Wen-Long; Qu, Hai-Bin

    2016-10-01

    In this paper, the principle of NIRS (near infrared spectroscopy)-based process trajectory technology is introduced. The main steps of the technique include: (1) in-line collection of the process spectra of different techniques; (2) unfolding of the 3-D process spectra; (3) determination of the process trajectories and their normal limits; (4) monitoring of new batches with the established MSPC (multivariate statistical process control) models. Applications of the technology to chemical and biological medicines are reviewed briefly. Through a comprehensive introduction of our feasibility research on monitoring traditional Chinese medicine manufacturing processes using NIRS-based multivariate process trajectories, several important practical problems that need urgent solutions are identified, and the application prospects of NIRS-based process trajectory technology are discussed. Copyright© by the Chinese Pharmaceutical Association.
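    The MSPC monitoring step can be sketched with PCA and a Hotelling T² statistic computed on the unfolded process spectra. In the sketch below the data are random stand-ins for real in-line NIR spectra, and the number of retained components is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for unfolded process spectra: rows = normal training batches,
# columns = (time point x wavelength) features.
X = rng.normal(size=(30, 50))
mu = X.mean(axis=0)
Xc = X - mu

# PCA via SVD; keep the first k principal components (k chosen arbitrarily).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
var = (s[:k] ** 2) / (X.shape[0] - 1)  # variance of each PC score

def hotelling_t2(new_spectrum):
    """Hotelling's T^2 of a new batch in the k-dimensional PC subspace."""
    t = (new_spectrum - mu) @ Vt[:k].T
    return float(np.sum(t ** 2 / var))

t2 = hotelling_t2(X[0])  # an in-control training batch should score low
```

In practice the T² value of each new batch would be compared against a control limit derived from the training batches (e.g. an F-distribution quantile).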

  15. Recollection is a continuous process: implications for dual-process theories of recognition memory.

    PubMed

    Mickes, Laura; Wais, Peter E; Wixted, John T

    2009-04-01

    Dual-process theory, which holds that recognition decisions can be based on recollection or familiarity, has long seemed incompatible with signal detection theory, which holds that recognition decisions are based on a singular, continuous memory-strength variable. Formal dual-process models typically regard familiarity as a continuous process (i.e., familiarity comes in degrees), but they construe recollection as a categorical process (i.e., recollection either occurs or does not occur). A continuous process is characterized by a graded relationship between confidence and accuracy, whereas a categorical process is characterized by a binary relationship such that high confidence is associated with high accuracy but all lower degrees of confidence are associated with chance accuracy. Using a source-memory procedure, we found that the relationship between confidence and source-recollection accuracy was graded. Because recollection, like familiarity, is a continuous process, dual-process theory is more compatible with signal detection theory than previously thought.
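    The continuous-versus-categorical distinction above can be illustrated with a small signal-detection simulation: if source memory is driven by a continuous strength variable, accuracy should rise gradually with confidence rather than jumping from chance to high. A sketch under assumed distributions (the means, spread, and confidence-bin widths are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Continuous-strength account of source memory: items from source A or B
# produce strengths drawn from Gaussians centred at +1 and -1 (assumed).
truth = rng.random(n) < 0.5
strength = rng.normal(np.where(truth, 1.0, -1.0), 1.0)

# The sign of the strength gives the source decision; its magnitude is
# read out as confidence, binned into low / medium / high.
correct = (strength > 0) == truth
conf_bin = np.minimum(np.abs(strength).astype(int), 2)  # 0, 1, 2

acc = [float(correct[conf_bin == b].mean()) for b in range(3)]
# A continuous process predicts accuracy rising smoothly across bins.
```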

  16. A qualitative assessment of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
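    The process described above, an amplitude-modulated Gaussian product plus a slowly varying random mean value, can be simulated directly. In this sketch the moving-average windows that make the modulation and the mean value "slow" are arbitrary choices, not parameters from the report:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

def smooth(x, width):
    """Crude low-pass filter (moving average) to make a process slow."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

g1 = rng.normal(size=n)                  # fast Gaussian carrier
g2 = smooth(rng.normal(size=n), 200)     # slow Gaussian amplitude modulation
g3 = smooth(rng.normal(size=n), 1000)    # slowly varying random mean value

# Total process: amplitude-modulated product plus random mean drift.
# The product term is non-Gaussian (heavier-tailed than its factors).
x = g1 * g2 + g3
```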

  17. Metals Recovery from Artificial Ore in Case of Printed Circuit Boards, Using Plasmatron Plasma Reactor

    PubMed Central

    Szałatkiewicz, Jakub

    2016-01-01

    This paper presents an investigation of metals production from artificial ore consisting of printed circuit board (PCB) waste processed in a plasmatron plasma reactor. A test setup was designed and built that enabled research on plasma processing of PCB waste at a scale of more than 700 kg/day. The designed plasma process is presented and discussed. In tests, the process consumed 2 kWh/kg of processed waste. Analysis of the process products is presented, with elemental analyses of the metals and slag. The average recovery of metals in the presented experiments is 76%. Metals recovered include Ag, Au, Pd, Cu, Sn, Pb, and others. The chosen process parameters are presented: energy consumption, throughput, process temperatures, and air consumption. The presented technology allows processing of variable and hard-to-process printed circuit board waste that can reach up to 100% of the input mass. PMID:28773804

  18. Characterisation and Processing of Some Iron Ores of India

    NASA Astrophysics Data System (ADS)

    Krishna, S. J. G.; Patil, M. R.; Rudrappa, C.; Kumar, S. P.; Ravi, B. P.

    2013-10-01

    Lack of process characterization data for ores, based on granulometry, texture, mineralogy, physical and chemical properties, merits and limitations of the process, and market and local conditions, may mislead the mineral processing entrepreneur. Proper implementation of process characterization and geotechnical map data will result in optimized, sustainable utilization of the resource by processing. A few case studies of process characterization of some Indian iron ores are dealt with. The tentative ascending order of process refractoriness of iron ores is: massive hematite/magnetite < marine black iron oxide sands < laminated soft friable siliceous ore fines < massive banded magnetite quartzite < laminated soft friable clayey aluminous ore fines < massive banded hematite quartzite/jasper < massive clayey hydrated iron oxide ore < massive manganese-bearing iron ores < Ti-V bearing magmatic magnetite ore < ferruginous cherty quartzite. Based on diagnostic process characterization, the ores have been classified and generic processes have been adopted for some Indian iron ores.

  19. Measuring health care process quality with software quality measures.

    PubMed

    Yildiz, Ozkan; Demirörs, Onur

    2012-01-01

    Existing quality models focus on specific diseases, clinics, or clinical areas. Although they contain structure, process, or output type measures, there is no model that measures the quality of health care processes comprehensively. In addition, because overall process quality is not measured, hospitals cannot compare the quality of their processes internally or externally. To address these problems, a new model was developed from software quality measures. We adopted the ISO/IEC 9126 software quality standard for health care processes. Then, JCIAS (Joint Commission International Accreditation Standards for Hospitals) measurable elements were added to the model scope to unify functional requirements. Measurement results for the assessment (diagnosing) process are provided in this paper. After the application, it was concluded that the model determines weak and strong aspects of the processes, gives a more detailed picture of process quality, and provides quantifiable information that hospitals can use to compare their processes with those of multiple organizations.

  20. Thermal Stir Welding: A New Solid State Welding Process

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey

    2003-01-01

    Thermal stir welding is a new welding process developed at NASA's Marshall Space Flight Center in Huntsville, AL. Thermal stir welding is similar to friction stir welding in that it joins similar or dissimilar materials without melting the parent material. However, unlike friction stir welding, the heating, stirring, and forging elements of the process are all independent of each other and are separately controlled. Furthermore, the heating element of the process can be either a solid-state process (such as a thermal blanket, induction heating, etc.) or a fusion process (YAG laser, plasma torch, etc.). The separation of the heating, stirring, and forging elements of the process allows more degrees of freedom for greater process control. This paper introduces the mechanics of the thermal stir welding process. In addition, weld mechanical property data are presented for selected alloys, as well as metallurgical analysis.

  1. Thermal Stir Welding: A New Solid State Welding Process

    NASA Technical Reports Server (NTRS)

    Ding, R. Jeffrey; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    Thermal stir welding is a new welding process developed at NASA's Marshall Space Flight Center in Huntsville, AL. Thermal stir welding is similar to friction stir welding in that it joins similar or dissimilar materials without melting the parent material. However, unlike friction stir welding, the heating, stirring, and forging elements of the process are all independent of each other and are separately controlled. Furthermore, the heating element of the process can be either a solid-state process (such as a thermal blanket, induction heating, etc.) or a fusion process (YAG laser, plasma torch, etc.). The separation of the heating, stirring, and forging elements of the process allows more degrees of freedom for greater process control. This paper introduces the mechanics of the thermal stir welding process. In addition, weld mechanical property data are presented for selected alloys, as well as metallurgical analysis.

  2. Metals Recovery from Artificial Ore in Case of Printed Circuit Boards, Using Plasmatron Plasma Reactor.

    PubMed

    Szałatkiewicz, Jakub

    2016-08-10

    This paper presents an investigation of metals production from artificial ore consisting of printed circuit board (PCB) waste processed in a plasmatron plasma reactor. A test setup was designed and built that enabled research on plasma processing of PCB waste at a scale of more than 700 kg/day. The designed plasma process is presented and discussed. In tests, the process consumed 2 kWh/kg of processed waste. Analysis of the process products is presented, with elemental analyses of the metals and slag. The average recovery of metals in the presented experiments is 76%. Metals recovered include Ag, Au, Pd, Cu, Sn, Pb, and others. The chosen process parameters are presented: energy consumption, throughput, process temperatures, and air consumption. The presented technology allows processing of variable and hard-to-process printed circuit board waste that can reach up to 100% of the input mass.

  3. The origins of levels-of-processing effects in a conceptual test: evidence for automatic influences of memory from the process-dissociation procedure.

    PubMed

    Bergerbest, Dafna; Goshen-Gottstein, Yonatan

    2002-12-01

    In three experiments, we explored automatic influences of memory in a conceptual memory task, as affected by a levels-of-processing (LoP) manipulation. We also explored the origins of the LoP effect by examining whether the effect emerged only when participants in the shallow condition truncated the perceptual processing (the lexical-processing hypothesis) or even when the entire word was encoded in this condition (the conceptual-processing hypothesis). Using the process-dissociation procedure and an implicit association-generation task, we found that the deep encoding condition yielded higher estimates of automatic influences than the shallow condition. In support of the conceptual processing hypothesis, the LoP effect was found even when the shallow task did not lead to truncated processing of the lexical units. We suggest that encoding for meaning is a prerequisite for automatic processing on conceptual tests of memory.

  4. Exploring business process modelling paradigms and design-time to run-time transitions

    NASA Astrophysics Data System (ADS)

    Caron, Filip; Vanthienen, Jan

    2016-09-01

    The business process management literature describes a multitude of approaches (e.g. imperative, declarative or event-driven) that each result in a different mix of process flexibility, compliance, effectiveness and efficiency. Although the use of a single approach over the process lifecycle is often assumed, transitions between approaches at different phases in the process lifecycle may also be considered. This article explores several business process strategies by analysing the approaches at different phases in the process lifecycle as well as the various transitions.

  5. System Engineering Concept Demonstration, Process Model. Volume 3

    DTIC Science & Technology

    1992-12-01

    Process or Process Model: The System Engineering process must be the enactment of the aforementioned definitions. Therefore, a process is an enactment of a ... Prototype Tradeoff Scenario demonstrates six levels of abstraction in the Process Model. The Process Model symbology is explained within the "Help" icon ...

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eun, H.C.; Cho, Y.Z.; Choi, J.H.

    A regeneration process for LiCl-KCl eutectic waste salt generated by the pyrochemical processing of spent nuclear fuel has been studied. This regeneration process is composed of a chemical conversion process and a vacuum distillation process. Through the regeneration process, renewable salt can be recovered from the waste salt with high efficiency, and the rare earth nuclides in the waste salt can be separated in oxide or phosphate form. Thus, the regeneration process can contribute greatly to reducing the waste volume and creating durable final waste forms. (authors)

  7. An open system approach to process reengineering in a healthcare operational environment.

    PubMed

    Czuchry, A J; Yasin, M M; Norris, J

    2000-01-01

    The objective of this study is to examine the applicability of process reengineering in a healthcare operational environment. The intake process of a mental healthcare service delivery system is analyzed systematically to identify process-related problems. A methodology which utilizes an open system orientation coupled with process reengineering is utilized to overcome operational and patient related problems associated with the pre-reengineered intake process. The systematic redesign of the intake process resulted in performance improvements in terms of cost, quality, service and timing.

  8. Developing the JPL Engineering Processes

    NASA Technical Reports Server (NTRS)

    Linick, Dave; Briggs, Clark

    2004-01-01

    This paper briefly recounts the recent history of process reengineering at the NASA Jet Propulsion Laboratory, with a focus on the engineering processes. The JPL process structure is described and the process development activities of the past several years are outlined. The main focus of the paper is on the current process structure, the emphasis on the flight project life cycle, the governance approach that led to Flight Project Practices, and the remaining effort to capture process knowledge at the detail level of the work group.

  9. Water-saving liquid-gas conditioning system

    DOEpatents

    Martin, Christopher; Zhuang, Ye

    2014-01-14

    A method for treating a process gas with a liquid comprises contacting a process gas with a hygroscopic working fluid in order to remove a constituent from the process gas. A system for treating a process gas with a liquid comprises a hygroscopic working fluid comprising a component adapted to absorb or react with a constituent of a process gas, and a liquid-gas contactor for contacting the working fluid and the process gas, wherein the constituent is removed from the process gas within the liquid-gas contactor.

  10. Model for Simulating a Spiral Software-Development Process

    NASA Technical Reports Server (NTRS)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). 
Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
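    A toy version of such a discrete, iteration-based simulation can be sketched as follows. All rates here (productivity, defect injection, detection, rework cost) are illustrative stand-ins, not PATT's industry-average inputs:

```python
# Toy spiral-process simulation: each iteration writes part of the code,
# injects defects at an assumed rate, and detects/fixes a fraction of the
# latent defects, accumulating coding and rework effort.

def simulate_spiral(total_loc=20_000, iterations=4,
                    loc_per_hour=2.0, defects_per_kloc=10.0,
                    detect_rate=0.7, rework_hours_per_defect=1.5):
    loc_done = 0.0
    latent_defects = 0.0
    effort_hours = 0.0
    for _ in range(iterations):
        loc = total_loc / iterations
        loc_done += loc
        effort_hours += loc / loc_per_hour              # coding effort
        latent_defects += defects_per_kloc * loc / 1000.0
        found = detect_rate * latent_defects            # this iteration's testing
        latent_defects -= found
        effort_hours += found * rework_hours_per_defect  # rework effort
    return loc_done, effort_hours, latent_defects

loc, effort, residual = simulate_spiral()
```

Raising `iterations` while holding the other rates fixed shows the spiral trade-off the abstract describes: more rework effort overall, but fewer latent defects remaining at delivery.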

  11. Magnitude processing of symbolic and non-symbolic proportions: an fMRI study.

    PubMed

    Mock, Julia; Huber, Stefan; Bloechle, Johannes; Dietrich, Julia F; Bahnmueller, Julia; Rennig, Johannes; Klein, Elise; Moeller, Korbinian

    2018-05-10

    Recent research indicates that processing proportion magnitude is associated with activation in the intraparietal sulcus. Thus, brain areas associated with the processing of numbers (i.e., absolute magnitude) were activated during the processing of symbolic fractions as well as non-symbolic proportions. Here, we systematically investigated the cognitive processing of symbolic (e.g., fractions and decimals) and non-symbolic proportions (e.g., dot patterns and pie charts) in a two-stage procedure. First, we investigated relative magnitude-related activations of proportion processing. Second, we evaluated whether symbolic and non-symbolic proportions share common neural substrates. We conducted an fMRI study using magnitude comparison tasks with symbolic and non-symbolic proportions, respectively. As an indicator of magnitude-related processing of proportions, the distance effect was evaluated. A conjunction analysis indicated joint activation of specific occipito-parietal areas, including the right intraparietal sulcus (IPS), during proportion magnitude processing. More specifically, the results indicate that the IPS, which is commonly associated with absolute magnitude processing, is involved in processing relative magnitude information as well, irrespective of symbolic or non-symbolic presentation format. However, we also found distinct activation patterns for the magnitude processing of the different presentation formats. Our findings suggest that processing the separate presentation formats is associated not only with magnitude manipulations in the IPS, but also with increasing demands on executive functions and strategy use associated with frontal brain regions, as well as visual attention and encoding in occipital regions. Thus, the magnitude processing of proportions may not exclusively reflect the processing of number magnitude information but rather also domain-general processes.

  12. [Alcohol-purification technology and its particle sedimentation process in manufactory of Fufang Kushen injection].

    PubMed

    Liu, Xiaoqian; Tong, Yan; Wang, Jinyu; Wang, Ruizhen; Zhang, Yanxia; Wang, Zhimin

    2011-11-01

    Fufang Kushen injection was selected as the model drug in order to optimize its alcohol-purification process, characterize its particle sedimentation behavior, and investigate the feasibility of applying process analytical technology (PAT) to traditional Chinese medicine (TCM) manufacturing. Total alkaloids (calculated as matrine, oxymatrine, sophoridine and oxysophoridine) and macrozamin were selected as quality evaluation markers for optimizing the alcohol purification of Fufang Kushen injection. Parameters of the particles formed during alcohol purification, such as their number, density and sedimentation velocity, were also determined in order to define the sedimentation time and better understand the process. The optimized procedure adds alcohol to the concentrated extract (drug material) in two stages, allowing the solution to settle each time: sedimentation at 60% alcohol for 36 hours, filtration, and then sedimentation at 80%-90% alcohol for 6 hours. The content of total alkaloids decreased only slightly during sedimentation. The average settling times of particles with diameters of 10 and 25 μm were 157.7 and 25.2 h in the first alcohol-purification step, and 84.2 and 13.5 h in the second, respectively. Compared with the initial process, the optimized alcohol purification retains the marker constituents better and is faster and more economical. The manufacturing quality of TCM injections can be controlled through the process, and a PAT scheme must be designed on the basis of a thorough understanding of the TCM production process.
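
    The reported settling times scale with the inverse square of particle diameter (157.7/25.2 ≈ 84.2/13.5 ≈ 6.25 = (25/10)²), as Stokes' law predicts. A minimal sketch of that calculation, with assumed (hypothetical) densities, viscosity, and settling height rather than the study's measured values:

```python
def stokes_settling(d_m, height_m, rho_p=1500.0, rho_f=900.0, mu=2.0e-3, g=9.81):
    """Terminal (Stokes) settling velocity and settling time for a small sphere.

    d_m      : particle diameter in metres
    height_m : settling height in metres
    rho_p    : particle density, kg/m^3 (assumed value)
    rho_f    : fluid density, kg/m^3 (assumed for an ethanol-water extract)
    mu       : dynamic viscosity, Pa*s (assumed value)
    """
    v = (rho_p - rho_f) * g * d_m ** 2 / (18.0 * mu)  # m/s
    return v, height_m / v                            # velocity, time in seconds

v10, t10 = stokes_settling(10e-6, 0.5)
v25, t25 = stokes_settling(25e-6, 0.5)
print(f"10 um: {t10 / 3600:.1f} h   25 um: {t25 / 3600:.1f} h")
```

    Because settling time goes as 1/d², doubling the particle diameter cuts the required sedimentation time by a factor of four, which is why the coarser particles formed at higher alcohol concentrations settle so much faster.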

  13. Application of volume-retarded osmosis and low-pressure membrane hybrid process for water reclamation.

    PubMed

    Im, Sung-Ju; Choi, Jungwon; Lee, Jung-Gil; Jeong, Sanghyun; Jang, Am

    2018-03-01

    A new concept of a volume-retarded osmosis and low-pressure membrane (VRO-LPM) hybrid process was developed and evaluated for the first time in this study. Commercially available forward osmosis (FO) and ultrafiltration (UF) membranes were employed in the VRO-LPM hybrid process to overcome the energy limitations of draw solution (DS) regeneration and permeate production in the FO process. To evaluate its feasibility as a water reclamation process, and to optimize the operational conditions, the cross-flow FO and dead-end mode UF processes were evaluated individually. For the FO process, a DS concentration of 0.15 g mL⁻¹ of polystyrene sulfonate (PSS) was determined to be optimal, giving a high flux with a low reverse salt flux. A UF membrane with a molecular weight cut-off of 1 kDa was chosen for its high PSS rejection in the LPM process. As a single process, UF (LPM) exhibited a higher flux than FO, but this could be balanced by adjusting the effective membrane areas of the FO and UF membranes in the VRO-LPM system. The VRO-LPM hybrid process required only a circulation pump for the FO process. This decreased the specific energy consumption of the VRO-LPM process for potable water production to a level similar to that of the single FO process. Therefore, the newly developed VRO-LPM hybrid process, with an appropriate DS selection, can be used as an energy-efficient water production method and can outperform conventional water reclamation processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Quality control process improvement of flexible printed circuit board by FMEA

    NASA Astrophysics Data System (ADS)

    Krasaephol, Siwaporn; Chutima, Parames

    2018-02-01

    This research focuses on quality control process improvement for a Flexible Printed Circuit Board (FPCB), model 7-Flex, using the Failure Mode and Effect Analysis (FMEA) method to decrease the proportion of defective finished goods found at the final inspection process. Because defective units were detected only at final inspection, scrap may escape to customers. The problem stems from a quality control process that is not efficient enough to filter out defective products in-process, as there is no In-Process Quality Control (IPQC) or sampling inspection within the line. Therefore, the quality control process was improved by setting inspection gates and IPQCs at critical processes in order to filter out defective products. The critical processes were identified using the FMEA method. IPQC detects defective products and reduces the chance of defective finished goods escaping to customers. Catching defects earlier also decreases scrap cost, because finished goods incur a higher scrap cost than work in process. Moreover, defective products found in-process reveal abnormal processes, so engineers and operators can solve the problems in a timely manner. The improved quality control was implemented on the 7-Flex production lines from July 2017 to September 2017. The results show decreases in the average proportion of defective finished goods and in the average Customer Manufacturers Lot Reject Rate (%LRR of CMs) of 4.5% and 4.1%, respectively. Furthermore, the cost saving from this quality control process equals 100K Baht.
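
    The FMEA ranking step described above is conventionally done with the Risk Priority Number, RPN = severity × occurrence × detection, with inspection gates placed at the top-ranked processes. The failure modes and ratings below are hypothetical illustrations, not data from the 7-Flex study:

```python
# Hypothetical FPCB failure modes with 1-10 ratings for
# Severity (S), Occurrence (O), and Detection (D).
failure_modes = [
    {"process": "etching",    "mode": "over-etch",    "S": 7, "O": 5, "D": 6},
    {"process": "lamination", "mode": "delamination", "S": 8, "O": 3, "D": 4},
    {"process": "drilling",   "mode": "burr",         "S": 5, "O": 6, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]  # Risk Priority Number

# Rank highest RPN first; IPQC gates go at the top-ranked processes.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in ranked:
    print(f'{fm["process"]:<11} {fm["mode"]:<13} RPN={fm["RPN"]}')
```

    With these illustrative ratings, etching (RPN 210) would receive an inspection gate before lamination (96) or drilling (90).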

  15. Formulating poultry processing sanitizers from alkaline salts of fatty acids

    USDA-ARS?s Scientific Manuscript database

    Though some poultry processing operations remove microorganisms from carcasses; other processing operations cause cross-contamination that spreads microorganisms between carcasses, processing water, and processing equipment. One method used by commercial poultry processors to reduce microbial contam...

  16. Fabrication Process for Cantilever Beam Micromechanical Switches

    DTIC Science & Technology

    1993-08-01

    (Excerpted table-of-contents fragments: beam design; chemistry and materials used in the cantilever beam process; photomask levels and composite.) The beam fabrication process incorporates four different photomasking levels with 62 processing steps.

  17. Reports of planetary geology program, 1983

    NASA Technical Reports Server (NTRS)

    Holt, H. E. (Compiler)

    1984-01-01

    Several areas of the Planetary Geology Program were addressed including outer solar system satellites, asteroids, comets, Venus, cratering processes and landform development, volcanic processes, aeolian processes, fluvial processes, periglacial and permafrost processes, geomorphology, remote sensing, tectonics and stratigraphy, and mapping.

  18. Cognitive Processes in Discourse Comprehension: Passive Processes, Reader-Initiated Processes, and Evolving Mental Representations

    ERIC Educational Resources Information Center

    van den Broek, Paul; Helder, Anne

    2017-01-01

    As readers move through a text, they engage in various types of processes that, if all goes well, result in a mental representation that captures their interpretation of the text. With each new text segment the reader engages in passive and, at times, reader-initiated processes. These processes are strongly influenced by the readers'…

  19. The Use of Knowledge Based Decision Support Systems in Reengineering Selected Processes in the U. S. Marine Corps

    DTIC Science & Technology

    2001-09-01

    (Excerpted fragments.) As businesses increasingly adopt technology in hopes of achieving a measurable benefit in terms of process efficiency and effectiveness, business process reengineering (BPR) is becoming increasingly important. The report also considers how the military might benefit from process reengineering efforts.

  20. 30 CFR 206.181 - How do I establish processing costs for dual accounting purposes when I do not process the gas?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accounting purposes when I do not process the gas? 206.181 Section 206.181 Mineral Resources MINERALS... Processing Allowances § 206.181 How do I establish processing costs for dual accounting purposes when I do not process the gas? Where accounting for comparison (dual accounting) is required for gas production...

  1. Conceptual models of information processing

    NASA Technical Reports Server (NTRS)

    Stewart, L. J.

    1983-01-01

    The conceptual information processing issues are examined. Human information processing is defined as an active cognitive process that is analogous to a system. It is the flow and transformation of information within a human. The human is viewed as an active information seeker who is constantly receiving, processing, and acting upon the surrounding environmental stimuli. Human information processing models are conceptual representations of cognitive behaviors. Models of information processing are useful in representing the different theoretical positions and in attempting to define the limits and capabilities of human memory. It is concluded that an understanding of conceptual human information processing models and their applications to systems design leads to a better human factors approach.

  2. Industrial application of semantic process mining

    NASA Astrophysics Data System (ADS)

    Espen Ingvaldsen, Jon; Atle Gulla, Jon

    2012-05-01

    Process mining relates to the extraction of non-trivial and useful information from information system event logs. It is a new research discipline that has evolved significantly since the early work on idealistic process logs. Over the last years, process mining prototypes have incorporated elements from semantics and data mining and targeted visualisation techniques that are more user-friendly to business experts and process owners. In this article, we present a framework for evaluating different aspects of enterprise process flows and address practical challenges of state-of-the-art industrial process mining. We also explore the inherent strengths of the technology for more efficient process optimisation.

  3. Reliability and performance of a system-on-a-chip by predictive wear-out based activation of functional components

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-01

    A processor-implemented method for determining aging of a processing unit in a processor, the method comprising: calculating an effective aging profile for the processing unit, wherein the effective aging profile quantifies the effects of aging on the processing unit; combining the effective aging profile with process variation data, actual workload data, and operating conditions data for the processing unit; and determining aging through an aging sensor of the processing unit using the effective aging profile, the process variation data, the actual workload data, architectural characteristics and redundancy data, and the operating conditions data for the processing unit.

  4. Fuzzy control of burnout of multilayer ceramic actuators

    NASA Astrophysics Data System (ADS)

    Ling, Alice V.; Voss, David; Christodoulou, Leo

    1996-08-01

    To improve the yield and repeatability of the burnout process of multilayer ceramic actuators (MCAs), an intelligent processing of materials (IPM-based) control system has been developed for the manufacture of MCAs. IPM involves the active (ultimately adaptive) control of a material process using empirical or analytical models and in situ sensing of critical process states (part features and process parameters) to modify the processing conditions in real time to achieve predefined product goals. Thus, the three enabling technologies for the IPM burnout control system are process modeling, in situ sensing and intelligent control. This paper presents the design of an IPM-based control strategy for the burnout process of MCAs.

  5. Direct access inter-process shared memory

    DOEpatents

    Brightwell, Ronald B; Pedretti, Kevin; Hudson, Trammell B

    2013-10-22

    A technique for directly sharing physical memory between processes executing on processor cores is described. The technique includes loading a plurality of processes into the physical memory for execution on a corresponding plurality of processor cores sharing the physical memory. An address space is mapped to each of the processes by populating a first entry in a top level virtual address table for each of the processes. The address space of each of the processes is cross-mapped into each of the processes by populating one or more subsequent entries of the top level virtual address table with the first entry in the top level virtual address table from other processes.
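
    The patented page-table cross-mapping itself cannot be reproduced from user space, but its end result (two processes reading and writing the same physical pages without copying) can be illustrated at the OS API level. A minimal POSIX sketch, using an anonymous shared mapping and fork rather than the patented technique:

```python
import mmap
import os

# Anonymous MAP_SHARED mapping: parent and child reference the same
# physical pages, so writes in one process are visible in the other.
buf = mmap.mmap(-1, 16)

pid = os.fork()
if pid == 0:            # child process: write directly into the shared pages
    buf.seek(0)
    buf.write(b"hello")
    os._exit(0)

os.waitpid(pid, 0)      # parent: wait for the child, then read without a copy
buf.seek(0)
data = buf.read(5)
print(data)             # b'hello'
```

    The patent's contribution is doing this mapping for every process pair by populating top-level page-table entries directly, avoiding per-transfer system calls; the sketch above only shows the shared-physical-memory effect, on POSIX systems that provide `os.fork`.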

  6. Biotechnology in Food Production and Processing

    NASA Astrophysics Data System (ADS)

    Knorr, Dietrich; Sinskey, Anthony J.

    1985-09-01

    The food processing industry is the oldest and largest industry using biotechnological processes. Further development of food products and processes based on biotechnology depends upon the improvement of existing processes, such as fermentation, immobilized biocatalyst technology, and production of additives and processing aids, as well as the development of new opportunities for food biotechnology. Improvements are needed in the characterization, safety, and quality control of food materials, in processing methods, in waste conversion and utilization processes, and in currently used food microorganism and tissue culture systems. Also needed are fundamental studies of the structure-function relationship of food materials and of the cell physiology and biochemistry of raw materials.

  7. What is a good public participation process? Five perspectives from the public.

    PubMed

    Webler, T; Tuler, S; Krueger, R

    2001-03-01

    It is now widely accepted that members of the public should be involved in environmental decision-making. This has inspired many to search for principles that characterize good public participation processes. In this paper we report on a study that identifies discourses about what defines a good process. Our case study was a forest planning process in northern New England and New York. We employed Q methodology to learn how participants characterize a good process differently, by selecting, defining, and privileging different principles. Five discourses, or perspectives, about good process emerged from our study. One perspective emphasizes that a good process acquires and maintains popular legitimacy. A second sees a good process as one that facilitates an ideological discussion. A third focuses on the fairness of the process. A fourth perspective conceptualizes participatory processes as a power struggle, in this instance a power play between local land-owning interests and outsiders. A fifth perspective highlights the need for leadership and compromise. Dramatic differences among these views suggest an important challenge for those responsible for designing and carrying out public participation processes. Conflicts may emerge about process designs because people disagree about what is good in specific contexts.

  8. Alternating event processes during lifetimes: population dynamics and statistical inference.

    PubMed

    Shinohara, Russell T; Sun, Yifei; Wang, Mei-Cheng

    2018-01-01

    In the literature studying recurrent event data, a large amount of work has focused on univariate recurrent event processes in which the occurrence of each event is treated as a single point in time. In many applications, however, univariate recurrent events are insufficient to characterize the process because patients experience nontrivial durations associated with each event. This results in an alternating event process in which the disease status of a patient alternates between exacerbations and remissions. In this paper, we consider the dynamics of a chronic disease and its associated exacerbation-remission process over two time scales: calendar time and time since onset. In particular, over calendar time, we explore population dynamics and the relationship between incidence, prevalence, and duration for such alternating event processes. We provide nonparametric estimation techniques for characteristic quantities of the process. In some settings, exacerbation processes are observed from an onset time until death; to account for the relationship between the survival and alternating event processes, nonparametric approaches are developed for estimating the exacerbation process over the lifetime. By understanding the population dynamics and the within-process structure, the paper provides a new and general way to study alternating event processes.
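
    For an alternating renewal process of this kind, the long-run prevalence (fraction of time spent in exacerbation) converges to the ratio of the mean exacerbation duration to the mean cycle length. A small simulation sketch, using exponential durations as an illustrative assumption rather than the paper's nonparametric setting:

```python
import random

def simulate_alternating(n_cycles, mean_exac=2.0, mean_rem=10.0, seed=0):
    """Simulate one patient's alternating exacerbation/remission process.

    Durations are drawn as exponentials with the given means (an assumed
    parametric choice for illustration). Returns total time in each state.
    """
    rng = random.Random(seed)
    t_exac = sum(rng.expovariate(1.0 / mean_exac) for _ in range(n_cycles))
    t_rem = sum(rng.expovariate(1.0 / mean_rem) for _ in range(n_cycles))
    return t_exac, t_rem

t_e, t_r = simulate_alternating(100_000)
prevalence = t_e / (t_e + t_r)
# Renewal-reward: long-run prevalence -> mean_exac / (mean_exac + mean_rem)
print(f"simulated {prevalence:.3f}  theoretical {2.0 / 12.0:.3f}")
```

    This prevalence = incidence × duration style of relationship is exactly the population-level structure the paper estimates nonparametrically over calendar time.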

  9. Process mining in oncology using the MIMIC-III dataset

    NASA Astrophysics Data System (ADS)

    Prima Kurniati, Angelina; Hall, Geoff; Hogg, David; Johnson, Owen

    2018-03-01

    Process mining is a data analytics approach to discovering and analysing process models based on the real activities captured in information systems. There is a growing body of literature on process mining in healthcare, including oncology, the study of cancer. In earlier work we found 37 peer-reviewed papers describing process mining research in oncology, with a regular complaint being the limited availability and accessibility of datasets with suitable information for process mining. Publicly available datasets are one option, and this paper describes the potential of MIMIC-III for process mining in oncology. MIMIC-III is a large open-access dataset of de-identified patient records. There are 134 publications listed as using the MIMIC dataset, but none of them has used process mining. The MIMIC-III dataset has 16 event tables that are potentially useful for process mining, and this paper demonstrates the opportunities to use MIMIC-III for process mining in oncology. Our research applied the L* lifecycle method to provide a worked example showing how process mining can be used to analyse cancer pathways. The results and data quality limitations are discussed along with opportunities for further work and reflection on the value of MIMIC-III for reproducible process mining research.
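
    A core step in such an analysis is deriving a directly-follows graph from an event log of (case, timestamp, activity) records, counting how often one activity immediately follows another within a case. A toy sketch in plain Python; the activities are hypothetical pathway steps, not actual MIMIC-III rows:

```python
from collections import Counter

# Toy event log in a MIMIC-like shape: (case_id, timestamp, activity).
events = [
    (1, 1, "admission"), (1, 2, "chemotherapy"), (1, 3, "discharge"),
    (2, 1, "admission"), (2, 2, "surgery"), (2, 3, "chemotherapy"),
    (2, 4, "discharge"),
    (3, 1, "admission"), (3, 2, "chemotherapy"), (3, 3, "discharge"),
]

# Group events into per-case traces, ordered by timestamp.
by_case = {}
for case, ts, act in sorted(events):
    by_case.setdefault(case, []).append(act)

# Directly-follows relation: count adjacent activity pairs within each trace.
dfg = Counter()
for trace in by_case.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), n in dfg.most_common():
    print(f"{a} -> {b}: {n}")
```

    Process mining tools build discovered process models on top of exactly this kind of relation; the counts expose both the dominant pathway and rarer variants such as the surgery branch.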

  10. Research on the technique of large-aperture off-axis parabolic surface processing using tri-station machine and its applicability.

    PubMed

    Zhang, Xin; Luo, Xiao; Hu, Haixiang; Zhang, Xuejun

    2015-09-01

    In order to process large-aperture aspherical mirrors, we designed and constructed a tri-station machining center whose three stations together provide vectored feed motion across up to 10 axes. Based on this machining center, an aspherical mirror-processing model is proposed in which each station implements traversal processing of large-aperture aspherical mirrors using only two axes; because the stations are switchable, cost is lowered and processing efficiency enhanced. The applicability of the tri-station machine is also analyzed, and a simple, efficient zero-calibration method for processing is proposed. To validate the processing model, we processed an off-axis parabolic SiC mirror with an aperture diameter of 1450 mm on our machining center. The experimental results indicate that, with a one-step iterative process, the peak-to-valley (PV) and root-mean-square (RMS) errors of the mirror converged from 3.441 and 0.5203 μm to 2.637 and 0.2962 μm, respectively, a 43% reduction in RMS. The validity and high accuracy of the model are thereby demonstrated.
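
    The PV and RMS figures of merit quoted above are simple functionals of the measured surface-error map. A minimal sketch on a toy (assumed) error map, not the measured data for the 1450 mm mirror:

```python
import math

def pv_rms(errors_um):
    """Peak-to-valley and RMS of a flattened surface-error map (micrometres).

    PV is the full spread of the errors; RMS is taken about the mean,
    i.e. the RMS of the residual after piston removal.
    """
    pv = max(errors_um) - min(errors_um)
    mean = sum(errors_um) / len(errors_um)
    rms = math.sqrt(sum((e - mean) ** 2 for e in errors_um) / len(errors_um))
    return pv, rms

# Toy error map with assumed values (a real map would come from
# interferometric measurement over the full aperture).
errors = [0.0, 0.4, -0.3, 0.8, -0.2, 0.1]
pv, rms = pv_rms(errors)
print(f"PV = {pv:.3f} um, RMS = {rms:.3f} um")
```

    The 43% figure in the abstract is just this RMS ratio between iterations: 0.2962/0.5203 ≈ 0.57, i.e. a 43% reduction.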

  11. Patterning of Indium Tin Oxide Films

    NASA Technical Reports Server (NTRS)

    Immer, Christopher

    2008-01-01

    A relatively rapid, economical process has been devised for patterning a thin film of indium tin oxide (ITO) that has been deposited on a polyester film. ITO is a transparent, electrically conductive substance made from a mixture of indium oxide and tin oxide that is commonly used in touch panels, liquid-crystal and plasma display devices, gas sensors, and solar photovoltaic panels. In a typical application, the ITO film must be patterned to form electrodes, current collectors, and the like. Heretofore it has been common practice to pattern an ITO film by means of either a laser ablation process or a photolithography/etching process. The laser ablation process requires expensive equipment to precisely position and focus a laser. The photolithography/etching process is time-consuming. The present process is a variant of the direct toner process, an inexpensive but often highly effective process for patterning conductors for printed circuits. Relative to a conventional photolithography/etching process, this process is simpler, takes less time, and is less expensive. It involves equipment that costs less than $500 (at 2005 prices) and enables patterning of an ITO film in less than about half an hour.

  12. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies play a significant role in investigating process variation, which is important in achieving product quality characteristics. Capability indices measure the inherent variability of a process and thus help to improve process performance. The main objective of this paper is to assess whether the output of a soft-drinks processing unit, one of the premier brands marketed in India, is produced within specification. A few selected critical parameters in soft-drinks processing were considered for this study: gas volume concentration, Brix concentration, and crown (closure) torque. Relevant statistical parameters were assessed from a process capability perspective: short-term and long-term capability indices. The assessment used real-time data from a soft-drinks bottling company located in the state of Chhattisgarh, India. The analysis identified reasons for variation in the process, which were validated using ANOVA; a Taguchi loss function was also fitted to estimate the predicted waste in monetary terms, which the organization can use to improve its process parameters. This research work substantially benefited the organization in understanding the variation of the selected critical parameters, with the aim of achieving zero rejection.
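
    The short-term capability assessment referred to above is conventionally expressed through the indices Cp and Cpk, which compare the specification width to the process spread. A minimal sketch on hypothetical Brix readings with assumed specification limits, not the plant's data:

```python
import statistics

def capability(samples, lsl, usl):
    """Short-term process capability indices Cp and Cpk.

    Cp compares the spec width to 6 sigma; Cpk also penalizes
    off-center processes via the nearer spec limit.
    """
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical Brix readings against assumed spec limits of 10.0-11.0.
brix = [10.42, 10.51, 10.48, 10.55, 10.46, 10.50, 10.44, 10.53]
cp, cpk = capability(brix, lsl=10.0, usl=11.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

    Cpk is always at most Cp; a large gap between the two signals a capable but off-center process, which is a different corrective action than reducing variability.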

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dafler, J.R.; Sinnott, J.; Novil, M.

    The first phase of a study to identify candidate processes and products suitable for future exploitation using high-temperature solar energy is presented. This phase has been principally analytical, consisting of techno-economic studies, thermodynamic assessments of chemical reactions and processes, and the determination of market potentials for major chemical commodities that use significant amounts of fossil resources today. The objective was to identify energy-intensive processes that would be suitable for the production of chemicals and fuels using solar energy process heat. Of particular importance was the comparison of relative costs and energy requirements for the selected solar product versus costs for the product derived from conventional processing. The assessment methodology used a systems analytical approach to identify processes and products having the greatest potential for solar energy-thermal processing. This approach was used to establish the basis for work to be carried out in subsequent phases of development. It has been the intent of the program to divide the analysis and process identification into the following three distinct areas: (1) process selection, (2) process evaluation, and (3) ranking of processes. Four conventional processes were selected for assessment, namely, methanol synthesis, styrene monomer production, vinyl chloride monomer production, and terephthalic acid production.

  14. An Application of X-Ray Fluorescence as Process Analytical Technology (PAT) to Monitor Particle Coating Processes.

    PubMed

    Nakano, Yoshio; Katakuse, Yoshimitsu; Azechi, Yasutaka

    2018-06-01

    We applied X-ray fluorescence (XRF) analysis as a Process Analytical Technology (PAT) to evaluate a small-particle coating process. XRF was used to monitor the coating level in the small-particle coating process in an at-line manner. A small-particle coating process usually consists of multiple coating steps. This study used simple coated particles prepared by a first coating of a model compound (DL-methionine) and a second coating of talc on spherical microcrystalline cellulose cores; particles with this two-layer coating are sufficient to represent the small-particle coating process. The results showed that the XRF signal played different roles for the two layers: signals from the first coating (layering) and the second coating (mask coating) tracked the extent of coating through different mechanisms. Furthermore, coating of particles of different sizes was also investigated to evaluate the size effect in these coating processes. From these results, it was concluded that XRF can be used as a PAT tool for monitoring particle coating processes and can become a powerful tool in pharmaceutical manufacturing.

  15. Single-Run Single-Mask Inductively-Coupled-Plasma Reactive-Ion-Etching Process for Fabricating Suspended High-Aspect-Ratio Microstructures

    NASA Astrophysics Data System (ADS)

    Yang, Yao-Joe; Kuo, Wen-Cheng; Fan, Kuang-Chao

    2006-01-01

    In this work, we present a single-run single-mask (SRM) process for fabricating suspended high-aspect-ratio structures on standard silicon wafers using an inductively coupled plasma-reactive ion etching (ICP-RIE) etcher. This process eliminates extra fabrication steps which are required for structure release after trench etching. Released microstructures with 120 μm thickness are obtained by this process. The corresponding maximum aspect ratio of the trench is 28. The SRM process is an extended version of the standard process proposed by BOSCH GmbH (BOSCH process). The first step of the SRM process is a standard BOSCH process for trench etching, then a polymer layer is deposited on trench sidewalls as a protective layer for the subsequent structure-releasing step. The structure is released by dry isotropic etching after the polymer layer on the trench floor is removed. All the steps can be integrated into a single-run ICP process. Also, only one mask is required. Therefore, the process complexity and fabrication cost can be effectively reduced. Discussions on each SRM step and considerations for avoiding undesired etching of the silicon structures during the release process are also presented.

  16. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  17. Discrete State Change Model of Manufacturing Quality to Aid Assembly Process Design

    NASA Astrophysics Data System (ADS)

    Koga, Tsuyoshi; Aoyama, Kazuhiro

    This paper proposes a representation model of the quality state change in an assembly process that can be used in a computer-aided process design system. In order to formalize the state change of the manufacturing quality in the assembly process, the functions, operations, and quality changes in the assembly process are represented as a network model that can simulate discrete events. This paper also develops a design method for the assembly process. The design method calculates the space of quality state change and outputs a better assembly process (better operations and better sequences) that can be used to obtain the intended quality state of the final product. A computational redesigning algorithm of the assembly process that considers the manufacturing quality is developed. The proposed method can be used to design an improved manufacturing process by simulating the quality state change. A prototype system for planning an assembly process is implemented and applied to the design of an auto-breaker assembly process. The result of the design example indicates that the proposed assembly process planning method outputs a better manufacturing scenario based on the simulation of the quality state change.

  18. Effect of simulated mechanical recycling processes on the structure and properties of poly(lactic acid).

    PubMed

    Beltrán, F R; Lorenzo, V; Acosta, J; de la Orden, M U; Martínez Urreaga, J

    2018-06-15

    The aim of this work is to study the effects of different simulated mechanical recycling processes on the structure and properties of PLA. A commercial grade of PLA was melt compounded and compression molded, then subjected to two different recycling processes. The first recycling process consisted of an accelerated ageing and a second melt processing step, while the other recycling process included an accelerated ageing, a demanding washing process and a second melt processing step. The intrinsic viscosity measurements indicate that both recycling processes produce a degradation in PLA, which is more pronounced in the sample subjected to the washing process. DSC results suggest an increase in the mobility of the polymer chains in the recycled materials; however, the degree of crystallinity of PLA seems unchanged. The optical, mechanical and gas barrier properties of PLA do not seem to be largely affected by the degradation suffered during the different recycling processes. These results suggest that, despite the degradation of PLA, the impact of the different simulated mechanical recycling processes on the final properties is limited. Thus, the potential use of recycled PLA in packaging applications is not jeopardized. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Consumption of ultra-processed foods predicts diet quality in Canada.

    PubMed

    Moubarac, Jean-Claude; Batal, M; Louzada, M L; Martinez Steele, E; Monteiro, C A

    2017-01-01

    This study describes food consumption patterns in Canada according to the types of food processing using the Nova classification and investigates the association between consumption of ultra-processed foods and the nutrient profile of the diet. Dietary intakes of 33,694 individuals aged 2 years and above from the 2004 Canadian Community Health Survey were analyzed. Foods and drinks were classified using Nova into unprocessed or minimally processed foods, processed culinary ingredients, processed foods and ultra-processed foods. Average consumption (total daily energy intake) and relative consumption (% of total energy intake) provided by each of the food groups were calculated. Consumption of ultra-processed foods according to sex, age, education, residential location and relative family revenue was assessed. The mean nutrient content of ultra-processed foods and non-ultra-processed foods was compared, and the average nutrient content of the overall diet across quintiles of dietary share of ultra-processed foods was measured. In 2004, 48% of calories consumed by Canadians came from ultra-processed foods. Consumption of such foods was high amongst all socioeconomic groups, and particularly in children and adolescents. As a group, ultra-processed foods were grossly nutritionally inferior to non-ultra-processed foods. After adjusting for covariates, a significant and positive relationship was found between the dietary share of ultra-processed foods and the content in carbohydrates, free sugars, total and saturated fats and energy density, while an inverse relationship was observed with the dietary content in protein, fiber, vitamins A, C, D, B6 and B12, niacin, thiamine, riboflavin, as well as zinc, iron, magnesium, calcium, phosphorus and potassium. Lowering the dietary share of ultra-processed foods and raising consumption of hand-made meals from unprocessed or minimally processed foods would substantially improve the diet quality of Canadians.
Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Development of Statistical Process Control Methodology for an Environmentally Compliant Surface Cleaning Process in a Bonding Laboratory

    NASA Technical Reports Server (NTRS)

    Hutchens, Dale E.; Doan, Patrick A.; Boothe, Richard E.

    1997-01-01

Bonding labs at both MSFC and the northern Utah production plant prepare bond test specimens which simulate or witness the production of NASA's Reusable Solid Rocket Motor (RSRM). The current process for preparing the bonding surfaces employs 1,1,1-trichloroethane vapor degreasing, which simulates the current RSRM process. Government regulations (e.g., the 1990 Amendments to the Clean Air Act) have mandated a production phase-out of a number of ozone depleting compounds (ODC), including 1,1,1-trichloroethane. In order to comply with these regulations, the RSRM Program is qualifying a spray-in-air (SIA) precision cleaning process using Brulin 1990, an aqueous blend of surfactants. Accordingly, surface preparation prior to bonding process simulation test specimens must reflect the new production cleaning process. The Bonding Lab Statistical Process Control (SPC) program monitors the progress of the lab and its capabilities, as well as certifies the bonding technicians, by periodically preparing D6AC steel tensile adhesion panels with EA-913NA epoxy adhesive using a standardized process. SPC methods are then used to ensure the process is statistically in control, thus producing reliable data for bonding studies, and to identify any problems which might develop. Since the specimen cleaning process is being changed, new SPC limits must be established. This report summarizes side-by-side testing of D6AC steel tensile adhesion witness panels and tapered double cantilevered beams (TDCBs) using both the current baseline vapor degreasing process and a lab-scale spray-in-air process. A Proceco 26-inch Typhoon dishwasher cleaned both tensile adhesion witness panels and TDCBs in a process which simulates the new production process. The tests were performed six times during 1995; subsequent statistical analysis of the data established new upper control limits (UCL) and lower control limits (LCL).
The data also demonstrated that the new process was equivalent to the vapor degreasing process.
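The control-limit update described above can be sketched as a standard individuals/moving-range calculation. This is a minimal illustration, not the report's actual procedure: the tensile-strength values are hypothetical, and d2 = 1.128 is the usual SPC constant for moving ranges of span two.

```python
# Illustrative SPC limit calculation for tensile adhesion data
# (strength values are hypothetical, in psi).
strengths = [912.0, 905.5, 921.3, 899.8, 915.2, 908.7]

n = len(strengths)
mean = sum(strengths) / n

# Moving ranges between consecutive panels estimate short-term variation.
moving_ranges = [abs(b - a) for a, b in zip(strengths, strengths[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# d2 = 1.128 converts the average moving range (span 2) to a sigma estimate.
sigma_est = mr_bar / 1.128
ucl = mean + 3 * sigma_est   # upper control limit
lcl = mean - 3 * sigma_est   # lower control limit
```

Points falling outside [LCL, UCL] would flag the cleaning or bonding process as out of statistical control and trigger investigation before the data are used in bonding studies.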

  1. Evaluation of stabilization techniques for ion implant processing

    NASA Astrophysics Data System (ADS)

    Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Narcy, Mark E.; Livesay, William R.

    1999-06-01

With the integration of high current ion implant processing into volume CMOS manufacturing, the need for photoresist stabilization to achieve a stable ion implant process is critical. This study compares electron beam stabilization, a non-thermal process, with more traditional thermal stabilization techniques such as hot plate baking and vacuum oven processing. The electron beam processing is carried out in a flood exposure system with no active heating of the wafer. These stabilization techniques are applied to typical ion implant processes that might be found in a CMOS production process flow. The stabilization processes are applied to a 1.1 µm thick PFI-38A i-line photoresist film prior to ion implant processing. Post-stabilization CD variation is detailed with respect to wall slope and feature integrity. SEM photographs detail the effects of the stabilization technique on photoresist features. The thermal stability of the photoresist is shown for different levels of stabilization and post-stabilization thermal cycling. Thermal flow stability of the photoresist is detailed via SEM photographs. A significant improvement in thermal stability is achieved with the electron beam process, such that photoresist features are stable to temperatures in excess of 200 °C. Ion implant processing parameters are evaluated and compared for the different stabilization methods. Ion implant system end-station chamber pressure is detailed as a function of ion implant process and stabilization condition. The ion implant process conditions are detailed for varying factors such as ion current, energy, and total dose. A reduction in the ion implant system's end-station chamber pressure is achieved with the electron beam stabilization process over the other techniques considered. This reduction in end-station chamber pressure is shown to provide a reduction in total process time for a given ion implant dose.
Improvements in the ion implant process are detailed across several combinations of current and energy.

  2. The prevalence of medial coronoid process disease is high in lame large breed dogs and quantitative radiographic assessments contribute to the diagnosis.

    PubMed

    Mostafa, Ayman; Nolte, Ingo; Wefstaedt, Patrick

    2018-06-05

Medial coronoid process disease is a leading cause of thoracic limb lameness in dogs. Computed tomography and arthroscopy are superior to radiography for diagnosing medial coronoid process disease; however, radiography remains the most widely available diagnostic imaging modality in veterinary practice. Objectives of this retrospective observational study were to describe the prevalence of medial coronoid process disease in lame large breed dogs and to apply a novel method for quantifying the radiographic changes associated with the medial coronoid process and subtrochlear-ulnar region in Labrador and Golden Retrievers with confirmed medial coronoid process disease. Purebred Labrador and Golden Retrievers (n = 143, 206 elbows) without and with confirmed medial coronoid process disease were included. The prevalence of medial coronoid process disease in lame large breed dogs was calculated. Mediolateral and craniocaudal radiographs of elbows were analyzed to assess medial coronoid process length and morphology, and subtrochlear-ulnar width. Mean grayscale value was calculated for radial and subtrochlear-ulnar zones. The prevalence of medial coronoid process disease was 20.8%. Labrador and Golden Retrievers were the most affected purebred dogs (29.6%). Elbows with confirmed medial coronoid process disease had short (P < 0.0001) and deformed (∼95%) medial coronoid processes, with associated medial coronoid process osteophytosis (7.5%). Subtrochlear-ulnar sclerosis was evidenced in ∼96% of diseased elbows, with a significant increase (P < 0.0001) in subtrochlear-ulnar width and standardized grayscale value. Radial grayscale value did not differ between groups. Periarticular osteophytosis was identified in 51.4% of elbows with medial coronoid process disease. Medial coronoid process length and morphology, and subtrochlear-ulnar width and standardized grayscale value varied significantly in dogs with confirmed medial coronoid process disease compared to controls.
Findings indicated that medial coronoid process disease has a high prevalence in lame large breed dogs and that quantitative radiographic assessments can contribute to the diagnosis. © 2018 American College of Veterinary Radiology.

  3. The role of rational and experiential processing in influencing the framing effect.

    PubMed

    Stark, Emily; Baldwin, Austin S; Hertel, Andrew W; Rothman, Alexander J

    2017-01-01

    Research on individual differences and the framing effect has focused primarily on how variability in rational processing influences choice. However, we propose that measuring only rational processing presents an incomplete picture of how participants are responding to framed options, as orthogonal individual differences in experiential processing might be relevant. In two studies, we utilize the Rational Experiential Inventory, which captures individual differences in rational and experiential processing, to investigate how both processing types influence decisions. Our results show that differences in experiential processing, but not rational processing, moderated the effect of frame on choice. We suggest that future research should more closely examine the influence of experiential processing on making decisions, to gain a broader understanding of the conditions that contribute to the framing effect.

  4. Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.

    1996-01-01

This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 30 Sep. 1995. The report deals with the development and investigation of the potential use of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. First, data were retrieved using the I/O software and manually processed using Microsoft Excel. Data retrieval and processing were then automated using a program written in C which is able to read the telemetry data and produce plots of time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.

  5. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
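The process capability analysis applied in the case study can be illustrated with a minimal Cp/Cpk computation. The spec limits and measurements below are hypothetical stand-ins for an actual processing characteristic, not values from the paper.

```python
import statistics

def capability(samples, lsl, usl):
    """Return (Cp, Cpk) for measurements against lower/upper spec limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)        # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)           # potential capability (spread only)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual capability, penalizes off-center mean
    return cp, cpk

# Hypothetical bondline-thickness measurements (mils) against an 8-12 mil spec.
cp, cpk = capability([10.1, 9.8, 10.4, 9.9, 10.2, 10.0], lsl=8.0, usl=12.0)
```

A Cpk of roughly 1.33 or above is a common rule of thumb for a capable process; values near or below 1 flag the checkpoint as a candidate for the kind of improvement effort the paper describes.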

  6. Separate cortical networks involved in music perception: preliminary functional MRI evidence for modularity of music processing.

    PubMed

    Schmithorst, Vincent J

    2005-04-01

    Music perception is a quite complex cognitive task, involving the perception and integration of various elements including melody, harmony, pitch, rhythm, and timbre. A preliminary functional MRI investigation of music perception was performed, using a simplified passive listening task. Group independent component analysis (ICA) was used to separate out various components involved in music processing, as the hemodynamic responses are not known a priori. Various components consistent with auditory processing, expressive language, syntactic processing, and visual association were found. The results are discussed in light of various hypotheses regarding modularity of music processing and its overlap with language processing. The results suggest that, while some networks overlap with ones used for language processing, music processing may involve its own domain-specific processing subsystems.

  7. Industrial implementation of spatial variability control by real-time SPC

    NASA Astrophysics Data System (ADS)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

Advanced technology nodes require more and more information to get the wafer process set up well. The critical dimension of components decreases following Moore's law. At the same time, the intra-wafer dispersion linked to the spatial non-uniformity of tool processes does not decrease in the same proportion. APC (Advanced Process Control) systems are being developed in the waferfab to automatically adjust and tune wafer processing, based on a lot of process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This leads us to put the spatial variability under control in real time with our SPC (Statistical Process Control) system. This paper outlines the architecture of an integrated process control system for shape monitoring in 3D, implemented in a waferfab.

  8. Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.

    1974-01-01

    A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.

  9. Laser displacement sensor to monitor the layup process of composite laminate production

    NASA Astrophysics Data System (ADS)

    Miesen, Nick; Groves, Roger M.; Sinke, Jos; Benedictus, Rinze

    2013-04-01

Several types of flaw can occur during the layup process of prepreg composite laminates. Quality control after the production process checks the end product by testing the specimens for flaws introduced during the layup or curing process; however, by then these flaws are already irreversibly embedded in the laminate. This paper demonstrates the use of a laser displacement sensor technique applied during the layup process of prepreg laminates for in-situ detection of typical flaws that can occur during the composite production process. An incorrect number of layers and fibre wrinkling are the dominant flaws during layup. These and other dominant flaws have been modeled to determine the requirements for in-situ monitoring during the layup process of prepreg laminates.

  10. Levels of integration in cognitive control and sequence processing in the prefrontal cortex.

    PubMed

    Bahlmann, Jörg; Korb, Franziska M; Gratton, Caterina; Friederici, Angela D

    2012-01-01

    Cognitive control is necessary to flexibly act in changing environments. Sequence processing is needed in language comprehension to build the syntactic structure in sentences. Functional imaging studies suggest that sequence processing engages the left ventrolateral prefrontal cortex (PFC). In contrast, cognitive control processes additionally recruit bilateral rostral lateral PFC regions. The present study aimed to investigate these two types of processes in one experimental paradigm. Sequence processing was manipulated using two different sequencing rules varying in complexity. Cognitive control was varied with different cue-sets that determined the choice of a sequencing rule. Univariate analyses revealed distinct PFC regions for the two types of processing (i.e. sequence processing: left ventrolateral PFC and cognitive control processing: bilateral dorsolateral and rostral PFC). Moreover, in a common brain network (including left lateral PFC and intraparietal sulcus) no interaction between sequence and cognitive control processing was observed. In contrast, a multivariate pattern analysis revealed an interaction of sequence and cognitive control processing, such that voxels in left lateral PFC and parietal cortex showed different tuning functions for tasks involving different sequencing and cognitive control demands. These results suggest that the difference between the process of rule selection (i.e. cognitive control) and the process of rule-based sequencing (i.e. sequence processing) find their neuronal underpinnings in distinct activation patterns in lateral PFC. Moreover, the combination of rule selection and rule sequencing can shape the response of neurons in lateral PFC and parietal cortex.

  11. Levels of Integration in Cognitive Control and Sequence Processing in the Prefrontal Cortex

    PubMed Central

    Bahlmann, Jörg; Korb, Franziska M.; Gratton, Caterina; Friederici, Angela D.

    2012-01-01

    Cognitive control is necessary to flexibly act in changing environments. Sequence processing is needed in language comprehension to build the syntactic structure in sentences. Functional imaging studies suggest that sequence processing engages the left ventrolateral prefrontal cortex (PFC). In contrast, cognitive control processes additionally recruit bilateral rostral lateral PFC regions. The present study aimed to investigate these two types of processes in one experimental paradigm. Sequence processing was manipulated using two different sequencing rules varying in complexity. Cognitive control was varied with different cue-sets that determined the choice of a sequencing rule. Univariate analyses revealed distinct PFC regions for the two types of processing (i.e. sequence processing: left ventrolateral PFC and cognitive control processing: bilateral dorsolateral and rostral PFC). Moreover, in a common brain network (including left lateral PFC and intraparietal sulcus) no interaction between sequence and cognitive control processing was observed. In contrast, a multivariate pattern analysis revealed an interaction of sequence and cognitive control processing, such that voxels in left lateral PFC and parietal cortex showed different tuning functions for tasks involving different sequencing and cognitive control demands. These results suggest that the difference between the process of rule selection (i.e. cognitive control) and the process of rule-based sequencing (i.e. sequence processing) find their neuronal underpinnings in distinct activation patterns in lateral PFC. Moreover, the combination of rule selection and rule sequencing can shape the response of neurons in lateral PFC and parietal cortex. PMID:22952762

  12. Flow chemistry using milli- and microstructured reactors-from conventional to novel process windows.

    PubMed

    Illg, Tobias; Löb, Patrick; Hessel, Volker

    2010-06-01

The term Novel Process Windows unites different methods of improving existing processes by applying unconventional and harsh process conditions, such as process routes at much elevated pressure or much elevated temperature, or processing in a thermal runaway regime, to achieve a significant impact on process performance. This paper reviews parts of IMM's work, in particular the applicability of the above-mentioned Novel Process Windows to selected chemical reactions. First, general characteristics of microreactors are discussed, such as excellent mass and heat transfer and improved mixing quality. Different types of reactions are presented in which the use of microstructured devices led to increased process performance by applying Novel Process Windows. These examples were chosen to demonstrate how chemical reactions can benefit from the use of milli- and microstructured devices and how existing protocols can be changed toward process conditions hitherto not applicable in standard laboratory equipment. The milli- and microstructured reactors used can also offer advantages in other areas, for example, high-throughput screening of catalysts and better control of size distribution in a particle synthesis process through improved mixing. The chemical industry is under continuous improvement, and much research is being done to synthesize high-value chemicals, to optimize existing processes in view of process safety and energy consumption, and to search for new routes to produce such chemicals. Leitmotifs of such undertakings are often sustainable development(1) and Green Chemistry(2).

  13. Fast but fleeting: adaptive motor learning processes associated with aging and cognitive decline.

    PubMed

    Trewartha, Kevin M; Garcia, Angeles; Wolpert, Daniel M; Flanagan, J Randall

    2014-10-01

Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly (and that has been linked to explicit memory) and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether the fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning but that aging effects on the slow process are independent of explicit memory declines. Copyright © 2014 the authors 0270-6474/14/3413411-11$15.00/0.
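The fast/slow framework described in this abstract is commonly formalized as a two-state model in which each state updates from the trial error with its own learning rate (B) and decays with its own retention factor (A). The sketch below is an illustration of that general model with hypothetical parameter values chosen only to show the dynamics, not the fitted values from the study.

```python
# Two-state adaptation model (illustrative parameters, not fitted values).
A_fast, B_fast = 0.60, 0.30   # fast process: low retention, high learning rate
A_slow, B_slow = 0.99, 0.05   # slow process: high retention, low learning rate

x_fast = x_slow = 0.0
perturbation = 1.0            # constant novel dynamics to be compensated

for trial in range(100):
    error = perturbation - (x_fast + x_slow)   # residual movement error
    x_fast = A_fast * x_fast + B_fast * error  # adapts quickly, decays quickly
    x_slow = A_slow * x_slow + B_slow * error  # adapts slowly, decays slowly

net_adaptation = x_fast + x_slow
```

In this formulation, the reduced retention reported for the older group corresponds to lowering A_slow, which both speeds forgetting and lowers the asymptote of net adaptation.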

  14. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  15. OCLC-MARC Tape Processing: A Functional Analysis.

    ERIC Educational Resources Information Center

    Miller, Bruce Cummings

    1984-01-01

    Analyzes structure of, and data in, the OCLC-MARC record in the form delivered via OCLC's Tape Subscription Service, and outlines important processing functions involved: "unreadable tapes," duplicate records and deduping, match processing, choice processing, locations processing, "automatic" and "input" stamps,…

  16. 7 Processes that Enable NASA Software Engineering Technologies: Value-Added Process Engineering

    NASA Technical Reports Server (NTRS)

    Housch, Helen; Godfrey, Sally

    2011-01-01

The presentation reviews Agency process requirements and the purpose, benefits, and experiences of seven software engineering processes. The processes include: product integration, configuration management, verification, software assurance, measurement and analysis, requirements management, and planning and monitoring.

  17. Risk-based Strategy to Determine Testing Requirement for the Removal of Residual Process Reagents as Process-related Impurities in Bioprocesses.

    PubMed

    Qiu, Jinshu; Li, Kim; Miller, Karen; Raghani, Anil

    2015-01-01

The purpose of this article is to recommend a risk-based strategy for determining clearance testing requirements of the process reagents used in manufacturing biopharmaceutical products. The strategy takes account of four risk factors. Firstly, the process reagents are classified into two categories according to their safety profile and history of use: generally recognized as safe (GRAS) and potential safety concern (PSC) reagents. The clearance testing of GRAS reagents can be eliminated because of their safe use historically and process capability to remove these reagents. An estimated safety margin (Se) value, a ratio of the exposure limit to the estimated maximum reagent amount, is then used to evaluate the necessity for testing the PSC reagents at an early development stage. The Se value is calculated from two risk factors: the starting PSC reagent amount per maximum product dose (Me) and the exposure limit (Le). A worst-case scenario is assumed to estimate the Me value: the PSC reagent of interest is co-purified with the product and no clearance occurs throughout the entire purification process. No clearance testing is required for this PSC reagent if its Se value is ≥1; otherwise clearance testing is needed. Finally, the point of the process reagent introduction to the process is also considered in determining the necessity of the clearance testing for process reagents. How to use the measured safety margin as a criterion for determining PSC reagent testing at process characterization, process validation, and commercial production stages is also described. A large number of process reagents are used in biopharmaceutical manufacturing to control the process performance. Clearance testing for all of the process reagents would be an enormous analytical task. In this article, a risk-based strategy is described to eliminate unnecessary clearance testing for the majority of the process reagents using four risk factors.
The risk factors included in the strategy are (i) safety profile of the reagents, (ii) the starting amount of the process reagents used in the manufacturing process, (iii) the maximum dose of the product, and (iv) the point of introduction of the process reagents in the process. The implementation of the risk-based strategy can eliminate clearance testing for approximately 90% of the process reagents used in the manufacturing processes. This science-based strategy allows us to ensure patient safety and meet regulatory agency expectations throughout the product development life cycle. © PDA, Inc. 2015.
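Under the worst-case assumption stated above, the early-stage screening rule reduces to a one-line comparison: compute Se = Le / Me and require testing only when Se < 1. The function name and the example amounts below are hypothetical illustrations of that rule, not values from the article.

```python
def needs_clearance_testing(exposure_limit, worst_case_amount):
    """Se = Le / Me, assuming the worst case that the PSC reagent
    co-purifies with the product with no clearance at all.
    Clearance testing is required only when the safety margin Se < 1."""
    se = exposure_limit / worst_case_amount
    return se < 1.0

# Hypothetical PSC reagent: exposure limit 50 ug/dose (Le) vs. a
# worst-case carried-over amount of 20 ug/dose (Me).
required = needs_clearance_testing(50.0, 20.0)   # Se = 2.5, so no testing needed
```

Because Me is deliberately overestimated (no clearance assumed), a reagent that passes this screen would also pass with any realistic clearance, which is what lets the strategy safely drop testing for most reagents.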

  18. Titania nanotube powders obtained by rapid breakdown anodization in perchloric acid electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Saima, E-mail: saima.ali@aalto.fi; Hannula, Simo-Pekka

Titania nanotube (TNT) powders are prepared by rapid breakdown anodization (RBA) in a 0.1 M perchloric acid (HClO{sub 4}) solution (Process 1) and in an ethylene glycol (EG) mixture with HClO{sub 4} and water (Process 2). A study of the as-prepared and calcined TNT powders obtained by both processes is implemented to evaluate and compare the morphology, crystal structure, specific surface area, and composition of the nanotubes. Longer TNTs are formed in Process 1, while comparatively larger pore diameter and wall thickness are obtained for the nanotubes prepared by Process 2. The TNTs obtained by Process 1 are converted to nanorods at 350 °C, while nanotubes obtained by Process 2 preserve their tubular morphology up to 350 °C. In addition, the TNTs prepared in the aqueous electrolyte have a crystalline structure, whereas the TNTs obtained by Process 2 are amorphous. Samples calcined up to 450 °C have XRD peaks from the anatase phase, while the rutile phase appears at 550 °C for the TNTs prepared by both processes. The Raman spectra also show clear anatase peaks for all samples except the as-prepared sample obtained by Process 2, thus supporting the XRD findings. FTIR spectra reveal the presence of O-H groups in the structure of the TNTs obtained by both processes; however, the presence is less prominent for annealed samples. Additionally, TNTs obtained by Process 2 have a carbonaceous impurity present in the structure, attributed to the electrolyte used in that process. While a negligible weight loss is typical for TNTs prepared from aqueous electrolytes, a weight loss of 38.6% in the temperature range of 25–600 °C is found for TNTs prepared in the EG electrolyte (Process 2). A large specific surface area of 179.2 m{sup 2} g{sup −1} is obtained for TNTs prepared by Process 1, whereas Process 2 produces nanotubes with a lower specific surface area. The difference appears to correspond to the dimensions of the nanotubes obtained by the two processes.
Graphical abstract: Titania nanotube powders prepared by Process 1 and Process 2 have different crystal structure and specific surface area. Highlights: • Titania nanotube (TNT) powder is prepared in low water organic electrolyte. • Characterization of TNT powders prepared from aqueous and organic electrolyte. • TNTs prepared by Process 1 are crystalline with higher specific surface area. • TNTs obtained by Process 2 have carbonaceous impurities in the structure.

  19. A processing approach to the working memory/long-term memory distinction: evidence from the levels-of-processing span task.

    PubMed

    Rose, Nathan S; Craik, Fergus I M

    2012-07-01

    Recent theories suggest that performance on working memory (WM) tasks involves retrieval from long-term memory (LTM). To examine whether WM and LTM tests have common principles, Craik and Tulving's (1975) levels-of-processing paradigm, which is known to affect LTM, was administered as a WM task: Participants made uppercase, rhyme, or category-membership judgments about words, and immediate recall of the words was required after every 3 or 8 processing judgments. In Experiment 1, immediate recall did not demonstrate a levels-of-processing effect, but a subsequent LTM test (delayed recognition) of the same words did show a benefit of deeper processing. Experiment 2 showed that surprise immediate recall of 8-item lists did demonstrate a levels-of-processing effect, however. A processing account of the conditions in which levels-of-processing effects are and are not found in WM tasks was advanced, suggesting that the extent to which levels-of-processing effects are similar between WM and LTM tests largely depends on the amount of disruption to active maintenance processes. 2012 APA, all rights reserved

  20. Emotional words can be embodied or disembodied: the role of superficial vs. deep types of processing

    PubMed Central

    Abbassi, Ensie; Blanchette, Isabelle; Ansaldo, Ana I.; Ghassemzadeh, Habib; Joanette, Yves

    2015-01-01

    Emotional words are processed rapidly and automatically in the left hemisphere (LH) and slowly, with the involvement of attention, in the right hemisphere (RH). This review aims to find the reason for this difference and suggests that emotional words can be processed superficially or deeply due to the involvement of the linguistic and imagery systems, respectively. During superficial processing, emotional words likely make connections only with semantically associated words in the LH. This part of the process is automatic and may be sufficient for the purpose of language processing. Deep processing, in contrast, seems to involve conceptual information and imagery of a word’s perceptual and emotional properties using autobiographical memory contents. Imagery and the involvement of autobiographical memory likely differentiate between emotional and neutral word processing and explain the salient role of the RH in emotional word processing. It is concluded that the level of emotional word processing in the RH should be deeper than in the LH and, thus, it is conceivable that the slow mode of processing adds certain qualities to the output. PMID:26217288

  1. Process Monitoring Evaluation and Implementation for the Wood Abrasive Machining Process

    PubMed Central

    Saloni, Daniel E.; Lemaster, Richard L.; Jackson, Steven D.

    2010-01-01

    Wood processing industries have continuously developed and improved technologies and processes to transform wood, obtaining better final product quality and thus increasing profits. Abrasive machining is one of the most important of these processes and therefore merits special attention and study. The objective of this work was to evaluate and demonstrate a process monitoring system for use in the abrasive machining of wood and wood-based products. The system developed increases the life of the belt by detecting (using process monitoring sensors) and removing (by cleaning) the abrasive loading during the machining process. This study focused on abrasive belt machining processes and included substantial background work, which provided a solid base for understanding the behavior of the abrasive and the different ways that the abrasive machining process can be monitored. In addition, the background research showed that abrasive belts can be cleaned effectively by the appropriate cleaning technique. The process monitoring system developed included acoustic emission sensors, which tended to be sensitive to belt wear and platen vibration but not to loading, and optical sensors, which were sensitive to abrasive loading. PMID:22163477

  2. Adaptive memory: determining the proximate mechanisms responsible for the memorial advantages of survival processing.

    PubMed

    Burns, Daniel J; Burns, Sarah A; Hwang, Ana J

    2011-01-01

    J. S. Nairne, S. R. Thompson, and J. N. S. Pandeirada (2007) suggested that our memory systems may have evolved to help us remember fitness-relevant information and showed that retention of words rated for their relevance to survival is superior to that of words encoded under other deep processing conditions. The authors present 4 experiments that uncover the proximate mechanisms likely responsible. The authors obtained a recall advantage for survival processing compared with conditions that promoted only item-specific processing or only relational processing. This effect was eliminated when control conditions encouraged both item-specific and relational processing. Data from separate measures of item-specific and relational processing generally were consistent with the view that the memorial advantage for survival processing results from the encoding of both types of processing. Although the present study suggests the proximate mechanisms for the effect, the authors argue that survival processing may be fundamentally different from other memory phenomena for which item-specific and relational processing differences have been implicated. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  3. Implementation of quality by design toward processing of food products.

    PubMed

    Rathore, Anurag S; Kapoor, Gautam

    2017-05-28

    Quality by design (QbD) is a systematic approach that begins with predefined objectives and emphasizes product and process understanding and process control. It is an approach based on principles of sound science and quality risk management. As the food processing industry continues to embrace the idea of in-line, online, and/or at-line sensors and real-time characterization for process monitoring and control, the existing gaps in our ability to monitor multiple parameters/variables associated with the manufacturing process will be alleviated over time. Investments made in developing tools and approaches that facilitate high-throughput analytical and process development, process analytical technology, design of experiments, risk analysis, knowledge management, and enhancement of process/product understanding would pave the way for operational and economic benefits later in the commercialization process and across other product pipelines. This article aims to achieve two major objectives: first, to review the progress that has been made in recent years on QbD implementation in the processing of food products, and second, to present a case study that illustrates the benefits of such QbD implementation.

  4. Energy saving processes for nitrogen removal in organic wastewater from food processing industries in Thailand.

    PubMed

    Johansen, N H; Suksawad, N; Balslev, P

    2004-01-01

    Nitrogen removal from organic wastewater is becoming a demand in developed communities. The use of nitrite as an intermediate in the treatment of wastewater has been largely ignored, but it is actually a relevant energy-saving process compared to conventional nitrification/denitrification using nitrate as the intermediate. Full-scale and pilot-scale results using this process are presented. The process needs some additional process considerations and process control to be utilized. Especially under tropical conditions the nitritation process will run easily, and it must be expected that many activated sludge (AS) treatment plants in the food industry already produce NO2-N. This uncontrolled nitrogen conversion can be the main cause of sludge bulking problems. It is expected that sludge bulking problems can in many cases be solved simply by changing the process control in order to run a more consistent nitritation. Theoretically, this process decreases the oxygen consumption for oxidation by 25%, and the use of a carbon source for the reduction is decreased by 40% compared to the conventional process.
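    The 25% oxygen and 40% carbon-source savings quoted above follow directly from the stoichiometry of the nitrite shortcut. A minimal sketch checking that arithmetic, based on standard textbook stoichiometry rather than anything stated in the record itself:

```python
# Stoichiometric check of the savings claimed for the nitrite-shortcut
# (nitritation/denitritation) route versus conventional
# nitrification/denitrification over nitrate.

M_O2, M_N = 32.0, 14.0

# Oxygen demand of the aerobic step (mol O2 per mol N oxidized):
#   full nitrification  NH4+ -> NO3- : 2.0 mol O2  (~4.57 g O2 / g N)
#   nitritation         NH4+ -> NO2- : 1.5 mol O2  (~3.43 g O2 / g N)
o2_full = 2.0 * M_O2 / M_N
o2_nitrite = 1.5 * M_O2 / M_N
oxygen_saving = 1 - o2_nitrite / o2_full

# Electron equivalents accepted per N in the anoxic step, which set the
# carbon-source demand:
#   denitrification  NO3- -> N2 : 5 e-
#   denitritation    NO2- -> N2 : 3 e-
carbon_saving = 1 - 3 / 5

print(f"oxygen saving: {oxygen_saving:.0%}")   # 25%
print(f"carbon saving: {carbon_saving:.0%}")   # 40%
```

Both figures match the theoretical savings stated in the abstract.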

  5. Application of Ozone MBBR Process in Refinery Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Lin, Wang

    2018-01-01

    The moving bed biofilm reactor (MBBR) is a sewage treatment technology based on a fluidized bed; it can also be regarded as an efficient new reactor type between the activated sludge method and the biofilm method. The application of the ozone + MBBR process in refinery wastewater treatment is mainly studied here; the key point is the design of a combined ozone + MBBR process based on the MBBR process. The ozone + MBBR process is used to analyze the treatment of the COD in concentrated water discharged from a refinery wastewater treatment plant. The experimental results show that the average COD removal rate is 46.0%~67.3% in the treatment of reverse osmosis concentrate by the ozone + MBBR process, and the effluent can meet the relevant standard requirements. Compared with the traditional process, the ozone + MBBR process is more flexible. The investment for this process consists mainly of the ozone generator, blower, and similar items, which are relatively inexpensive; these costs can be offset by the investment avoided relative to a traditional activated sludge process. At the same time, the ozone + MBBR process has obvious advantages in water quality, stability, and other aspects.
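    The COD removal rate reported above is a simple influent/effluent ratio. A minimal sketch with hypothetical concentrations (the record does not report raw values) showing how such a percentage is computed:

```python
# COD removal across a treatment stage, as a fraction of the influent COD.
# The concentrations below are illustrative only; 120 mg/L in and 50 mg/L
# out gives a removal inside the 46.0%-67.3% range reported above.

def cod_removal(cod_in_mg_l, cod_out_mg_l):
    """Fractional COD removal: (influent - effluent) / influent."""
    return (cod_in_mg_l - cod_out_mg_l) / cod_in_mg_l

r = cod_removal(120.0, 50.0)
print(f"COD removal: {r:.1%}")  # 58.3%
```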

  6. Models of recognition: A review of arguments in favor of a dual-process account

    PubMed Central

    DIANA, RACHEL A.; REDER, LYNNE M.; ARNDT, JASON; PARK, HEEKYEONG

    2008-01-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models. PMID:16724763

  8. Techno-economic analysis of biocatalytic processes for production of alkene epoxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borole, Abhijeet P

    2007-01-01

    A techno-economic analysis of two different bioprocesses was conducted: one for the conversion of propylene to propylene oxide (PO) and the other for the conversion of styrene to styrene epoxide (SO). The first process was a lipase-mediated chemo-enzymatic reaction, whereas the second was a one-step enzymatic process using chloroperoxidase. The PO produced through the chemo-enzymatic process is a racemic product, whereas the latter process (based on chloroperoxidase) produces an enantio-pure product. The former process thus falls under the category of a high-volume commodity chemical (PO), whereas the latter is a low-volume, high-value product (SO). A simulation of the process was conducted using the bioprocess engineering software SuperPro Designer v6.0 (Intelligen, Inc., Scotch Plains, NJ) to determine the economic feasibility of the process. The purpose of the exercise was to compare biocatalytic processes with existing chemical processes for the production of alkene epoxides. The results show that further improvements in biocatalyst stability are needed to make these bioprocesses competitive with chemical processes.

  9. The representation of conceptual knowledge: visual, auditory, and olfactory imagery compared with semantic processing.

    PubMed

    Palmiero, Massimiliano; Di Matteo, Rosalia; Belardinelli, Marta Olivetti

    2014-05-01

    Two experiments comparing imaginative processing in different modalities with semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual images, auditory images, and olfactory images in the imaginative block, and to judge whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was varied across experiments. Experiment 1 showed that semantic processing was faster than visual and auditory imaginative processing, whereas no differentiation was possible between semantic processing and olfactory imaginative processing. Experiment 2 revealed that only visual imaginative processing could be differentiated from semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images strongly rely on semantic representations. By contrast, no differentiation is possible within the olfactory domain. Results are discussed within the framework of the imagery debate.

  10. Working memory load eliminates the survival processing effect.

    PubMed

    Kroneisen, Meike; Rummel, Jan; Erdfelder, Edgar

    2014-01-01

    In a series of experiments, Nairne, Thompson, and Pandeirada (2007) demonstrated that words judged for their relevance to a survival scenario are remembered better than words judged for a scenario not relevant on a survival dimension. They explained this survival-processing effect by arguing that nature "tuned" our memory systems to process and remember fitness-relevant information. Kroneisen and Erdfelder (2011) proposed that it may not be survival processing per se that facilitates recall but the richness and distinctiveness with which information is encoded. To further test this account, we investigated how the survival processing effect is affected by cognitive load. If the survival processing effect is due to automatic processes or, alternatively, if survival processing is routinely prioritized in dual-task contexts, we would expect this effect to persist under cognitive load conditions. If the effect relies on cognitively demanding processes like richness and distinctiveness of encoding, however, the survival processing benefit should be hampered by increased cognitive load during encoding. Results were in line with the latter prediction, that is, the survival processing effect vanished under dual-task conditions.

  11. E-learning process maturity level: a conceptual framework

    NASA Astrophysics Data System (ADS)

    Rahmah, A.; Santoso, H. B.; Hasibuan, Z. A.

    2018-03-01

    ICT advancement is certain, and its impact influences many domains, including learning in both formal and informal situations. It leads to a new mindset: we should not only utilize the available ICT to support the learning process, but also improve that process gradually, involving many factors. This phenomenon is called e-learning process evolution. Accordingly, this study explores the maturity level concept to provide a direction for gradual improvement and progression monitoring of the individual e-learning process. An extensive literature review, observation, and construct formation were conducted to develop a conceptual framework for e-learning process maturity level. The conceptual framework consists of the learner, the e-learning process, continuous improvement, the evolution of the e-learning process, technology, and learning objectives. The evolution of the e-learning process is depicted as the current versus expected conditions of the e-learning process maturity level. The study concludes that the e-learning process maturity level conceptual framework may guide the evolution roadmap for the e-learning process, accelerate the evolution, and decrease the negative impact of ICT. The conceptual framework will be verified and tested in a future study.

  12. Heat input and accumulation for ultrashort pulse processing with high average power

    NASA Astrophysics Data System (ADS)

    Finger, Johannes; Bornschlegel, Benedikt; Reininghaus, Martin; Dohrn, Andreas; Nießen, Markus; Gillner, Arnold; Poprawe, Reinhart

    2018-05-01

    Materials processing using ultrashort pulsed laser radiation with pulse durations <10 ps is known to enable very precise processing with negligible thermal load. However, even with picosecond and femtosecond laser radiation, not all of the absorbed energy is converted into ablation products; a distinct fraction remains as residual heat in the processed workpiece. For low average powers and power densities, this heat is usually not relevant for the processing results and dissipates into the workpiece. In contrast, when higher average powers and repetition rates are applied to increase throughput and upscale ultrashort pulse processing, this heat input becomes relevant and significantly affects the achieved processing results. In this paper, we outline the relevance of heat input for ultrashort pulse processing, starting with the heat input of a single ultrashort laser pulse. Heat accumulation during ultrashort pulse processing with high repetition rates is discussed, as well as heat accumulation for materials processing using pulse bursts. In addition, the relevance of heat accumulation with multiple scanning passes and processing with multiple laser spots is shown.

  13. Defining and reconstructing clinical processes based on IHE and BPMN 2.0.

    PubMed

    Strasser, Melanie; Pfeifer, Franz; Helm, Emmanuel; Schuler, Andreas; Altmann, Josef

    2011-01-01

    This paper describes the current status and results of our process management system for defining and reconstructing clinical care processes, which makes it possible to compare, analyze, and evaluate clinical processes and, further, to identify high-cost tasks or stays. The system is founded on IHE, which guarantees standardized interfaces and interoperability between clinical information systems. At the heart of the system is BPMN, a modeling notation and specification language that allows the definition and execution of clinical processes. The system provides functionality to define clinical core processes independent of the healthcare information system and to execute the processes in a workflow engine. Furthermore, the reconstruction of clinical processes is done by evaluating an IHE audit log database, which records patient movements within a health care facility. The main goal of the system is to assist hospital operators and clinical process managers in detecting discrepancies between defined and actual clinical processes, as well as in identifying the main causes of high medical costs. Beyond that, the system can potentially contribute to reconstructing and improving clinical processes and enhancing cost control and patient care quality.

  14. Process qualification and testing of LENS deposited AY1E0125 D-bottle brackets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atwood, Clinton J.; Smugeresky, John E.; Jew, Michael

    2006-11-01

    The LENS Qualification team had the goal of performing a process qualification for the Laser Engineered Net Shaping™ (LENS®) process. Process qualification requires that a part be selected for process demonstration; the AY1E0125 D-Bottle Bracket from the W80-3 was selected for this work. The repeatability of the LENS process was baselined to determine process parameters. Six D-Bottle brackets were deposited using LENS, machined to final dimensions, and tested in comparison to conventionally processed brackets. The tests, taken from ES1E0003, included a mass analysis and structural dynamic testing, including free-free and assembly-level modal tests and Haversine shock tests. The LENS brackets performed with very similar characteristics to the conventionally processed brackets. Based on the results of the testing, it was concluded that the performance of the brackets made them eligible for parallel path testing in subsystem-level tests. The testing results and process rigor qualified the LENS process as detailed in EER200638525A.

  15. Sustainability assessment of shielded metal arc welding (SMAW) process

    NASA Astrophysics Data System (ADS)

    Alkahla, Ibrahim; Pervaiz, Salman

    2017-09-01

    The shielded metal arc welding (SMAW) process is one of the most commonly employed material joining processes, utilized in various industrial sectors such as marine, shipbuilding, automotive, aerospace, construction, and petrochemicals. Increasing pressure on the manufacturing sector demands that the welding process be sustainable in nature. The SMAW process incorporates several types of input and output streams. The sustainability concerns associated with the SMAW process are linked to these streams: electrical energy requirements, input material consumption, slag formation, fume emission, and hazardous working conditions affecting human health and occupational safety. To enhance the environmental performance of the SMAW welding process, there is a need to characterize its sustainability within the broad framework of sustainability. Most of the available literature focuses on the technical and economic aspects of the welding process; the environmental and social aspects are rarely addressed. This study reviews the SMAW process with respect to the triple bottom line (economic, environmental, and social) approach to sustainability. Finally, the study concludes with recommendations towards achieving an economical and sustainable SMAW welding process.

  16. Decontamination and disposal of PCB wastes.

    PubMed Central

    Johnston, L E

    1985-01-01

    Decontamination and disposal processes for PCB wastes are reviewed. Processes are classed as incineration, chemical reaction, or decontamination. Incineration technologies are not limited to rigorous high-temperature treatment but include those with innovations in the use of oxidant, heat transfer, and residue recycling. Chemical processes include the sodium processes, radiant energy processes, and low-temperature oxidations. Typical processing rates and associated costs are provided where possible. PMID:3928363

  17. Logistics Control Facility: A Normative Model for Total Asset Visibility in the Air Force Logistics System

    DTIC Science & Technology

    1994-09-01

    Issue: Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials... inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation... process, the speed at which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in

  18. Definition and documentation of engineering processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, G.W.

    1997-11-01

    This tutorial is an extract of a two-day workshop developed under the auspices of the Quality Engineering Department at Sandia National Laboratories. The presentation starts with basic definitions and addresses why processes should be defined and documented. It covers three primary topics: (1) process considerations and rationale, (2) an approach to defining and documenting engineering processes, and (3) an IDEF0 model of the process for defining engineering processes.

  19. Method for enhanced atomization of liquids

    DOEpatents

    Thompson, Richard E.; White, Jerome R.

    1993-01-01

    In a process for atomizing a slurry or liquid process stream in which a slurry or liquid is passed through a nozzle to provide a primary atomized process stream, an improvement which comprises subjecting the liquid or slurry process stream to microwave energy as the liquid or slurry process stream exits the nozzle, wherein sufficient microwave heating is provided to flash vaporize the primary atomized process stream.

  20. Rethinking a Negative Event: The Affective Impact of Ruminative versus Imagery-Based Processing of Aversive Autobiographical Memories.

    PubMed

    Slofstra, Christien; Eisma, Maarten C; Holmes, Emily A; Bockting, Claudi L H; Nauta, Maaike H

    2017-01-01

    Ruminative (abstract verbal) processing during recall of aversive autobiographical memories may serve to dampen their short-term affective impact. Experimental studies indeed demonstrate that verbal processing of non-autobiographical material and of positive autobiographical memories evokes weaker affective responses than imagery-based processing. In the current study, we hypothesized that abstract verbal or concrete verbal processing of an aversive autobiographical memory would result in weaker affective responses than imagery-based processing. The affective impact of abstract verbal versus concrete verbal versus imagery-based processing during recall of an aversive autobiographical memory was investigated in a non-clinical sample (n = 99) using both an observational and an experimental design. Observationally, we examined whether spontaneous use of processing modes (both state and trait measures) was associated with the impact of aversive autobiographical memory recall on negative and positive affect. Experimentally, the causal relation between processing modes and affective impact was investigated by manipulating the processing mode during retrieval of the same aversive autobiographical memory. The main findings were that, in the observational part of the study, higher levels of trait (but not state) measures of both ruminative and imagery-based processing, as well as depressive symptomatology, were positively correlated with higher levels of negative affective impact. In the experimental part, no main effect of processing mode on the affective impact of autobiographical memories was found; however, a significant moderating effect of depressive symptomatology emerged. Only for individuals with low levels of depressive symptomatology did concrete verbal (but not abstract verbal) processing of the aversive autobiographical memory result in weaker affective responses compared to imagery-based processing. These results cast doubt on the hypothesis that ruminative processing of aversive autobiographical memories serves to avoid the negative emotions evoked by such memories. Furthermore, the findings suggest that depressive symptomatology is associated with both the spontaneous use and the affective impact of processing modes during recall of aversive autobiographical memories. Clinical studies are needed that examine the role of processing modes during aversive autobiographical memory recall in depression, including the potential effectiveness of targeting processing modes in therapy.

  1. Active pharmaceutical ingredient (API) production involving continuous processes--a process system engineering (PSE)-assisted design framework.

    PubMed

    Cervera-Padrell, Albert E; Skovby, Tommy; Kiil, Søren; Gani, Rafiqul; Gernaey, Krist V

    2012-10-01

    A systematic framework is proposed for the design of continuous pharmaceutical manufacturing processes. Specifically, the design framework focuses on organic chemistry based, active pharmaceutical ingredient (API) synthetic processes, but could potentially be extended to biocatalytic and fermentation-based products. The method exploits the synergic combination of continuous flow technologies (e.g., microfluidic techniques) and process systems engineering (PSE) methods and tools for faster process design and increased process understanding throughout the whole drug product and process development cycle. The design framework structures the many different and challenging design problems (e.g., solvent selection, reactor design, and design of separation and purification operations), driving the user from the initial drug discovery steps--where process knowledge is very limited--toward detailed design and analysis. Examples from the literature of PSE methods and tools applied to pharmaceutical process design and novel pharmaceutical production technologies are provided throughout the text, assisting in the accumulation and interpretation of process knowledge. Different criteria are suggested for the selection of batch and continuous processes so that the whole design results in low capital and operational costs as well as a low environmental footprint. The design framework has been applied to the retrofit of an existing batch-wise process used by H. Lundbeck A/S to produce an API: zuclopenthixol. Some of its batch operations were successfully converted into continuous mode, obtaining higher yields that allowed a significant simplification of the whole process. The material and environmental footprint of the process--evaluated through the process mass intensity index, that is, kg of material used per kg of product--was reduced to half of its initial value, with potential for further reduction. The case study includes reaction steps typically used by the pharmaceutical industry featuring different characteristic reaction times, as well as L-L separation and distillation-based solvent exchange steps, and thus constitutes a good example of how the design framework can be useful for efficiently designing novel or already existing API manufacturing processes that take advantage of continuous processing. Copyright © 2012 Elsevier B.V. All rights reserved.
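    The process mass intensity index mentioned above is a plain ratio. A minimal sketch with hypothetical masses (not Lundbeck's actual figures) illustrating the reported halving:

```python
# Process mass intensity (PMI) as defined in the abstract: kg of material
# used per kg of product. All masses below are made up purely to
# illustrate the "reduced to half" result; the paper reports no raw masses.

def pmi(total_mass_in_kg, product_mass_kg):
    """kg of all input material consumed per kg of API produced."""
    return total_mass_in_kg / product_mass_kg

batch = pmi(total_mass_in_kg=250.0, product_mass_kg=2.0)       # 125.0
continuous = pmi(total_mass_in_kg=125.0, product_mass_kg=2.0)  # 62.5

print(continuous / batch)  # 0.5, i.e. half the initial value
```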

  2. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  3. Comparison of property between two Viking Seismic tapes

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Yamada, R.

    2016-12-01

    The restoration work on the seismometer data from the Viking Lander 2 is still continuing. Originally, the data were processed and archived separately at MIT and UTIG, and each dataset is accessible via the Internet today. The file formats used to store the data differ, but both are currently readable thanks to continuous investigation. However, there is some inconsistency between the datasets, although most of the data are highly consistent. To understand the differences, knowledge of spacecraft archiving and off-line processing is required, because these differences were caused by the off-line processing. The data processing of spacecraft often requires merging and sorting of the raw data: merge processing is normally performed to eliminate duplicated data, and sort processing is performed to fix the data order. UTIG does not seem to have performed this merge and sort processing; therefore, the UTIG-processed data retain duplicates. The MIT-processed data did undergo merge and sort processing, but the raw data sometimes include wrong time tags, which cannot be fixed strictly after sorting. Also, the MIT-processed data have sufficient documentation to understand the metadata, while the UTIG data have only a brief instruction. Therefore, the MIT and UTIG data are treated as complementary, and a better data set can be established using both. In this presentation, we show the method used to build a better data set of the Viking Lander 2 seismic data.
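    The merge and sort processing described above can be sketched in a few lines; the record layout used here (a time tag plus a payload) is a hypothetical simplification, not the actual Viking telemetry format:

```python
# Sketch of the off-line merge-and-sort step: merge processing eliminates
# duplicated records, and sort processing fixes the data order by time tag.
# (As noted above, records carrying a wrong time tag cannot be repaired
# this way; sorting only restores order among correctly tagged records.)

def merge_and_sort(records):
    """records: list of (time_tag, payload) tuples, possibly duplicated
    and out of order, as in raw downlinked telemetry."""
    deduped = set(records)   # merge: drop exact duplicates
    return sorted(deduped)   # sort: restore time order

raw = [(3, b"c"), (1, b"a"), (3, b"c"), (2, b"b")]
print(merge_and_sort(raw))  # [(1, b'a'), (2, b'b'), (3, b'c')]
```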

  4. Holistic processing, contact, and the other-race effect in face recognition.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-12-01

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Process monitoring and visualization solutions for hot-melt extrusion: a review.

    PubMed

    Saerens, Lien; Vervaet, Chris; Remon, Jean Paul; De Beer, Thomas

    2014-02-01

    Hot-melt extrusion (HME) is applied as a continuous pharmaceutical manufacturing process for the production of a variety of dosage forms and formulations. To ensure the continuity of this process, the quality of the extrudates must be assessed continuously during manufacturing. The objective of this review is to provide an overview and evaluation of the available process analytical techniques which can be applied in hot-melt extrusion. Pharmaceutical extruders are equipped with traditional (univariate) process monitoring tools, observing barrel and die temperatures, throughput, screw speed, torque, drive amperage, melt pressure and melt temperature. The relevance of several spectroscopic process analytical techniques for monitoring and control of pharmaceutical HME has been explored recently. Nevertheless, many other sensors visualizing HME and measuring diverse critical product and process parameters with potential use in pharmaceutical extrusion are available, and were thoroughly studied in polymer extrusion. The implementation of process analytical tools in HME serves two purposes: (1) improving process understanding by monitoring and visualizing the material behaviour and (2) monitoring and analysing critical product and process parameters for process control, making it possible to maintain a desired process state and guarantee the quality of the end product. This review is the first to provide an evaluation of the process analytical tools applied for pharmaceutical HME monitoring and control, and discusses techniques that have been used in polymer extrusion and have potential for monitoring and control of pharmaceutical HME. © 2013 Royal Pharmaceutical Society.

  6. Process for improving metal production in steelmaking processes

    DOEpatents

    Pal, Uday B.; Gazula, Gopala K. M.; Hasham, Ali

    1996-01-01

    A process and apparatus for improving metal production in ironmaking and steelmaking processes is disclosed. The use of an inert metallic conductor in the slag containing crucible and the addition of a transition metal oxide to the slag are the disclosed process improvements.

  7. Materials processing in space: Early experiments

    NASA Technical Reports Server (NTRS)

    Naumann, R. J.; Herring, H. W.

    1980-01-01

    The characteristics of the space environment were reviewed. Potential applications of space processing are discussed and include metallurgical processing, and processing of semiconductor materials. The behavior of fluid in low gravity is described. The evolution of apparatus for materials processing in space was reviewed.

  8. Abhijit Dutta | NREL

    Science.gov Websites

    Techno-economic analysis; process model development for existing and conceptual processes; detailed heat integration; economic analysis of integrated processes; integration of process simulation learnings into controls. "Conceptual Process Design and Techno-Economic Assessment of Ex Situ Catalytic Fast Pyrolysis of Biomass: A

  9. 7 CFR 52.806 - Color.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1... cherries that vary markedly from this color due to oxidation, improper processing, or other causes, or that... to oxidation, improper processing, or other causes, or that are undercolored, does not exceed the...

  10. 7 CFR 52.806 - Color.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1... cherries that vary markedly from this color due to oxidation, improper processing, or other causes, or that... to oxidation, improper processing, or other causes, or that are undercolored, does not exceed the...

  11. Meat Processing.

    ERIC Educational Resources Information Center

    Legacy, Jim; And Others

    This publication provides an introduction to meat processing for adult students in vocational and technical education programs. Organized in four chapters, the booklet provides a brief overview of the meat processing industry and the techniques of meat processing and butchering. The first chapter introduces the meat processing industry and…

  12. 40 CFR 60.2025 - What if my chemical recovery unit is not listed in § 60.2020(n)?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  13. 40 CFR 60.2025 - What if my chemical recovery unit is not listed in § 60.2020(n)?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  14. 40 CFR 60.2025 - What if my chemical recovery unit is not listed in § 60.2020(n)?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  15. 40 CFR 60.2558 - What if a chemical recovery unit is not listed in § 60.2555(n)?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials that are recovered. (3) A description (including a process flow diagram) of the process in which... process. (4) A description (including a process flow diagram) of the chemical constituent recovery process...

  16. Integrated decontamination process for metals

    DOEpatents

    Snyder, Thomas S.; Whitlow, Graham A.

    1991-01-01

    An integrated process for decontamination of metals, particularly metals that are used in the nuclear energy industry contaminated with radioactive material. The process combines the processes of electrorefining and melt refining to purify metals that can be decontaminated using either electrorefining or melt refining processes.

  17. Case Studies in Continuous Process Improvement

    NASA Technical Reports Server (NTRS)

    Mehta, A.

    1997-01-01

    This study focuses on improving the SMT assembly process in a low-volume, high-reliability environment with emphasis on fine pitch and BGA packages. Before a process improvement is carried out, it is important to evaluate where the process stands in terms of process capability.

  18. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and implemented in software a mathematical model of the nonstationary separation processes proceeding in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. Using this model, the parameters of the separation process for germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.

  19. International Best Practices for Pre-Processing and Co-Processing Municipal Solid Waste and Sewage Sludge in the Cement Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasanbeigi, Ali; Lu, Hongyou; Williams, Christopher

    The purpose of this report is to describe international best practices for pre-processing and co-processing of MSW and sewage sludge in cement plants, for the benefit of countries that wish to develop co-processing capacity. The report is divided into three main sections. Section 2 describes the fundamentals of co-processing, Section 3 describes exemplary international regulatory and institutional frameworks for co-processing, and Section 4 describes international best practices related to the technological aspects of co-processing.

  20. Thermochemical water decomposition processes

    NASA Technical Reports Server (NTRS)

    Chao, R. E.

    1974-01-01

    Thermochemical processes which lead to the production of hydrogen and oxygen from water without the consumption of any other material have a number of advantages compared to other processes such as water electrolysis. It is possible to operate a sequence of chemical steps with net work requirements equal to zero at temperatures well below the temperature required for water dissociation in a single step. Various types of procedures are discussed, giving attention to halide processes, reverse Deacon processes, iron oxide and carbon oxide processes, and metal and alkali metal processes. Economic questions are also considered.
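    The defining property of such cycles, that summing all steps leaves nothing consumed but water and nothing produced but hydrogen and oxygen, can be checked mechanically. A minimal sketch, using a simplified two-step iron oxide cycle chosen purely for illustration (the species names and coefficients below are textbook stoichiometry, not taken from this paper):

```python
from collections import Counter

def net_reaction(steps):
    """Sum reaction steps and cancel intermediates.

    Each step maps species -> signed stoichiometric coefficient
    (negative = consumed, positive = produced). Returns the net balance
    with fully cancelled species removed.
    """
    net = Counter()
    for step in steps:
        net.update(step)                     # add signed coefficients
    return {s: c for s, c in net.items() if c != 0}

# Simplified two-step iron oxide cycle (illustrative):
#   Fe3O4 -> 3 FeO + 1/2 O2     (high-temperature reduction)
#   3 FeO + H2O -> Fe3O4 + H2   (water-splitting step)
steps = [
    {"Fe3O4": -1, "FeO": 3, "O2": 0.5},
    {"FeO": -3, "H2O": -1, "Fe3O4": 1, "H2": 1},
]
print(net_reaction(steps))  # {'O2': 0.5, 'H2O': -1, 'H2': 1}
```

    The intermediates (Fe3O4, FeO) cancel, so the net reaction is H2O -> H2 + 1/2 O2, i.e. only water is consumed, as required of a closed thermochemical cycle.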

  1. Voyager image processing at the Image Processing Laboratory

    NASA Astrophysics Data System (ADS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-09-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  2. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  3. A novel process control method for a TT-300 E-Beam/X-Ray system

    NASA Astrophysics Data System (ADS)

    Mittendorfer, Josef; Gallnböck-Wagner, Bernhard

    2018-02-01

    This paper presents some aspects of the process control method for a TT-300 E-Beam/X-Ray system at Mediscan, Austria. The novelty of the approach is the seamless integration of routine monitoring dosimetry with process data. This makes it possible to calculate a parametric dose for each production unit and, consequently, to perform fine-grained, holistic process performance monitoring. Process performance is documented in process control charts for the analysis of individual runs as well as historic trending of runs of specific process categories over a specified time range.
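    The control-chart element of this approach can be illustrated with a minimal sketch: limits are set at mean ± 3σ from a baseline of in-control runs, and later runs falling outside them are flagged. All numbers and names below are assumptions for illustration; the actual TT-300 parametric dose model is not described in the abstract.

```python
from statistics import mean, stdev

def control_limits(baseline):
    """3-sigma control limits from an in-control baseline of doses (kGy)."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s

def out_of_control(new_runs, baseline):
    """Return the runs whose dose falls outside the control limits."""
    lo, hi = control_limits(baseline)
    return [d for d in new_runs if not (lo <= d <= hi)]

baseline = [25.1, 25.4, 24.9, 25.2, 25.0]      # historic in-control runs
print(out_of_control([25.3, 31.8], baseline))  # [31.8]
```

    Computing the limits from a historic baseline, rather than from the runs under inspection, keeps an outlier from inflating σ and masking itself.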

  4. A minimally processed dietary pattern is associated with lower odds of metabolic syndrome among Lebanese adults.

    PubMed

    Nasreddine, Lara; Tamim, Hani; Itani, Leila; Nasrallah, Mona P; Isma'eel, Hussain; Nakhoul, Nancy F; Abou-Rizk, Joana; Naja, Farah

    2018-01-01

    To (i) estimate the consumption of minimally processed, processed and ultra-processed foods in a sample of Lebanese adults; (ii) explore patterns of intakes of these food groups; and (iii) investigate the association of the derived patterns with cardiometabolic risk. Cross-sectional survey. Data collection included dietary assessment using an FFQ and biochemical, anthropometric and blood pressure measurements. Food items were categorized into twenty-five groups based on the NOVA food classification. The contribution of each food group to total energy intake (TEI) was estimated. Patterns of intakes of these food groups were examined using exploratory factor analysis. Multivariate logistic regression analysis was used to evaluate the associations of derived patterns with cardiometabolic risk factors. Greater Beirut area, Lebanon. Adults ≥18 years (n 302) with no prior history of chronic diseases. Of TEI, 36·53 and 27·10 % were contributed by ultra-processed and minimally processed foods, respectively. Two dietary patterns were identified: the 'ultra-processed' and the 'minimally processed/processed'. The 'ultra-processed' consisted mainly of fast foods, snacks, meat, nuts, sweets and liquor, while the 'minimally processed/processed' consisted mostly of fruits, vegetables, legumes, breads, cheeses, sugar and fats. Participants in the highest quartile of the 'minimally processed/processed' pattern had significantly lower odds for metabolic syndrome (OR=0·18, 95 % CI 0·04, 0·77), hyperglycaemia (OR=0·25, 95 % CI 0·07, 0·98) and low HDL cholesterol (OR=0·17, 95 % CI 0·05, 0·60). The study findings may be used for the development of evidence-based interventions aimed at encouraging the consumption of minimally processed foods.
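    The first analysis step above, estimating each NOVA group's contribution to total energy intake (TEI), reduces to a simple aggregation. A minimal sketch with invented food items and energies (the group labels follow NOVA; everything else is illustrative):

```python
def tei_share(items):
    """items: list of (nova_group, energy_kcal) pairs.

    Returns {group: percentage of total energy intake}.
    """
    total = sum(energy for _, energy in items)
    shares = {}
    for group, energy in items:
        shares[group] = shares.get(group, 0) + energy
    return {g: round(100 * e / total, 2) for g, e in shares.items()}

# Invented one-day intake for illustration:
diet = [("ultra-processed", 730), ("minimally processed", 540),
        ("processed", 430), ("ultra-processed", 300)]
print(tei_share(diet))
# {'ultra-processed': 51.5, 'minimally processed': 27.0, 'processed': 21.5}
```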

  5. Increasing patient safety and efficiency in transfusion therapy using formal process definitions.

    PubMed

    Henneman, Elizabeth A; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Andrzejewski, Chester; Merrigan, Karen; Cobleigh, Rachel; Frederick, Kimberly; Katz-Bassett, Ethan; Henneman, Philip L

    2007-01-01

    The administration of blood products is a common, resource-intensive, and potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted toward quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (ie, at the point of care) has essentially remained unchanged over the past several decades. Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, although useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions. One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential error and improve the transfusion process has not previously been reported. The purpose of this article is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.

  6. Chemical interaction matrix between reagents in a Purex based process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brahman, R.K.; Hennessy, W.P.; Paviet-Hartmann, P.

    2008-07-01

    The United States Department of Energy (DOE) is the responsible entity for the disposal of the United States excess weapons grade plutonium. DOE selected a PUREX-based process to convert plutonium to low-enriched mixed oxide fuel for use in commercial nuclear power plants. To initiate this process in the United States, a Mixed Oxide (MOX) Fuel Fabrication Facility (MFFF) is under construction and will be operated by Shaw AREVA MOX Services at the Savannah River Site. This facility will be licensed and regulated by the U.S. Nuclear Regulatory Commission (NRC). A PUREX process, similar to the one used at La Hague, France, will purify plutonium feedstock through solvent extraction. MFFF employs two major process operations to manufacture MOX fuel assemblies: (1) the Aqueous Polishing (AP) process to remove gallium and other impurities from plutonium feedstock and (2) the MOX fuel fabrication process (MP), which processes the oxides into pellets and manufactures the MOX fuel assemblies. The AP process consists of three major steps, dissolution, purification, and conversion, and is the center of the primary chemical processing. A study of process hazards controls has been initiated that will provide knowledge and protection against the chemical risks associated with mixing of reagents over the lifetime of the process. This paper presents a comprehensive chemical interaction matrix evaluation for the reagents used in the PUREX-based process. The chemical interaction matrix supplements the process conditions by providing a checklist of any potential inadvertent chemical reactions that may take place. It also identifies the chemical compatibility/incompatibility of the reagents if mixed by failure of operations or equipment within the process itself or mixed inadvertently by a technician in the laboratories.

  7. Ultra-processed foods have the worst nutrient profile, yet they are the most available packaged products in a sample of New Zealand supermarkets.

    PubMed

    Luiten, Claire M; Steenhuis, Ingrid Hm; Eyles, Helen; Ni Mhurchu, Cliona; Waterlander, Wilma E

    2016-02-01

    To examine the availability of packaged food products in New Zealand supermarkets by level of industrial processing, nutrient profiling score (NPSC), price (energy, unit and serving costs) and brand variety. Secondary analysis of cross-sectional survey data on packaged supermarket food and non-alcoholic beverages. Products were classified according to level of industrial processing (minimally, culinary and ultra-processed) and their NPSC. Packaged foods available in four major supermarkets in Auckland, New Zealand. Packaged supermarket food products for the years 2011 and 2013. The majority (84% in 2011 and 83% in 2013) of packaged foods were classified as ultra-processed. A significant positive association was found between the level of industrial processing and NPSC, i.e., ultra-processed foods had a worse nutrient profile (NPSC=11.63) than culinary processed foods (NPSC=7.95), which in turn had a worse nutrient profile than minimally processed foods (NPSC=3.27), P<0.001. No clear associations were observed between the three price measures and level of processing. The study observed many variations of virtually the same product. The ten largest food manufacturers produced 35% of all packaged foods available. In New Zealand supermarkets, ultra-processed foods comprise the largest proportion of packaged foods and are less healthy than less processed foods. The lack of significant price difference between ultra- and less processed foods suggests ultra-processed foods might provide time-poor consumers with more value for money. These findings highlight the need to improve the supermarket food supply by reducing numbers of ultra-processed foods and by reformulating products to improve their nutritional profile.
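    The central comparison above, mean nutrient profiling score (NPSC) by level of industrial processing, can be sketched as a small grouping computation. The product data below are invented; the scores merely echo the magnitudes reported in the abstract (higher NPSC = less healthy):

```python
from statistics import mean

def mean_npsc(products):
    """products: list of (processing_level, npsc) pairs.

    Returns {processing_level: mean NPSC across that level's products}.
    """
    by_level = {}
    for level, score in products:
        by_level.setdefault(level, []).append(score)
    return {level: round(mean(scores), 2) for level, scores in by_level.items()}

# Invented sample of packaged products:
sample = [("minimally", 3.0), ("minimally", 3.5),
          ("culinary", 8.0), ("culinary", 7.9),
          ("ultra", 11.5), ("ultra", 11.8)]
print(mean_npsc(sample))  # {'minimally': 3.25, 'culinary': 7.95, 'ultra': 11.65}
```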

  8. Trends in consumption of ultra-processed foods and obesity in Sweden between 1960 and 2010.

    PubMed

    Juul, Filippa; Hemmingsson, Erik

    2015-12-01

    To investigate how consumption of ultra-processed foods has changed in Sweden in relation to obesity. Nationwide ecological analysis of changes in processed foods along with corresponding changes in obesity. Trends in per capita food consumption during 1960-2010 were investigated using data from the Swedish Board of Agriculture. Food items were classified as group 1 (unprocessed/minimally processed), group 2 (processed culinary ingredients) or group 3 (3·1, processed food products; and 3·2, ultra-processed products). Obesity prevalence data were pooled from the peer-reviewed literature, Statistics Sweden and the WHO Global Health Observatory. Nationwide analysis in Sweden, 1960-2010. Swedish nationals aged 18 years and older. During the study period consumption of group 1 foods (minimal processing) decreased by 2 %, while consumption of group 2 foods (processed ingredients) decreased by 34 %. Consumption of group 3·1 foods (processed food products) increased by 116 % and group 3·2 foods (ultra-processed products) increased by 142 %. Among ultra-processed products, there were particularly large increases in soda (315 %; 22 v. 92 litres/capita per annum) and snack foods such as crisps and candies (367 %; 7 v. 34 kg/capita per annum). In parallel to these changes in ultra-processed products, rates of adult obesity increased from 5 % in 1980 to over 11 % in 2010. The consumption of ultra-processed products (i.e. foods with low nutritional value but high energy density) has increased dramatically in Sweden since 1960, which mirrors the increased prevalence of obesity. Future research should clarify the potential causal role of ultra-processed products in weight gain and obesity.

  9. Differential Phonological and Semantic Modulation of Neurophysiological Responses to Visual Word Recognition.

    PubMed

    Drakesmith, Mark; El-Deredy, Wael; Welbourne, Stephen

    2015-01-01

    Reading words for meaning relies on orthographic, phonological and semantic processing. The triangle model implicates a direct orthography-to-semantics pathway and a phonologically mediated orthography-to-semantics pathway, which interact with each other. The temporal evolution of processing in these routes is not well understood, although theoretical evidence predicts early phonological processing followed by interactive phonological and semantic processing. This study used electroencephalography event-related potential (ERP) analysis and magnetoencephalography (MEG) source localisation to identify temporal markers and the corresponding neural generators of these processes in early (∼200 ms) and late (∼400 ms) neurophysiological responses to visual words, pseudowords and consonant strings. ERP showed an effect of phonology but not semantics in both time windows, although at ∼400 ms there was an effect of stimulus familiarity. Phonological processing at ∼200 ms was localised to the left occipitotemporal cortex and the inferior frontal gyrus. At ∼400 ms, there was continued phonological processing in the inferior frontal gyrus and additional semantic processing in the anterior temporal cortex. There was also an area in the left temporoparietal junction which was implicated in both phonological and semantic processing. In ERP, the semantic response at ∼400 ms appeared to be masked by concurrent processes relating to familiarity, while MEG successfully differentiated these processes. The results support the prediction of early phonological processing followed by an interaction of phonological and semantic processing during word recognition. Neuroanatomical loci of these processes are consistent with previous neuropsychological and functional magnetic resonance imaging studies. The results also have implications for the classical interpretation of N400-like responses as markers for semantic processing.

  10. Basic abnormalities in visual processing affect face processing at an early age in autism spectrum disorder.

    PubMed

    Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal

    2010-12-15

    A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. [Contention on the theory of processing techniques of Chinese materia medica in the Ming-Qing period].

    PubMed

    Chen, Bin; Jia, Tianzhu

    2015-03-01

    Building on the golden age of development of medicinal processing techniques in the Song dynasty, the theory and techniques of processing developed further and matured in the Ming-Qing dynasties. The views of some physicians on the processing of common medicinals, such as Radix Rehmanniae and Radix Ophiopogonis, were questioned, with new processing methods put forward and argued against by those insisting on traditional ones, marking the progress of the art of processing. By reviewing the contention over the technical theory of medicinal processing in the Ming-Qing period, useful references can be provided for the inheritance and development of the traditional art of processing medicinals.

  12. Process Feasibility Study in Support of Silicon Material, Task 1

    NASA Technical Reports Server (NTRS)

    Li, K. Y.; Hansen, K. C.; Yaws, C. L.

    1979-01-01

    During this reporting period, major activities were devoted to process system properties, chemical engineering and economic analyses. Analysis of process system properties was continued for materials involved in the alternate processes under consideration for solar cell grade silicon. The following property data are reported for silicon tetrafluoride: critical constants, vapor pressure, heat of vaporization, heat capacity, density, surface tension, viscosity, thermal conductivity, heat of formation and Gibbs free energy of formation. Chemical engineering analysis of the BCL process was continued with primary efforts being devoted to the preliminary process design. Status and progress are reported for base case conditions; process flow diagram; reaction chemistry; material and energy balances; and major process equipment design.

  13. Technology and development requirements for advanced coal conversion systems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A compendium of coal conversion process descriptions is presented. The SRS and MC data bases were utilized to provide information, particularly in the areas of existing process designs and process evaluations. Additional information requirements were established and arrangements were made to visit process developers, pilot plants, and process development units to obtain information that was not otherwise available. Plant designs, process descriptions and operating conditions, and performance characteristics were analyzed, and requirements for further development were identified and evaluated to determine the impact of these requirements on the process commercialization potential from the standpoint of economics and technical feasibility. A preliminary methodology was established for the comparative technical and economic assessment of advanced processes.

  14. The s-process in massive stars: the Shell C-burning contribution

    NASA Astrophysics Data System (ADS)

    Pignatari, Marco; Gallino, R.; Baldovin, C.; Wiescher, M.; Herwig, F.; Heger, A.; Heil, M.; Käppeler, F.

    In massive stars the s-process (slow neutron capture process) is activated at different temperatures, during He-burning and during convective shell C-burning. At solar metallicity, the neutron capture process in the convective C-shell adds a substantial contribution to the s-process yields made by the previous core He-burning, and the final results carry the signature of both processes. With decreasing metallicity, the contribution of the C-burning shell to the weak s-process rapidly decreases, because of the effect of the primary neutron poisons. On the other hand, the s-process efficiency in the He core also decreases with metallicity.

  15. Clean-up and disposal process of polluted sediments from urban rivers.

    PubMed

    He, P J; Shao, L M; Gu, G W; Bian, C L; Xu, C

    2001-10-01

    In this paper, the discussion concentrates on the properties of polluted sediments and on a combined clean-up and disposal process for the heavily polluted upper-layer sediments with good flowability. Based on systematic analyses of various clean-up processes, a suitable engineering process has been evaluated and recommended. The process has been applied to river reclamation in Yangpu District of Shanghai City, China. An improved centrifuge is used for dewatering the dredged sludge, which plays an important role in the combined clean-up and disposal process. The assessment of the engineering process shows its environmental and techno-economic feasibility to be much better than that of traditional dredging and disposal processes.

  16. Adopting software quality measures for healthcare processes.

    PubMed

    Yildiz, Ozkan; Demirörs, Onur

    2009-01-01

    In this study, we investigated the adoptability of software quality measures for healthcare process measurement. The quality measures of ISO/IEC 9126 are redefined from a process perspective to build a generic healthcare process quality measurement model. A case study research method is used, and the model is applied to a public hospital's Entry to Care process. After the application, weak and strong aspects of the process can be easily observed. Access auditability, fault removal, completeness of documentation, and machine utilization are weak aspects, and these are candidates for process improvement. On the other hand, functional completeness, fault ratio, input validity checking, response time, and throughput time are the strong aspects of the process.
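    To make the kind of measures named above concrete, here is a minimal sketch (not the authors' model) of ISO/IEC 9126-style ratio measures adapted to a process. The counts are hypothetical, and the formulas are illustrative adaptations, not the definitions used in the study.

    ```python
    # Illustrative sketch: quality measures as simple 0..1 ratios computed
    # from hypothetical audit counts of an "Entry to Care"-like process.

    def ratio(part, whole):
        """Return part/whole as a 0..1 ratio, guarding against division by zero."""
        return part / whole if whole else 0.0

    # Hypothetical observations (these numbers are invented for illustration)
    observed = {
        "audited_accesses": 40, "total_accesses": 100,   # access auditability
        "faults_removed": 18,   "faults_detected": 30,   # fault removal
        "documented_steps": 25, "total_steps": 28,       # documentation completeness
    }

    measures = {
        "access_auditability": ratio(observed["audited_accesses"], observed["total_accesses"]),
        "fault_removal": ratio(observed["faults_removed"], observed["faults_detected"]),
        "documentation_completeness": ratio(observed["documented_steps"], observed["total_steps"]),
    }

    for name, value in measures.items():
        print(f"{name}: {value:.2f}")  # low values flag improvement candidates
    ```

    In this sketch a low ratio (e.g. access auditability at 0.40) would mark the aspect as a candidate for process improvement, matching the weak/strong reading described in the abstract.
    
    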

  17. Business process modeling in healthcare.

    PubMed

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making focused on achieving their objective of providing quality medical assistance. This chapter describes the application of business process modelling using the Business Process Modelling Notation (BPMN) standard. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  18. Survey of the US materials processing and manufacturing in space program

    NASA Technical Reports Server (NTRS)

    Mckannan, E. C.

    1981-01-01

    To promote potential commercial applications of low-g technology, the materials processing and manufacturing in space program is structured to: (1) analyze the scientific principles of gravitational effects on processes used in producing materials; (2) apply the research toward the technology used to control production processes (on Earth or in space, as appropriate); and (3) establish the legal and managerial framework for commercial ventures. Currently federally funded NASA research is described, as well as agreements for privately funded commercial activity and a proposed academic participation process. The future scope of the program and related capabilities using ground-based facilities, aircraft, sounding rockets, and space shuttles are discussed. Areas of interest include crystal growth; solidification of metals and alloys; containerless processing; fluids and chemical processes (including biological separation processes); and processing of extraterrestrial materials.

  19. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that, amongst the realm of Poisson processes which are defined on the positive half-line and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes, with respect to physical randomness-based measures of statistical heterogeneity, is characterized by exponential Poissonian intensities.
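    As a concrete illustration of the power-law intensities described above, the following is a minimal sketch (not from the paper) of sampling a Poisson process on the positive half-line whose expected number of points above any level x is c*x**(-alpha), so that a finite maximal point exists almost surely. The parameter values and function name are arbitrary choices for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)  # seeded for reproducibility

    def sample_paretian_poisson(c=10.0, alpha=1.5, eps=1.0):
        """Sample the points above level eps of a Poisson process on the
        positive half-line with power-law intensity
            lambda(x) = c * alpha * x**(-alpha - 1),
        so the expected point count above any level x is c * x**(-alpha)."""
        n = rng.poisson(c * eps ** (-alpha))   # Poisson count of points above eps
        u = rng.uniform(size=n)
        # Conditional point positions are i.i.d. Pareto(eps, alpha),
        # sampled by inverse-CDF: x = eps * U**(-1/alpha)
        return eps * u ** (-1.0 / alpha)

    pts = sample_paretian_poisson()
    print(len(pts), "points above 1.0; largest (maximal point):",
          pts.max() if len(pts) else None)
    ```

    With these parameters the count above eps=1 is Poisson with mean 10, and because the tail intensity integrates to a finite value, only finitely many points lie above any level, which is why a maximal point exists.
    
    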

  20. Mobil process converts methanol to high-quality synthetic gasoline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, A.

    1978-12-11

    If production of gasoline from coal becomes commercially attractive in the United States, a process under development at Mobil Research and Development Corp. may compete with better-known coal liquefaction processes. The Mobil process converts methanol to high-octane, unleaded gasoline, and methanol can be produced commercially from coal. If gasoline is the desired product, the Mobil process offers strong technical and cost advantages over the H-coal, Exxon donor solvent, solvent-refined coal, and Fischer-Tropsch processes. The cost analysis, contained in a report to the Dept. of Energy, concludes that the Mobil process produces more expensive liquid products than any other liquefaction process except Fischer-Tropsch. But Mobil's process produces ready-to-use gasoline, while the others produce oils that require further expensive refining to yield gasoline. Disadvantages and advantages are discussed.
