Sample records for input field size

  1. Coarse-coded higher-order neural networks for PSRI object recognition [position, scale, and rotation invariant].

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1993-01-01

    A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.

  2. A Feedback Model of Attention Explains the Diverse Effects of Attention on Neural Firing Rates and Receptive Field Structure.

    PubMed

    Miconi, Thomas; VanRullen, Rufin

    2016-02-01

    Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.

  3. Designing Input Fields for Non-Narrative Open-Ended Responses in Web Surveys

    PubMed Central

    Couper, Mick P.; Kennedy, Courtney; Conrad, Frederick G.; Tourangeau, Roger

    2012-01-01

    Web surveys often collect information such as frequencies, currency amounts, dates, or other items requiring short structured answers in an open-ended format, typically using text boxes for input. We report on several experiments exploring design features of such input fields. We find little effect of the size of the input field on whether frequency or dollar amount answers are well-formed or not. By contrast, the use of templates to guide formatting significantly improves the well-formedness of responses to questions eliciting currency amounts. For date questions (whether month/year or month/day/year), we find that separate input fields improve the quality of responses over single input fields, while drop boxes further reduce the proportion of ill-formed answers. Drop boxes also reduce completion time when the list of responses is short (e.g., months), but marginally increase completion time when the list is long (e.g., birth dates). These results suggest that non-narrative open questions can be designed to help guide respondents to provide answers in the desired format. PMID:23411468

  4. Method and apparatus for wavefront sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahk, Seung-Whan

    A method for performing optical wavefront sensing includes providing an amplitude transmission mask having a light input side, a light output side, and an optical transmission axis passing from the light input side to the light output side. The amplitude transmission mask is characterized by a checkerboard pattern having a square unit cell of size Λ. The method also includes directing an incident light field having a wavelength λ to be incident on the light input side and propagating the incident light field through the amplitude transmission mask. The method further includes producing a plurality of diffracted light fields on the light output side and detecting, at a detector disposed a distance L from the amplitude transmission mask, an interferogram associated with the plurality of diffracted light fields.

  5. Digital 3D holographic display using scattering layers for enhanced viewing angle and image size

    NASA Astrophysics Data System (ADS)

    Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun

    2017-05-01

    In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.

  6. Modification of the fault logic circuit of a high-energy linear accelerator to accommodate selectively coded, large-field wedges.

    PubMed

    Miller, R W; van de Geijn, J

    1987-01-01

    A modification to the fault logic circuit that controls the collimator (COLL) fault is described. This modification permits the use of large-field wedges by adding an additional input into the reference voltage that determines the fault condition. The resistor controlling the amount of additional voltage is carried on board each wedge, within the wedge plug. This allows each wedge to determine its own, individual field size limit. Additionally, if no coding resistor is provided, the factory-supplied reference voltage is used, which sets the maximum allowable field size to 15 cm. This permits the use of factory-supplied wedges in conjunction with selected, large-field wedges, allowing proper sensing of the field size maximum in all conditions.

  7. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
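
    As background to the propagation step summarized above, the core of the angular spectrum approach is a single FFT-based transfer-function multiplication per output plane. The sketch below is a minimal, generic illustration of that step under assumed values (grid size, frequency, source geometry); it is not the authors' optimized implementation or their fast nearfield method input.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a 2-D input pressure plane p0 (sample spacing dx) a distance z
    by multiplying its angular spectrum with the propagation transfer function."""
    k = 2 * np.pi / wavelength
    ny, nx = p0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    # kz is real for propagating plane-wave components, imaginary for evanescent ones
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    H = np.exp(1j * kz * z)                  # angular spectrum transfer function
    return np.fft.ifft2(np.fft.fft2(p0) * H)

# Illustrative use only: a uniform circular piston sampled on a 128 x 128 grid
wavelength = 1.5e-3                          # ~1 MHz in water (c ~ 1500 m/s)
dx = wavelength / 4
x = (np.arange(128) - 64) * dx
X, Y = np.meshgrid(x, x)
p0 = (X**2 + Y**2 < (10 * dx)**2).astype(complex)
p_z = angular_spectrum_propagate(p0, dx, wavelength, z=20 * wavelength)
```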

  8. Interpretation of the microwave effect on induction time during CaSO4 primary nucleation by a cluster coagulation model

    NASA Astrophysics Data System (ADS)

    Guo, Zhichao; Li, Liye; Han, Wenxiang; Li, Jiawei; Wang, Baodong; Xiao, Yongfeng

    2017-10-01

    The effects of microwaves on the induction time of CaSO4 are studied experimentally and theoretically. In the experiments, calcium sulfate is precipitated by mixing aqueous CaCl2 solution and Na2SO4 solution. The induction time is measured by recording the change of turbidity in solution. Various energy inputs are used to investigate the effect of energy input on nucleation. The results show that the induction time decreases with increasing supersaturation and increasing energy input. Employing the classical nucleation theory, the interfacial tension is estimated. In addition, the microwave effects on nucleation order (n) and nucleation coefficient (kN) are also investigated, and the corresponding values of homogeneous nucleation are compared with the values of heterogeneous nucleation in the microwave field. A cluster coagulation model, which brings together the classic nucleation models and the theories describing the behavior of colloidal suspensions, was applied to estimate the induction time under various energy inputs. It is found that when nucleation is predominantly homogeneous, the microwave energy input does not change the number of monomers in the dominating clusters. When nucleation is predominantly heterogeneous, although the dominating cluster size increases with increasing supersaturation, at a given supersaturation level it remains constant in the microwave field.

  9. The Influence of Objects on Place Field Expression and Size in Distal Hippocampal CA1

    PubMed Central

    Burke, S.N.; Maurer, A.P.; Nematollahi, S.; Uprety, A.R.; Wallace, J.L.; Barnes, C.A.

    2012-01-01

    The perirhinal and lateral entorhinal cortices send prominent projections to the portion of the hippocampal CA1 subfield closest to the subiculum, but relatively little is known regarding the contributions of these cortical areas to hippocampal activity patterns. The anatomical connections of the lateral entorhinal and perirhinal cortices, as well as lesion data, suggest that these brain regions may contribute to the perception of complex stimuli such as objects. The current experiments investigated the degree to which 3-dimensional objects affect place field size and activity within the distal region (closest to the subiculum) of CA1. The activity of CA1 pyramidal cells was monitored as rats traversed a circular track that contained no objects in some conditions and 3-dimensional objects in other conditions. In the area of CA1 that receives direct lateral entorhinal input, three factors differentiated the objects-on-track conditions from the no-object conditions: more pyramidal cells expressed place fields when objects were present, adding or removing objects from the environment led to partial remapping in CA1, and the size of place fields decreased when objects were present. Additionally, a proportion of place fields remapped under conditions in which the object locations were shuffled, which suggests that at least some of the CA1 neurons’ firing patterns were sensitive to a particular object in a particular location. Together, these data suggest that the activity characteristics of neurons in the areas of CA1 receiving direct input from the perirhinal and lateral entorhinal cortices are modulated by non-spatial sensory input such as 3-dimensional objects. PMID:21365714

  10. PFGE MAPPER and PFGE READER: two tools to aid in the analysis and data input of pulse field gel electrophoresis maps.

    PubMed Central

    Shifman, M. A.; Nadkarni, P.; Miller, P. L.

    1992-01-01

    Pulse field gel electrophoresis mapping is an important technique for characterizing large segments of DNA. We have developed two tools to aid in the construction of pulse field electrophoresis gel maps: PFGE READER which stores experimental conditions and calculates fragment sizes and PFGE MAPPER which constructs pulse field gel electrophoresis maps. PMID:1482898

  11. Colloidal Bandpass and Bandgap Filters

    NASA Astrophysics Data System (ADS)

    Yellen, Benjamin; Tahir, Mukarram; Ouyang, Yuyu; Nori, Franco

    2013-03-01

    Thermally or deterministically-driven transport of objects through asymmetric potential energy landscapes (ratchet-based motion) is of considerable interest as models for biological transport and as methods for controlling the flow of information, material, and energy. Here, we provide a general framework for implementing a colloidal bandpass filter, in which particles of a specific size range can be selectively transported through a periodic lattice, whereas larger or smaller particles are dynamically trapped in closed orbits. Our approach is based on quasi-static (adiabatic) transitions in a tunable potential energy landscape formed by combining a multi-frequency magnetic field input signal with the static field of a spatially periodic magnetization. By tuning the phase shifts between the input signal and the relative forcing coefficients, large-sized particles may experience no local energy barriers, medium-sized particles experience only one local energy barrier, and small-sized particles experience two local energy barriers. The odd symmetry present in this system can be used to nudge the medium-sized particles along an open pathway, whereas the large or small beads remain trapped in a closed orbit, leading to a bandpass filter, and vice versa for a bandgap filter. NSF CMMI - 0800173, Youth 100 Scholars Fund

  12. Extended Characterization of the Common-Source and Common-Gate Amplifiers using a Metal-Ferroelectric-Semiconductor Field Effect Transistor

    NASA Technical Reports Server (NTRS)

    Hunt, Mitchell; Sayyah, Rana; Mitchell, Cody; Laws, Crystal; MacLeod, Todd C.; Ho, Fat D.

    2013-01-01

    Collected data for both common-source and common-gate amplifiers is presented in this paper. Characterizations of the two amplifier circuits using metal-ferroelectric-semiconductor field effect transistors (MFSFETs) are developed with wider input frequency ranges and varying device sizes compared to earlier characterizations. The effects of the ferroelectric layer's capacitance and of variations in load, quiescent point, and input signal on each circuit are discussed. Comparisons between the MFSFET and MOSFET circuit operation and performance are discussed at length, as well as applications and advantages of the MFSFETs.

  13. Hill Ciphers over Near-Fields

    ERIC Educational Resources Information Center

    Farag, Mark

    2007-01-01

    Hill ciphers are linear codes that use as input a "plaintext" vector p of size n, which is encrypted with an invertible n × n matrix E to produce a "ciphertext" vector c = E · p. Informally, a near-field is a triple ⟨N; +, *⟩ that…
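
    For concreteness, the sketch below works through the classical Hill cipher over the integers mod 26; the article itself generalizes the construction to near-fields, which is not shown here. The key matrix and message block are made-up examples.

```python
import numpy as np

# Classical Hill cipher: ciphertext block c = E @ p (mod 26) for a plaintext block p of size n.
E = np.array([[3, 3],
              [2, 5]])          # key matrix, invertible mod 26 (det = 9, gcd(9, 26) = 1)
p = np.array([7, 8])            # plaintext block "HI" encoded as (7, 8) with A = 0
c = E @ p % 26                  # ciphertext block

# Decryption uses the inverse of E mod 26: E_inv = det_inv * adj(E) (mod 26)
det = int(round(np.linalg.det(E))) % 26
det_inv = pow(det, -1, 26)      # modular inverse of the determinant (Python 3.8+)
adj = np.array([[ E[1, 1], -E[0, 1]],
                [-E[1, 0],  E[0, 0]]])
E_inv = det_inv * adj % 26
assert np.array_equal(E_inv @ c % 26, p)   # recovers the plaintext block
```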

  14. Field-scale experiments reveal persistent yield gaps in low-input and organic cropping systems

    PubMed Central

    Kravchenko, Alexandra N.; Snapp, Sieglinde S.; Robertson, G. Philip

    2017-01-01

    Knowledge of production-system performance is largely based on observations at the experimental plot scale. Although yield gaps between plot-scale and field-scale research are widely acknowledged, their extent and persistence have not been experimentally examined in a systematic manner. At a site in southwest Michigan, we conducted a 6-y experiment to test the accuracy with which plot-scale crop-yield results can inform field-scale conclusions. We compared conventional versus alternative, that is, reduced-input and biologically based–organic, management practices for a corn–soybean–wheat rotation in a randomized complete block-design experiment, using 27 commercial-size agricultural fields. Nearby plot-scale experiments (0.02-ha to 1.0-ha plots) provided a comparison of plot versus field performance. We found that plot-scale yields well matched field-scale yields for conventional management but not for alternative systems. For all three crops, at the plot scale, reduced-input and conventional managements produced similar yields; at the field scale, reduced-input yields were lower than conventional. For soybeans at the plot scale, biological and conventional managements produced similar yields; at the field scale, biological yielded less than conventional. For corn, biological management produced lower yields than conventional in both plot- and field-scale experiments. Wheat yields appeared to be less affected by the experimental scale than corn and soybean. Conventional management was more resilient to field-scale challenges than alternative practices, which were more dependent on timely management interventions; in particular, mechanical weed control. Results underscore the need for much wider adoption of field-scale experimentation when assessing new technologies and production-system performance, especially as related to closing yield gaps in organic farming and in low-resourced systems typical of much of the developing world. PMID:28096409

  15. Field-scale experiments reveal persistent yield gaps in low-input and organic cropping systems.

    PubMed

    Kravchenko, Alexandra N; Snapp, Sieglinde S; Robertson, G Philip

    2017-01-31

    Knowledge of production-system performance is largely based on observations at the experimental plot scale. Although yield gaps between plot-scale and field-scale research are widely acknowledged, their extent and persistence have not been experimentally examined in a systematic manner. At a site in southwest Michigan, we conducted a 6-y experiment to test the accuracy with which plot-scale crop-yield results can inform field-scale conclusions. We compared conventional versus alternative, that is, reduced-input and biologically based-organic, management practices for a corn-soybean-wheat rotation in a randomized complete block-design experiment, using 27 commercial-size agricultural fields. Nearby plot-scale experiments (0.02-ha to 1.0-ha plots) provided a comparison of plot versus field performance. We found that plot-scale yields well matched field-scale yields for conventional management but not for alternative systems. For all three crops, at the plot scale, reduced-input and conventional managements produced similar yields; at the field scale, reduced-input yields were lower than conventional. For soybeans at the plot scale, biological and conventional managements produced similar yields; at the field scale, biological yielded less than conventional. For corn, biological management produced lower yields than conventional in both plot- and field-scale experiments. Wheat yields appeared to be less affected by the experimental scale than corn and soybean. Conventional management was more resilient to field-scale challenges than alternative practices, which were more dependent on timely management interventions; in particular, mechanical weed control. Results underscore the need for much wider adoption of field-scale experimentation when assessing new technologies and production-system performance, especially as related to closing yield gaps in organic farming and in low-resourced systems typical of much of the developing world.

  16. The role of size of input box, location of input box, input method and display size in Chinese handwriting performance and preference on mobile devices.

    PubMed

    Chen, Zhe; Rau, Pei-Luen Patrick

    2017-03-01

    This study presented two experiments on Chinese handwriting performance (time, accuracy, the number of protruding strokes and number of rewritings) and subjective ratings (mental workload, satisfaction, and preference) on mobile devices. Experiment 1 evaluated the effects of size of the input box, input method and display size on Chinese handwriting performance and preference. It was indicated that the optimal input sizes were 30.8 × 30.8 mm, 46.6 × 46.6 mm, 58.9 × 58.9 mm and 84.6 × 84.6 mm for devices with 3.5-inch, 5.5-inch, 7.0-inch and 9.7-inch display sizes, respectively. Experiment 2 proved the significant effects of location of the input box, input method and display size on Chinese handwriting performance and subjective ratings. It was suggested that the optimal location was central regardless of display size and input method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. A Radio-frequency Coupling Network for Heating of Citrate-coated Gold Nanoparticles for Cancer Therapy: Design and Analysis

    PubMed Central

    Kruse, Dustin E.; Stephens, Douglas N.; Lindfors, Heather A.; Ingham, Elizabeth S.; Paoli, Eric E.; Ferrara, Katherine W.

    2012-01-01

    Gold nanoparticles (GNPs) are non-toxic, can be functionalized with ligands, and preferentially accumulate in tumors. We have developed a 13.56 MHz radiofrequency-electromagnetic field (RF-EM) delivery system capable of generating high electric field strengths required for non-invasive, non-contact heating of GNPs. The bulk heating and specific heating rates were measured as a function of NP size and concentration. It was found that heating is both size and concentration dependent, with 5 nm particles producing a 50.6±0.2°C temperature rise in 30 s for 25 μg/mL gold (125 W input). The specific heating rate was also size and concentration dependent, with 5 nm particles producing a specific heating rate of 356±78 kW/g gold at 16 μg/mL (125 W input). Furthermore, we demonstrate that cancer cells incubated with GNPs are killed when exposed to 13.56 MHz RF-EM fields. Compared to cells that were not incubated with GNPs, 3 out of 4 RF-treated groups showed a significant enhancement of cell death with GNPs (p<0.05). GNP-enhanced cell killing appears to require temperatures above 50°C for the experimental parameters used in this study. Transmission electron micrographs show extensive vacuolization with the combination of GNPs and RF treatment. PMID:21402506

  18. Update on the recommended viewing protocol for FAXIL threshold contrast detail detectability test objects used in television fluoroscopy.

    PubMed

    Launders, J H; McArdle, S; Workman, A; Cowen, A R

    1995-01-01

    The significance of varying the viewing conditions that may affect the perceived threshold contrast of X-ray television fluoroscopy systems has been investigated. Factors investigated include the ambient room lighting and the viewing distance. The purpose of this study is to find the optimum viewing protocol with which to measure the threshold detection index. This is a particular problem when trying to compare the image quality of television fluoroscopy systems in different input field sizes. The results show that the viewing distance makes a significant difference to the perceived threshold contrast, whereas the ambient light conditions make no significant difference. Experienced observers were found to be capable of finding the optimum viewing distance for detecting details of each size, in effect using a flexible viewing distance. This allows the results from different field sizes to be normalized to account for both the magnification and the entrance air kerma rate differences, which in turn allow for a direct comparison of performance in different field sizes.

  19. A Comprehensive Model of Electric-Field-Enhanced Jumping-Droplet Condensation on Superhydrophobic Surfaces.

    PubMed

    Birbarah, Patrick; Li, Zhaoer; Pauls, Alexander; Miljkovic, Nenad

    2015-07-21

    Superhydrophobic micro/nanostructured surfaces for dropwise condensation have recently received significant attention due to their potential to enhance heat transfer performance by shedding positively charged water droplets via coalescence-induced droplet jumping at length scales below the capillary length and allowing the use of external electric fields to enhance droplet removal and heat transfer, in what has been termed electric-field-enhanced (EFE) jumping-droplet condensation. However, achieving optimal EFE conditions for enhanced heat transfer requires capturing details of the transport processes that are currently lacking. While a comprehensive model has been developed for condensation on micro/nanostructured surfaces, it cannot be applied for EFE condensation due to the dynamic droplet-vapor-electric field interactions. In this work, we developed a comprehensive physical model for EFE condensation on superhydrophobic surfaces by incorporating individual droplet motion, electrode geometry, jumping frequency, field strength, and condensate vapor-flow dynamics. As a first step toward our model, we simulated jumping droplet motion with no external electric field and validated our theoretical droplet trajectories to experimentally obtained trajectories, showing excellent temporal and spatial agreement. We then incorporated the external electric field into our model and considered the effects of jumping droplet size, electrode size and geometry, condensation heat flux, and droplet jumping direction. Our model suggests that smaller jumping droplet sizes and condensation heat fluxes require less work input to be removed by the external fields. Furthermore, the results suggest that EFE electrodes can be optimized such that the work input is minimized depending on the condensation heat flux. To analyze overall efficiency, we defined an incremental coefficient of performance and showed that it is very high (∼10^6) for EFE condensation. We finally proposed mechanisms for condensate collection which would ensure continuous operation of the EFE system and which can scalably be applied to industrial condensers. This work provides a comprehensive physical model of the EFE condensation process and offers guidelines for the design of EFE systems to maximize heat transfer.

  20. Graphical User Interface for the NASA FLOPS Aircraft Performance and Sizing Code

    NASA Technical Reports Server (NTRS)

    Lavelle, Thomas M.; Curlett, Brian P.

    1994-01-01

    XFLOPS is an X-Windows/Motif graphical user interface for the aircraft performance and sizing code FLOPS. This new interface simplifies entering data and analyzing results, thereby reducing analysis time and errors. Data entry is simpler because input windows are used for each of the FLOPS namelists. These windows contain fields to input the variable's values along with help information describing the variable's function. Analyzing results is simpler because output data are displayed rapidly. This is accomplished in two ways. First, because the output file has been indexed, users can view particular sections with the click of a mouse button. Second, because menu picks have been created, users can plot engine and aircraft performance data. In addition, XFLOPS has a built-in help system and complete on-line documentation for FLOPS.

  1. Deformation fields near a steady fatigue crack with anisotropic plasticity

    DOE PAGES

    Gao, Yanfei

    2015-11-30

    In this work, from finite element simulations based on an irreversible, hysteretic cohesive interface model, a steady fatigue crack can be realized if the crack extension exceeds about twice the plastic zone size, and both the crack increment per loading cycle and the crack bridging zone size are smaller than the plastic zone size. The corresponding deformation fields develop a plastic wake behind the crack tip and a compressive residual stress field ahead of the crack tip. In addition, Hill's plasticity model is used to study the role of plastic anisotropy on the retardation of fatigue crack growth and the elastic strain fields. It is found that for Mode-I cyclic loading, an enhanced yield stress in directions that are inclined from the crack plane will lead to slower crack growth rate, but this retardation is insignificant for typical degrees of plastic anisotropy. Furthermore, these results provide key inputs for future comparisons to neutron and synchrotron diffraction measurements that provide full-field lattice strain mapping near fracture and fatigue crack tips, especially in textured materials such as wrought or rolled Mg alloys.

  2. Deformation fields near a steady fatigue crack with anisotropic plasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Yanfei

    In this work, from finite element simulations based on an irreversible, hysteretic cohesive interface model, a steady fatigue crack can be realized if the crack extension exceeds about twice the plastic zone size, and both the crack increment per loading cycle and the crack bridging zone size are smaller than the plastic zone size. The corresponding deformation fields develop a plastic wake behind the crack tip and a compressive residual stress field ahead of the crack tip. In addition, Hill's plasticity model is used to study the role of plastic anisotropy on the retardation of fatigue crack growth and the elastic strain fields. It is found that for Mode-I cyclic loading, an enhanced yield stress in directions that are inclined from the crack plane will lead to slower crack growth rate, but this retardation is insignificant for typical degrees of plastic anisotropy. Furthermore, these results provide key inputs for future comparisons to neutron and synchrotron diffraction measurements that provide full-field lattice strain mapping near fracture and fatigue crack tips, especially in textured materials such as wrought or rolled Mg alloys.

  3. Size Distributions of Solar Proton Events: Methodological and Physical Restrictions

    NASA Astrophysics Data System (ADS)

    Miroshnichenko, L. I.; Yanke, V. G.

    2016-12-01

    Based on the new catalogue of solar proton events (SPEs) for the period of 1997 - 2009 (Solar Cycle 23) we revisit the long-studied problem of the event-size distributions in the context of those constructed for other solar-flare parameters. Recent results on the problem of size distributions of solar flares and proton events are briefly reviewed. Even a cursory acquaintance with this research field reveals a rather mixed and controversial picture. We concentrate on three main issues: i) SPE size distribution for > 10 MeV protons in Solar Cycle 23; ii) size distribution of > 1 GV proton events in 1942 - 2014; iii) variations of annual numbers for > 10 MeV proton events on long time scales (1955 - 2015). Different results are critically compared; most of the studies in this field are shown to suffer from vastly different input datasets as well as from insufficient knowledge of underlying physical processes in the SPEs under consideration. New studies in this field should be made on more distinct physical and methodological bases. It is important to note the evident similarity in size distributions of solar flares and superflares in Sun-like stars.

  4. Applicability of empirical data currently used in predicting solid propellant exhaust plumes

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.; Greenwood, T.; Roberts, B. B.

    1977-01-01

    Theoretical and experimental approaches to exhaust plume analysis are compared. A two-phase model is extended to include treatment of reacting gas chemistry, and thermodynamical modeling of the gaseous phase of the flow field is considered. The applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size, and particle size distributions is investigated. Experimental and analytical comparisons are presented for subscale solid rocket motors operating at three altitudes with attention to pitot total pressure and stagnation point heating rate measurements. The mathematical treatment input requirements are explained. The two-phase flow field solution adequately predicts gasdynamic properties in the inviscid portion of two-phase exhaust plumes. It is found that prediction of exhaust plume gas pressures requires an adequate model of flow field dynamics.

  5. HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.

    1994-01-01

    Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on just one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited by the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30 Mb). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the first, but it also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in C language, and was originally developed for Sun3 and Sun4 series computers. Both graphic and command line versions of the program are provided. The command line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computers running VMS. The graphic version requires the SunTools windowing environment, and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1 Mb of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
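
    As a rough illustration of the coarse coding scheme summarized above (representing one large input field as a set of overlapping but offset coarser fields), the sketch below downsamples a large binary image onto several staggered coarse grids. The downsampling factor, number of offset fields, and image contents are assumptions for illustration only, not HONTIOR's actual parameters.

```python
import numpy as np

def coarse_code(image, factor, n_fields):
    """Represent a large binary image as n_fields coarser images, each a factor x factor
    downsampling taken at a different diagonal offset.  A fine-grid pixel is implied by
    the intersection of the coarse cells that contain it, which is the coarse coding idea."""
    fields = []
    for k in range(n_fields):
        offset = k * factor // n_fields               # stagger the coarse grids
        shifted = np.roll(image, (-offset, -offset), axis=(0, 1))
        h, w = shifted.shape
        blocks = shifted[:h - h % factor, :w - w % factor]
        blocks = blocks.reshape(h // factor, factor, w // factor, factor)
        fields.append(blocks.max(axis=(1, 3)))        # a coarse cell is "on" if any pixel is on
    return fields

# Illustrative numbers only: a 4096 x 4096 field coded as 8 offset 512 x 512 fields
big = np.zeros((4096, 4096), dtype=np.uint8)
big[1000:1100, 2000:2100] = 1                         # a hypothetical object
coarse_fields = coarse_code(big, factor=8, n_fields=8)
```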

  6. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
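
    A minimal sketch of the MVFOSM idea described above: the variance of a transport estimate is approximated by summing each input variable's variance weighted by the squared partial derivative at the input means, which yields a ranking of input variables by their contribution. The example relation and the input means and variances below are hypothetical placeholders, not the bed load equations or field data used in the study.

```python
import numpy as np

def mvfosm(f, means, variances, eps=1e-6):
    """Mean Value First Order Second Moment: approximate Var[f] as
    sum_i (df/dx_i)^2 * Var(x_i) with derivatives evaluated at the input means,
    and return each input's fractional contribution to the output variance."""
    names = list(means)
    mu = np.array([means[n] for n in names], dtype=float)
    contributions = {}
    for i, n in enumerate(names):
        h = eps * max(abs(mu[i]), 1.0)
        x_hi, x_lo = mu.copy(), mu.copy()
        x_hi[i] += h
        x_lo[i] -= h
        dfdx = (f(*x_hi) - f(*x_lo)) / (2 * h)   # central finite difference
        contributions[n] = dfdx**2 * variances[n]
    total = sum(contributions.values())
    return {n: c / total for n, c in contributions.items()}

# Hypothetical transport-style relation (illustrative only): q_b = 0.05 (S q)^1.5 / d50^0.5
f = lambda slope, discharge, d50: 0.05 * (slope * discharge) ** 1.5 / d50 ** 0.5
shares = mvfosm(f,
                means={"slope": 0.02, "discharge": 5.0, "d50": 0.08},
                variances={"slope": 1e-5, "discharge": 0.25, "d50": 4e-4})
```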

  7. Spatial and Temporal Extrapolation of Disdrometer Size Distributions Based on a Lagrangian Trajectory Model of Falling Rain

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.

    2009-01-01

    Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.

  8. Analysis and characterization of microwave plasma generated with rectangular all-dielectric resonators

    NASA Astrophysics Data System (ADS)

    Kourtzanidis, K.; Raja, L. L.

    2017-04-01

    We report on a computational modeling study of small scale plasma discharge formation with rectangular dielectric resonators (DR). An array of rectangular dielectric slabs, separated by a gap of millimeter dimensions, is used to provide resonant response when illuminated by an incident wave of 1.26 GHz. A coupled electromagnetic (EM) wave-plasma model is used to describe the breakdown, early response and steady state of the argon discharge. We characterize the plasma generation with respect to the input power, background gas pressure and gap size. It is found that the plasma discharge is generated mainly inside the gaps between the DR at positions that correspond to the antinodes of the resonant enhanced electric field pattern. The enhancement of the electric field inside the gaps is due to a combination of leaking and displacement current radiation from the DR. The plasma is sustained in over-critical densities due to the large skin depth with respect to the gap and plasma size. Electron densities are calculated on the order of 10^18–10^19 m^-3 for a gas pressure of 10 Torr, while they exceed 10^20 m^-3 at atmospheric conditions. Increase of input power leads to more intense ionization and thus faster plasma formation, and results in a more symmetric plasma pattern. For low background gas pressure the discharge is diffusive and extends away from the gap region while in high pressure it is constricted inside the gap. An optimal gap size can be found to provide maximum EM energy transfer to the plasma. This fact demonstrates that the gap size dictates to a certain extent the resonant frequency and the Q-factor of the dielectric array, and the breakdown fields cannot be determined in a straightforward way but are functions of the resonators' geometry and incident field frequency.

  9. Direct slow-light excitation in photonic crystal waveguides forming ultra-compact splitters.

    PubMed

    Zhang, Min; Groothoff, Nathaniel; Krüger, Asger Christian; Shi, Peixing; Kristensen, Martin

    2011-04-11

    Based on a series of 1x2 beam splitters, novel direct excitation of slow-light from input- to output-region in photonic crystal waveguides is investigated theoretically and experimentally. The study shows that the slow-light excitation provides over 50 nm bandwidth for TE-polarized light splitting between two output ports, and co-exists with self-imaging leading to ~20 nm extra bandwidth. The intensity of the direct excitation is qualitatively explained by the overlap integral of the magnetic fields between the ground input- and excited output-modes. The direct excitation of slow light is practically lossless compared with transmission in a W1 photonic crystal waveguide, which broadens the application field for slow light and further minimizes the size of a 1x2 splitter. © 2011 Optical Society of America

  10. User manual for semi-circular compact range reflector code: Version 2

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1987-01-01

    A computer code has been developed at the Ohio State University ElectroScience Laboratory to analyze a semi-circular paraboloidal reflector with or without a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the reflector or its individual components at a given distance from the center of the paraboloid. The code computes the fields along a radial, horizontal, vertical or axial cut at that distance. Thus, it is very effective in computing the size of the sweet spot for a semi-circular compact range reflector. This report describes the operation of the code. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.

  11. Multi-flux-transformer MRI detection with an atomic magnetometer.

    PubMed

    Savukov, Igor; Karaulanov, Todor

    2014-12-01

    Recently, anatomical ultra-low field (ULF) MRI has been demonstrated with an atomic magnetometer (AM). A flux-transformer (FT) has been used for decoupling MRI fields and gradients to avoid their negative effects on AM performance. The field of view (FOV) was limited because of the need to compromise between the size of the FT input coil and MRI sensitivity per voxel. Multi-channel acquisition is a well-known solution to increase FOV without significantly reducing sensitivity. In this paper, we demonstrate twofold FOV increase with the use of three FT input coils. We also show that it is possible to use a single atomic magnetometer and single acquisition channel to acquire three independent MRI signals by applying a frequency-encoding gradient along the direction of the detection array span. The approach can be generalized to more channels and can be critical for imaging applications of non-cryogenic ULF MRI where FOV needs to be large, including head, hand, spine, and whole-body imaging. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Multi-flux-transformer MRI detection with an atomic magnetometer

    PubMed Central

    Savukov, Igor; Karaulanov, Todor

    2014-01-01

    Recently, anatomical ultra-low field (ULF) MRI has been demonstrated with an atomic magnetometer (AM). A flux-transformer (FT) has been used for decoupling MRI fields and gradients to avoid their negative effects on AM performance. The field of view (FOV) was limited because of the need to compromise between the size of the FT input coil and MRI sensitivity per voxel. Multi-channel acquisition is a well-known solution to increase FOV without significantly reducing sensitivity. In this paper, we demonstrate two-fold FOV increase with the use of three FT input coils. We also show that it is possible to use a single atomic magnetometer and single acquisition channel to acquire three independent MRI signals by applying a frequency-encoding gradient along the direction of the detection array span. The approach can be generalized to more channels and can be critical for imaging applications of non-cryogenic ULF MRI where FOV needs to be large, including head, hand, spine, and whole-body imaging. PMID:25462946

  13. Bas-relief generation using adaptive histogram equalization.

    PubMed

    Sun, Xianfang; Rosin, Paul L; Martin, Ralph R; Langbein, Frank C

    2009-01-01

    An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to also use gradient weights, to enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height differences from having too great an effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves original shape features. Experiments show that our results are, in general, comparable to and in some cases better than the best previously published methods.
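
    A minimal sketch of the general approach described above, assuming scikit-image's standard CLAHE routine (equalize_adapthist) as a stand-in for the authors' modified, gradient-weighted AHE: the height field is normalized, equalized over several neighborhood sizes, averaged, and rescaled to a chosen relief depth. The synthetic height field and parameter values are illustrative assumptions.

```python
import numpy as np
from skimage import exposure

def bas_relief(height, kernel_sizes=(32, 64, 128), out_range=2.0):
    """Compress a height field into a shallow bas-relief: apply adaptive histogram
    equalization at several neighborhood sizes, average the results to keep detail
    at different scales, and rescale to a user-chosen height range."""
    h = (height - height.min()) / (height.max() - height.min() + 1e-12)  # normalize to [0, 1]
    layers = [exposure.equalize_adapthist(h, kernel_size=k) for k in kernel_sizes]
    relief = np.mean(layers, axis=0)
    return relief * out_range                                            # final relief depth

# Illustrative input: a synthetic height field with large- and small-scale detail
y, x = np.mgrid[0:512, 0:512] / 512.0
height = 5.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05) + 0.2 * np.sin(40 * x)
relief = bas_relief(height)
```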

  14. Statistical Tools for Determining Fitness to Fly

    DTIC Science & Technology

    1981-09-01

    [Abstract garbled in extraction. Recoverable fragments describe the program's card-input layout (a 13-card input file whose first field holds the real variable EFAIL, the "average # of failures for size of control…"), routines that compute survival probabilities and frequency tables, and flow charts beginning with the input of EFAIL, CYEAR, NVAR, NAV, and XINC.]

  15. User's manual for semi-circular compact range reflector code

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1986-01-01

    A computer code was developed to analyze a semi-circular paraboloidal reflector antenna with a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the antenna or its individual components at a given distance from the center of the paraboloid. Thus, it is very effective in computing the size of the sweet spot for RCS or antenna measurement. The operation of the code is described. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.

  16. Quantitative Image Restoration in Bright Field Optical Microscopy.

    PubMed

    Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús

    2017-11-07

    Bright field (BF) optical microscopy is regarded as a poor method to observe unstained biological samples due to intrinsic low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantifying the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  17. National Atmospheric Release Advisory Center dispersion modeling of the Full-scale Radiological Dispersal device (FSRDD) field trials

    DOE PAGES

    Neuscamman, Stephanie J.; Yu, Kristen L.

    2016-05-01

    The results of the National Atmospheric Release Advisory Center (NARAC) model simulations are compared to measured data from the Full-Scale Radiological Dispersal Device (FSRDD) field trials. The series of explosive radiological dispersal device (RDD) experiments was conducted in 2012 by Defence Research and Development Canada (DRDC) and collaborating organizations. During the trials, a wealth of data was collected, including a variety of deposition and air concentration measurements. The experiments were conducted with one of the stated goals being to provide measurements to atmospheric dispersion modelers. These measurements can be used to facilitate important model validation studies. For this study, meteorological observations recorded during the tests are input to the diagnostic meteorological model, ADAPT, which provides 3-D, time-varying mean wind and turbulence fields to the LODI dispersion model. LODI concentration and deposition results are compared to the measured data, and the sensitivity of the model results to changes in input conditions (such as the particle activity size distribution of the source) and model physics (such as the rise of the buoyant cloud of explosive products) is explored. The NARAC simulations predicted the experimentally measured deposition results reasonably well considering the complexity of the release. Lastly, changes to the activity size distribution of the modeled particles can improve the agreement of the model results to measurement.

  18. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  19. Machine Learning Classification of Heterogeneous Fields to Estimate Physical Responses

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Akhriev, A.; Alzate, C.; Zhuk, S.

    2017-12-01

    The promise of machine learning to enhance physics-based simulation is examined here using the transient pressure response to a pumping well in a heterogeneous aquifer. 10,000 random fields of log10 hydraulic conductivity (K) are created and conditioned on a single K measurement at the pumping well. Each K-field is used as input to a forward simulation of drawdown (pressure decline). The differential equations governing groundwater flow to the well serve as a non-linear transform of the input K-field to an output drawdown field. The results are stored and the data set is split into training and testing sets for classification. A Euclidean distance measure between any two fields is calculated and the resulting distances between all pairs of fields define a similarity matrix. Similarity matrices are calculated for both input K-fields and the resulting drawdown fields at the end of the simulation. The similarity matrices are then used as input to spectral clustering to determine groupings of similar input and output fields. Additionally, the similarity matrix is used as input to multi-dimensional scaling to visualize the clustering of fields in lower dimensional spaces. We examine the ability to cluster both input K-fields and output drawdown fields separately with the goal of identifying K-fields that create similar drawdowns and, conversely, given a set of simulated drawdown fields, identify meaningful clusters of input K-fields. Feature extraction based on statistical parametric mapping provides insight into what features of the fields drive the classification results. The final goal is to successfully classify input K-fields into the correct output class, and also, given an output drawdown field, be able to infer the correct class of input field that created it.
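
    A minimal sketch of the clustering pipeline described above (pairwise Euclidean distances between fields, a similarity matrix, spectral clustering, and multi-dimensional scaling for visualization), assuming standard SciPy and scikit-learn routines; the random stand-in fields, Gaussian kernel bandwidth, and number of clusters are placeholder assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import SpectralClustering
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
fields = rng.normal(size=(200, 32 * 32))       # 200 flattened stand-in log10(K) fields

# Pairwise Euclidean distances between fields, converted to a similarity matrix
D = squareform(pdist(fields, metric="euclidean"))
sigma = np.median(D)                           # kernel bandwidth (assumed)
S = np.exp(-(D / sigma) ** 2)

# Spectral clustering on the precomputed similarity matrix
labels = SpectralClustering(n_clusters=5, affinity="precomputed",
                            random_state=0).fit_predict(S)

# Multi-dimensional scaling of the distance matrix for a 2-D visualization of the clusters
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
```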

  20. Voltage sensing systems and methods for passive compensation of temperature related intrinsic phase shift

    DOEpatents

    Davidson, James R.; Lassahn, Gordon D.

    2001-01-01

    A small sized electro-optic voltage sensor capable of accurate measurement of high levels of voltages without contact with a conductor or voltage source is provided. When placed in the presence of an electric field, the sensor receives an input beam of electromagnetic radiation into the sensor. A polarization beam displacer serves as a filter to separate the input beam into two beams with orthogonal linear polarizations. The beam displacer is oriented in such a way as to rotate the linearly polarized beams such that they enter a Pockels crystal at a preferred angle of 45 degrees. The beam displacer is therefore capable of causing a linearly polarized beam to impinge a crystal at a desired angle independent of temperature. The Pockels electro-optic effect induces a differential phase shift on the major and minor axes of the input beam as it travels through the Pockels crystal, which causes the input beam to be elliptically polarized. A reflecting prism redirects the beam back through the crystal and the beam displacer. On the return path, the polarization beam displacer separates the elliptically polarized beam into two output beams of orthogonal linear polarization representing the major and minor axes. In crystals that introduce a phase differential attributable to temperature, a compensating crystal is provided to cancel the effect of temperature on the phase differential of the input beam. The system may include a detector for converting the output beams into electrical signals, and a signal processor for determining the voltage based on an analysis of the output beams. The output beams are amplitude modulated by the frequency of the electric field and the amplitude of the output beams is proportional to the magnitude of the electric field, which is related to the voltage being measured.

  1. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.

  2. Comparative study of bolometric and non-bolometric switching elements for microwave phase shifters

    NASA Technical Reports Server (NTRS)

    Tabib-Azar, Massood; Bhasin, Kul B.; Romanofsky, Robert R.

    1991-01-01

    The performance of semiconductor and high critical temperature superconductor switches is compared as they are used in delay-line-type microwave and millimeter-wave phase shifters. Such factors as their ratios of the off-to-on resistances, parasitic reactances, power consumption, speed, input-to-output isolation, ease of fabrication, and physical dimensions are compared. Owing to their almost infinite off-to-on resistance ratio and excellent input-to-output isolation, bolometric superconducting switches appear to be quite suitable for use in microwave phase shifters; their only drawbacks are their speed and size. The SUPERFET, a novel device whose operation is based on the electric field effect in high critical temperature ceramic superconductors, is also discussed. Preliminary results indicate that the SUPERFET is fast and that it can be scaled; therefore, it can be fabricated with dimensions comparable to semiconductor field-effect transistors.

  3. Applications of remote sensing, volume 1

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. ECHO successfully exploits the redundancy of states characteristic of sampled imagery of ground scenes to achieve better classification accuracy, reduce the number of classifications required, and reduce the variability of classification results. The information required to produce ECHO classifications consists of cell size, cell homogeneity, cell-to-field annexation parameters, input data, and a class conditional marginal density statistics deck.

  4. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond, available for predicting energy input for grinding pine wood chips, were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to a moisture content of 11.7% (wet basis). The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. The geometric mean size of the particles was calculated using two methods: (1) Tyler sieves and particle size analysis, and (2) the Sauter mean diameter calculated from the ratio of volume to surface area, estimated from measured length and width. The two mean diameters agreed well, indicating that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.

  5. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE PAGES

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao; ...

    2016-01-05

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond, available for predicting energy input for grinding pine wood chips, were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to a moisture content of 11.7% (wet basis). The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. The geometric mean size of the particles was calculated using two methods: (1) Tyler sieves and particle size analysis, and (2) the Sauter mean diameter calculated from the ratio of volume to surface area, estimated from measured length and width. The two mean diameters agreed well, indicating that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.
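
    The three size-reduction laws named in the two records above each have a one-parameter integrated form, E = c (1/x2 - 1/x1) for Rittinger, E = c ln(x1/x2) for Kick, and E = c (1/sqrt(x2) - 1/sqrt(x1)) for Bond, so testing them amounts to a one-parameter fit per law. The sketch below fits all three to made-up (feed size, product size, energy) points; the data are placeholders, not the pine-chip measurements.

        # Minimal sketch: one-parameter fits of the Rittinger, Kick, and Bond laws.
        import numpy as np
        from scipy.optimize import curve_fit

        x1 = np.array([25.4, 25.4, 25.4, 25.4])     # feed (screen) size, mm  -- hypothetical
        x2 = np.array([6.3, 3.0, 1.6, 0.9])         # product mean size, mm   -- hypothetical
        E = np.array([5.0, 20.0, 45.0, 90.0])       # specific energy, J/g    -- hypothetical

        def rittinger(x, c):                        # energy ~ new surface area created
            feed, prod = x
            return c * (1.0 / prod - 1.0 / feed)

        def kick(x, c):                             # energy ~ size-reduction ratio
            feed, prod = x
            return c * np.log(feed / prod)

        def bond(x, c):                             # intermediate (crack-length) behaviour
            feed, prod = x
            return c * (1.0 / np.sqrt(prod) - 1.0 / np.sqrt(feed))

        for name, law in (("Rittinger", rittinger), ("Kick", kick), ("Bond", bond)):
            (c,), _ = curve_fit(law, (x1, x2), E)
            r2 = 1.0 - np.var(E - law((x1, x2), c)) / np.var(E)
            print(f"{name:9s} constant = {c:8.2f}   R^2 = {r2:.3f}")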

  6. Optimizing photophoresis and asymmetric force fields for grading of Brownian particles.

    PubMed

    Neild, Adrian; Ng, Tuck Wah; Woods, Timothy

    2009-12-10

    We discuss a scheme that incorporates restricted spatial input location, orthogonal sort, and movement direction features, with particle sorting achieved by using an asymmetric potential cycled on and off, while movement is accomplished by photophoresis. Careful investigation shows that the probability of sorting between certain pairs of particle sizes depends solely on their radii in each phase of the process. This means that the most effective overall sorting can be achieved by maximizing the number of phases. This optimized approach is demonstrated using numerical simulation to permit grading of a range of nanometer-scale particle sizes.

  7. Mapping global cropland and field size.

    PubMed

    Fritz, Steffen; See, Linda; McCallum, Ian; You, Liangzhi; Bun, Andriy; Moltchanova, Elena; Duerauer, Martina; Albrecht, Fransizka; Schill, Christian; Perger, Christoph; Havlik, Petr; Mosnier, Aline; Thornton, Philip; Wood-Sichra, Ulrike; Herrero, Mario; Becker-Reshef, Inbal; Justice, Chris; Hansen, Matthew; Gong, Peng; Abdel Aziz, Sheta; Cipriani, Anna; Cumani, Renato; Cecchi, Giuliano; Conchedda, Giulia; Ferreira, Stefanus; Gomez, Adriana; Haffani, Myriam; Kayitakire, Francois; Malanding, Jaiteh; Mueller, Rick; Newby, Terence; Nonguierma, Andre; Olusegun, Adeaga; Ortner, Simone; Rajak, D Ram; Rocha, Jansle; Schepaschenko, Dmitry; Schepaschenko, Maria; Terekhov, Alexey; Tiangwa, Alex; Vancutsem, Christelle; Vintrou, Elodie; Wenbin, Wu; van der Velde, Marijn; Dunwoody, Antonia; Kraxner, Florian; Obersteiner, Michael

    2015-05-01

    A new 1 km global IIASA-IFPRI cropland percentage map for the baseline year 2005 has been developed which integrates a number of individual cropland maps at global to regional to national scales. The individual map products include existing global land cover maps such as GlobCover 2005 and MODIS v.5, regional maps such as AFRICOVER and national maps from mapping agencies and other organizations. The different products are ranked at the national level using crowdsourced data from Geo-Wiki to create a map that reflects the likelihood of cropland. Calibration with national and subnational crop statistics was then undertaken to distribute the cropland within each country and subnational unit. The new IIASA-IFPRI cropland product has been validated using very high-resolution satellite imagery via Geo-Wiki and has an overall accuracy of 82.4%. It has also been compared with the EarthStat cropland product and shows a lower root mean square error on an independent data set collected from Geo-Wiki. The first ever global field size map was produced at the same resolution as the IIASA-IFPRI cropland map based on interpolation of field size data collected via a Geo-Wiki crowdsourcing campaign. A validation exercise of the global field size map revealed satisfactory agreement with control data, particularly given the relatively modest size of the field size data set used to create the map. Both are critical inputs to global agricultural monitoring in the frame of GEOGLAM and will serve the global land modelling and integrated assessment community, in particular for improving land use models that require baseline cropland information. These products are freely available for downloading from the http://cropland.geo-wiki.org website. © 2015 John Wiley & Sons Ltd.

  8. Electro-optic high voltage sensor

    DOEpatents

    Davidson, James R.; Seifert, Gary D.

    2002-01-01

    A small sized electro-optic voltage sensor capable of accurate measurement of high levels of voltages without contact with a conductor or voltage source is provided. When placed in the presence of an electric field, the sensor receives an input beam of electromagnetic radiation into the sensor. A polarization beam displacer serves as a filter to separate the input beam into two beams with orthogonal linear polarizations. The beam displacer is oriented in such a way as to rotate the linearly polarized beams such that they enter a Pockels crystal at a preferred angle of 45 degrees. The beam displacer is therefore capable of causing a linearly polarized beam to impinge a crystal at a desired angle independent of temperature. The Pockels electro-optic effect induces a differential phase shift on the major and minor axes of the input beam as it travels through the Pockels crystal, which causes the input beam to be elliptically polarized. A reflecting prism redirects the beam back through the crystal and the beam displacer. On the return path, the polarization beam displacer separates the elliptically polarized beam into two output beams of orthogonal linear polarization representing the major and minor axes. The system may include a detector for converting the output beams into electrical signals, and a signal processor for determining the voltage based on an analysis of the output beams. The output beams are amplitude modulated by the frequency of the electric field and the amplitude of the output beams is proportional to the magnitude of the electric field, which is related to the voltage being measured.

  9. Impact of input field characteristics on vibrational femtosecond coherent anti-Stokes Raman scattering thermometry.

    PubMed

    Yang, Chao-Bo; He, Ping; Escofet-Martin, David; Peng, Jiang-Bo; Fan, Rong-Wei; Yu, Xin; Dunn-Rankin, Derek

    2018-01-10

    In this paper, three ultrashort-pulse coherent anti-Stokes Raman scattering (CARS) thermometry approaches are summarized with a theoretical time-domain model. The differences between the approaches can be attributed to variations in the input field characteristics of the time-domain model; that is, all three approaches to ultrashort-pulse CARS thermometry can be simulated with the unified model by changing only the input field features. As a specific example, hybrid femtosecond/picosecond CARS is assessed for its use in combustion flow diagnostics, so the examination of how the input field affects thermometry focuses on vibrational hybrid femtosecond/picosecond CARS. Beginning with the general model of ultrashort-pulse CARS, spectra with different input field parameters are simulated. To analyze the temperature measurement error introduced by the input field, these spectra are fitted with a model that neglects the influence of the input fields and the resulting fits are compared. The results demonstrate that, however the input pulses are characterized, temperature errors would still be introduced during an experiment. With proper field characterization, however, the significance of the error can be reduced.

  10. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
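
    The coarse-coding idea referred to above can be sketched as follows: a large binary input field is mapped onto a few coarse grids, each offset by a fraction of the block size, so that a fine pixel is represented jointly by the cells it activates in every coarse field. The field size, block size, and number of coarse fields below are illustrative only.

        # Minimal sketch: coarse-code a large binary input field into offset coarse grids.
        import numpy as np

        def coarse_code(image, block=8, n_fields=4):
            """Map a fine binary image onto n_fields coarse grids, each shifted by a
            fraction of the block size; a fine pixel is then represented jointly by
            the cell it activates in every coarse field."""
            H, W = image.shape
            fields = []
            for k in range(n_fields):
                off = k * block // n_fields                    # diagonal offset of this field
                padded = np.zeros((H + block, W + block), dtype=image.dtype)
                padded[off:off + H, off:off + W] = image
                h, w = padded.shape[0] // block, padded.shape[1] // block
                coarse = padded[:h * block, :w * block].reshape(h, block, w, block).max(axis=(1, 3))
                fields.append(coarse)
            return fields

        fine = np.zeros((4096, 4096), dtype=np.uint8)          # large input field
        fine[1000:1100, 2000:2100] = 1                         # a toy object
        coded = coarse_code(fine)                              # four ~513 x 513 coarse fields
        print([f.shape for f in coded], "cells represent", fine.size, "pixels")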

  11. Piezoelectric transformers for low-voltage generation of gas discharges and ionic winds in atmospheric air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Michael J.; Go, David B., E-mail: dgo@nd.edu; Department of Chemical and Biomolecular Engineering, University of Notre Dame, Notre Dame, Indiana 46556

    To generate a gas discharge (plasma) in atmospheric air requires an electric field that exceeds the breakdown threshold of ∼30 kV/cm. Because of safety, size, or cost constraints, the large applied voltages required to generate such fields are often prohibitive for portable applications. In this work, piezoelectric transformers are used to amplify a low input applied voltage (<30 V) to generate breakdown in air without the need for conventional high-voltage electrical equipment. Piezoelectric transformers (PTs) use their inherent electromechanical resonance to produce a voltage amplification, such that the surface of the piezoelectric exhibits a large surface voltage that can generate corona-like discharges on its corners or on adjacent electrodes. In the proper configuration, these discharges can be used to generate a bulk air flow called an ionic wind. In this work, PT-driven discharges are characterized by measuring the discharge current and the velocity of the induced ionic wind with ionic winds generated using input voltages as low as 7 V. The characteristics of the discharge change as the input voltage increases; this modifies the resonance of the system and subsequent required operating parameters.

  12. Wide operating window spin-torque majority gate towards large-scale integration of logic circuits

    NASA Astrophysics Data System (ADS)

    Vaysset, Adrien; Zografos, Odysseas; Manfrini, Mauricio; Mocuta, Dan; Radu, Iuliana P.

    2018-05-01

    Spin Torque Majority Gate (STMG) is a logic concept that inherits the non-volatility and the compact size of MRAM devices. In the original STMG design, the operating range was restricted to very small size and anisotropy, due to the exchange-driven character of domain expansion. Here, we propose an improved STMG concept where the domain wall is driven with current. Thus, input switching and domain wall propagation are decoupled, leading to higher energy efficiency and allowing greater technological optimization. To ensure majority operation, pinning sites are introduced. We observe through micromagnetic simulations that the new structure works for all input combinations, regardless of the initial state. Contrary to the original concept, the working condition is given only by the threshold and depinning currents. Moreover, cascading is now possible over long distances and fan-out is demonstrated. Therefore, this improved STMG concept is ready to build complete Boolean circuits in the absence of external magnetic fields.

  13. Modeling the truebeam linac using a CAD to Geant4 geometry implementation: dose and IAEA-compliant phase space calculations.

    PubMed

    Constantin, Magdalena; Perl, Joseph; LoSasso, Tom; Salop, Arthur; Whittum, David; Narula, Anisha; Svatos, Michelle; Keall, Paul J

    2011-07-01

    To create an accurate 6 MV Monte Carlo simulation phase space for the Varian TrueBeam treatment head geometry imported from CAD (computer aided design) without adjusting the input electron phase space parameters. GEANT4 v4.9.2.p01 was employed to simulate the 6 MV beam treatment head geometry of the Varian TrueBeam linac. The electron tracks in the linear accelerator were simulated with Parmela, and the obtained electron phase space was used as an input to the Monte Carlo beam transport and dose calculations. The geometry components are tessellated solids included in GEANT4 as GDML (generalized dynamic markup language) files obtained via STEP (standard for the exchange of product) export from Pro/Engineering, followed by STEP import in Fastrad, a STEP-GDML converter. The linac has a compact treatment head and the small space between the shielding collimator and the divergent arc of the upper jaws forbids the implementation of a plane for storing the phase space. Instead, an IAEA (International Atomic Energy Agency) compliant phase space writer was implemented on a cylindrical surface. The simulation was run in parallel on a 1200 node Linux cluster. The 6 MV dose calculations were performed for field sizes varying from 4 x 4 to 40 x 40 cm2. The voxel size for the 60 x 60 x 40 cm3 water phantom was 4 x 4 x 4 mm3. For the 10 x 10 cm2 field, surface buildup calculations were performed using 4 x 4 x 2 mm3 voxels within 20 mm of the surface. For the depth dose curves, 98% of the calculated data points agree within 2% with the experimental measurements for depths between 2 and 40 cm. For depths between 5 and 30 cm, agreement within 1% is obtained for 99% (4 x 4), 95% (10 x 10), 94% (20 x 20 and 30 x 30), and 89% (40 x 40) of the data points, respectively. In the buildup region, the agreement is within 2%, except at 1 mm depth where the deviation is 5% for the 10 x 10 cm2 open field. For the lateral dose profiles, within the field size for fields up to 30 x 30 cm2, the agreement is within 2% for depths up to 10 cm. At 20 cm depth, the in-field maximum dose difference for the 30 x 30 cm2 open field is within 4%, while the smaller field sizes agree within 2%. Outside the field size, agreement within 1% of the maximum dose difference is obtained for all fields. The calculated output factors varied from 0.938 +/- 0.015 for the 4 x 4 cm2 field to 1.088 +/- 0.024 for the 40 x 40 cm2 field. Their agreement with the experimental output factors is within 1%. The authors have validated a GEANT4 simulated IAEA-compliant phase space of the TrueBeam linac for the 6 MV beam obtained using a high accuracy geometry implementation from CAD. These files are publicly available and can be used for further research.
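
    The agreement figures quoted above amount to counting calculated points whose percent difference from measurement stays inside a tolerance; a minimal sketch of that comparison is shown below with synthetic curves standing in for the TrueBeam data.

        # Minimal sketch: fraction of calculated depth-dose points within a 2% tolerance.
        import numpy as np

        depth = np.linspace(2, 400, 200)                          # depth in water, mm
        measured = 100 * np.exp(-0.005 * depth)                   # made-up reference curve
        calculated = measured * (1 + 0.01 * np.sin(depth / 30))   # made-up Monte Carlo result

        percent_diff = 100 * np.abs(calculated - measured) / measured
        within_2pct = 100 * np.mean(percent_diff <= 2.0)
        print(f"{within_2pct:.1f}% of points agree within 2%")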

  14. From grid cells to place cells with realistic field sizes

    PubMed Central

    2017-01-01

    While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005
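
    The feedforward argument summarized above can be illustrated with a one-dimensional toy computation: a thresholded sum of grid-like periodic inputs that align at one location yields a narrow place-like field, and adding a broad, weakly spatially modulated input widens it. The spacings, weights, and threshold below are illustrative, not fitted model parameters.

        # Minimal 1-D sketch: thresholded sums of grid-like inputs, with and without
        # a broad weakly modulated input. All values are illustrative placeholders.
        import numpy as np

        x = np.linspace(0.0, 2.0, 1000)                 # position along a 2 m track (m)
        x0 = 1.0                                        # location where the grid inputs align
        spacings = [0.3, 0.42, 0.59, 0.82]              # grid spacings (m)

        grid_sum = sum(np.cos(2 * np.pi * (x - x0) / s) for s in spacings)
        theta = 0.8 * len(spacings)                     # common firing threshold

        place_grid_only = np.clip(grid_sum - theta, 0, None)
        weak = np.exp(-((x - x0) ** 2) / (2 * 0.4 ** 2))          # broad, weakly modulated input
        place_with_weak = np.clip(grid_sum + 2.0 * weak - theta, 0, None)

        def field_width(rate):
            return (rate > 0).mean() * (x[-1] - x[0])   # crude width of the active region (m)

        print(f"grid-only field ~{field_width(place_grid_only):.2f} m, "
              f"with weak input ~{field_width(place_with_weak):.2f} m")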

  15. Effects of finite ground plane on the radiation characteristics of a circular patch antenna

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Arun K.

    1990-02-01

    An analytical technique to determine the effects of finite ground plane on the radiation characteristics of a microstrip antenna is presented. The induced currents on the ground plane and on the upper surface of the patch are determined from the discontinuity of the near field produced by the equivalent magnetic current source on the physical aperture of the patch. The radiated fields contributed by the induced current on the ground plane and the equivalent sources on the physical aperture yield the radiation pattern of the antenna. Radiation patterns of the circular patch with finite ground plane size are computed and compared with the experimental data, and the agreement is found to be good. The radiation pattern, directive gain, and input impedance are found to vary widely with the ground plane size.

  16. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

  17. Jupiter's outer atmosphere.

    NASA Technical Reports Server (NTRS)

    Brice, N. M.

    1973-01-01

    The current state of the theory of Jupiter's outer atmosphere is briefly reviewed. The similarities and dissimilarities between the terrestrial and Jovian upper atmospheres are discussed, including the interaction of the solar wind with the planetary magnetic fields. Estimates of Jovian parameters are given, including magnetosphere and auroral zone sizes, ionospheric conductivity, energy inputs, and solar wind parameters at Jupiter. The influence of the large centrifugal force on the cold plasma distribution is considered. The Jovian Van Allen belt is attributed to solar wind particles diffused in toward the planet by dynamo electric fields from ionospheric neutral winds, and the consequences of this theory are indicated.

  18. Grain growth in U–7Mo alloy: A combined first-principles and phase field study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Zhi-Gang; Liang, Linyun; Kim, Yeon Soo

    2016-05-01

    Grain size is an important factor in controlling the swelling behavior in irradiated U-Mo dispersion fuels. Increasing the grain size in U-Mo fuel particles by heat treatment is believed to delay the fuel swelling at high fission density. In this work, a multiscale simulation approach combining first-principles calculation and phase field modeling is used to investigate the grain growth behavior in U-7Mo alloy. The density functional theory based first-principles calculations were used to predict the material properties of U-7Mo alloy. The obtained grain boundary energies were then adopted as an input parameter for mesoscale phase field simulations. The effects of annealing temperature, annealing time and initial grain structures of fuel particles on the grain growth in U-7Mo alloy were examined. The predicted grain growth rate compares well with the empirical correlation derived from experiments. © 2016 Elsevier B.V. All rights reserved.

  19. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays.

    PubMed

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-03-15

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target's point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment.

  20. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays

    PubMed Central

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-01-01

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target’s point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment. PMID:28294996
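
    The virtual array theory mentioned in the two records above assigns each transmit/receive pair a two-way phase centre at the midpoint of the two element positions, which is exact only under the far-field, collinear assumption that the papers relax. A minimal sketch of that construction, with illustrative element counts and spacings, is:

        # Minimal sketch of virtual (two-way) phase centres for a MIMO array.
        import numpy as np

        wavelength = 0.03                                # e.g. 10 GHz; illustrative
        tx = np.array([[i * 8 * wavelength, 0.00] for i in range(4)])   # sparse transmit line
        rx = np.array([[j * 2 * wavelength, 0.05] for j in range(4)])   # dense receive line,
                                                                        # offset 5 cm in height
        # Far-field, collinear virtual-array approximation: one two-way phase centre
        # per Tx/Rx pair at the midpoint of the element positions.
        virtual = (tx[:, None, :] + rx[None, :, :]).reshape(-1, 2) / 2.0
        print(virtual.shape)      # (16, 2): 16 observation channels from 4 + 4 physical elements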

  1. Soil aggregate stability and rainfall-induced sediment transport on field plots as affected by amendment with organic matter inputs

    NASA Astrophysics Data System (ADS)

    Shi, Pu; Arter, Christian; Liu, Xingyu; Keller, Martin; Schulin, Rainer

    2017-04-01

    Aggregate stability is an important factor in soil resistance against erosion, and, by influencing the extent of sediment transport associated with surface runoff, it is thus also one of the key factors which determine on- and off-site effects of water erosion. As it strongly depends on soil organic matter, many studies have explored how aggregate stability can be improved by organic matter inputs into the soil. However, the focus of these studies has been on the relationship between aggregate stability and soil organic matter dynamics. How the effects of organic matter inputs on aggregate stability translate into soil erodibility under rainfall impacts has received much less attention. In this study, we performed field plot experiments to examine how organic matter inputs affect aggregate breakdown and surface sediment transport under field conditions in artificial rainfall events. Three pairs of plots were prepared by adding a mixture of grass and wheat straw to one of the plots in each pair but not to the other, while all plots were treated in the same way otherwise. The rainfall events were applied some weeks later so that the applied organic residues had sufficient time for decomposition and incorporation into the soil. Surface runoff rate and sediment concentration showed substantial differences between the treatments with and without organic matter inputs. The plots with organic inputs had coarser and more stable aggregates and a rougher surface than the control plots without organic inputs, resulting in a higher infiltration rate and lower transport capacity of the surface runoff. Consequently, sediments exported from the amended plots were less concentrated but more enriched in suspended particles (<20 µm) than from the un-amended plots, indicating a more size-selective sediment transport. In contrast to the amended plots, there was an increase in the coarse particle fraction (> 250 µm) in the runoff from the plots with no organic matter inputs towards the end of the rainfall events due to emerging bed-load transport. The results show that a single application of organic matter can already cause a large difference in aggregate breakdown, surface sealing, and lateral sediment-associated matter transfer under rainfall impact. Furthermore, we will present terrestrial laser scanning data showing the treatment effects on soil surface structure, as well as data on carbon, phosphorus and heavy metal export associated with the translocation of the sediments.

  2. Excitation of a Parallel Plate Waveguide by an Array of Rectangular Waveguides

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam

    2011-01-01

    This work addresses the problem of excitation of a parallel plate waveguide by an array of rectangular waveguides that arises in applications such as the continuous transverse stub (CTS) antenna and dual-polarized parabolic cylindrical reflector antennas excited by a scanning line source. In order to design the junction region between the parallel plate waveguide and the linear array of rectangular waveguides, waveguide sizes have to be chosen so that the input match is adequate for the range of scan angles for both polarizations. The electromagnetic wave scattered at the junction between the parallel plate waveguide and the array of rectangular waveguides is analyzed by formulating coupled integral equations for the aperture electric field at the junction. The integral equations are solved by the method of moments. In order to make the computational process efficient and accurate, the method of weighted averaging was used to evaluate rapidly oscillating integrals encountered in the moment matrix. In addition, the real axis spectral integral is evaluated on a deformed contour for speed and accuracy. The MoM results for a large finite array have been validated by comparing its reflection coefficients with corresponding results for an infinite array generated by the commercial finite element code, HFSS. Once the aperture electric field is determined by MoM, the input reflection coefficients at each waveguide port, and the coupling for each polarization over the range of useful scan angles, are easily obtained. Results for the input impedance and coupling characteristics for both the vertical and horizontal polarizations are presented over a range of scan angles. It is shown that the scan range is limited to about 35 degrees for both polarizations and therefore the optimum waveguide is a square with a side of about 0.62 free-space wavelengths.

  3. Transport, retention, and size perturbation of graphene oxide in saturated porous media: Effects of input concentration and grain size

    USDA-ARS?s Scientific Manuscript database

    Accurately predicting the fate and transport of graphene oxide (GO) in porous media is critical to assess its environmental impact. In this work, sand column experiments were conducted to determine the effect of input concentration and grain size on transport, retention, and size perturbation of GO ...

  4. Mapping energetics of atom probe evaporation events through first principles calculations.

    PubMed

    Peralta, Joaquín; Broderick, Scott R; Rajan, Krishna

    2013-09-01

    The purpose of this work is to use atomistic modeling to determine accurate inputs into the atom probe tomography (APT) reconstruction process. One of these inputs is the evaporation field; however, a challenge occurs because single ions and dimers have different evaporation fields. We have calculated the evaporation field of Al and Sc ions and Al-Al and Al-Sc dimers from an L1₂-Al₃Sc surface using ab initio calculations with a high electric field applied to the surface. The evaporation field is defined as the electric field at which the calculated energy barrier falls to zero, corresponding to the minimum field at which surface atoms can break their bonds and evaporate from the surface. The evaporation fields of the surface species are ranked from least to greatest as: Al-Al dimer, Al ion, Sc ion, and Al-Sc dimer. The first principles results were compared with experimental data in the form of an ion evaporation map, which maps multi-ion evaporations. From the ion evaporation map of L1₂-Al₃Sc, we extract relative evaporation fields and identify that an Al-Al dimer has a lower evaporation field than an Al-Sc dimer. Additionally, an Al-Al surface dimer is comparatively more likely to evaporate as a dimer, while an Al-Sc surface dimer is more likely to evaporate as single ions. These conclusions from the experiment agree with the ab initio calculations, validating the use of this approach for modeling APT energetics. Copyright © 2013 Elsevier B.V. All rights reserved.
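
    Operationally, the definition quoted above reduces to locating the zero crossing of the calculated barrier as a function of applied field; a minimal sketch with made-up barrier values (not the ab initio results of the study) is:

        # Minimal sketch: evaporation field as the zero crossing of barrier(field).
        import numpy as np

        field = np.array([10.0, 15.0, 20.0, 25.0, 30.0])    # applied field (arb. units), hypothetical
        barrier = np.array([1.8, 1.1, 0.55, 0.18, -0.05])   # calculated barrier, eV, hypothetical

        # np.interp needs increasing x, so reverse the monotonically decreasing barrier.
        evap_field = np.interp(0.0, barrier[::-1], field[::-1])
        print(f"estimated evaporation field ~ {evap_field:.1f} (same units as 'field')")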

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Linyun; Mei, Zhi -Gang; Kim, Yeon Soo

    A mesoscale model is developed by integrating the rate theory and phase-field models and is used to study the fission-induced recrystallization in U-7Mo alloy. The rate theory model is used to predict the dislocation density and the recrystallization nuclei density due to irradiation. The predicted fission rate and temperature dependences of the dislocation density are in good agreement with experimental measurements. This information is used as input for the multiphase phase-field model to investigate the fission-induced recrystallization kinetics. The simulated recrystallization volume fraction and bubble induced swelling agree well with experimental data. The effects of the fission rate, initial grain size, and grain morphology on the recrystallization kinetics are discussed based on an analysis of recrystallization growth rate using the modified Avrami equation. Here, we conclude that the initial microstructure of the U-Mo fuels, especially the grain size, can be used to effectively control the rate of fission-induced recrystallization and therefore swelling.

  6. Intense transient electric field sensor based on the electro-optic effect of LiNbO{sub 3}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Qing, E-mail: yangqing@cqu.edu.cn; Sun, Shangpeng; Han, Rui

    2015-10-15

    Intense transient electric field measurements are widely applied in various research areas. An optical intense E-field sensor for time-domain measurements, based on the electro-optic effect of lithium niobate, has been studied in detail. Principles and key issues in the design of the sensor are presented. The sensor is insulated, small in size (65 mm × 15 mm × 15 mm), and suitable for high-intensity (<801 kV/m) electric field measurements over a wide frequency band (10 Hz–10 MHz). The input/output characteristics of the sensor were obtained and the sensor calibrated. Finally, an application using this sensor in testing laboratory lightning impulses and in measuring transient electric fields during switch-on of a disconnector confirmed that the sensor is expected to find widespread use in transient intense electric field measurement applications.

  7. Intense transient electric field sensor based on the electro-optic effect of LiNbO3

    NASA Astrophysics Data System (ADS)

    Yang, Qing; Sun, Shangpeng; Han, Rui; Sima, Wenxia; Liu, Tong

    2015-10-01

    Intense transient electric field measurements are widely applied in various research areas. An optical intense E-field sensor for time-domain measurements, based on the electro-optic effect of lithium niobate, has been studied in detail. Principles and key issues in the design of the sensor are presented. The sensor is insulated, small in size (65 mm × 15 mm × 15 mm), and suitable for high-intensity (<801 kV/m) electric field measurements over a wide frequency band (10 Hz-10 MHz). The input/output characteristics of the sensor were obtained and the sensor calibrated. Finally, an application using this sensor in testing laboratory lightning impulses and in measuring transient electric fields during switch-on of a disconnector confirmed that the sensor is expected to find widespread use in transient intense electric field measurement applications.

  8. Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development

    DTIC Science & Technology

    1986-10-01

    parameter, sample size and fatigue test duration. The required inputs are: 1. residual strength Weibull shape parameter (ALPR); 2. fatigue life Weibull shape ... The recovered FORTRAN input fragment reads:

        ... INPUT STRENGTH ALPHA')
        READ(*,*) ALPR
        ALPRI = 1.0/ALPR
        WRITE(*,2)
      2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
        READ(*,*) ALPL
        ALPLI = 1.0/ALPL
        WRITE(*,3)
      3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
        READ(*,*) N
        AN = N
        WRITE(*,4)
      4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
        READ(*,*) T
        RALP = ALPL/ALPR
        ARGR = 1

  9. Modeling the TrueBeam linac using a CAD to Geant4 geometry implementation: Dose and IAEA-compliant phase space calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantin, Magdalena; Perl, Joseph; LoSasso, Tom

    2011-07-15

    Purpose: To create an accurate 6 MV Monte Carlo simulation phase space for the Varian TrueBeam treatment head geometry imported from CAD (computer aided design) without adjusting the input electron phase space parameters. Methods: GEANT4 v4.9.2.p01 was employed to simulate the 6 MV beam treatment head geometry of the Varian TrueBeam linac. The electron tracks in the linear accelerator were simulated with Parmela, and the obtained electron phase space was used as an input to the Monte Carlo beam transport and dose calculations. The geometry components are tessellated solids included in GEANT4 as GDML (generalized dynamic markup language) files obtained via STEP (standard for the exchange of product) export from Pro/Engineering, followed by STEP import in Fastrad, a STEP-GDML converter. The linac has a compact treatment head and the small space between the shielding collimator and the divergent arc of the upper jaws forbids the implementation of a plane for storing the phase space. Instead, an IAEA (International Atomic Energy Agency) compliant phase space writer was implemented on a cylindrical surface. The simulation was run in parallel on a 1200 node Linux cluster. The 6 MV dose calculations were performed for field sizes varying from 4 x 4 to 40 x 40 cm². The voxel size for the 60 x 60 x 40 cm³ water phantom was 4 x 4 x 4 mm³. For the 10 x 10 cm² field, surface buildup calculations were performed using 4 x 4 x 2 mm³ voxels within 20 mm of the surface. Results: For the depth dose curves, 98% of the calculated data points agree within 2% with the experimental measurements for depths between 2 and 40 cm. For depths between 5 and 30 cm, agreement within 1% is obtained for 99% (4 x 4), 95% (10 x 10), 94% (20 x 20 and 30 x 30), and 89% (40 x 40) of the data points, respectively. In the buildup region, the agreement is within 2%, except at 1 mm depth where the deviation is 5% for the 10 x 10 cm² open field. For the lateral dose profiles, within the field size for fields up to 30 x 30 cm², the agreement is within 2% for depths up to 10 cm. At 20 cm depth, the in-field maximum dose difference for the 30 x 30 cm² open field is within 4%, while the smaller field sizes agree within 2%. Outside the field size, agreement within 1% of the maximum dose difference is obtained for all fields. The calculated output factors varied from 0.938 ± 0.015 for the 4 x 4 cm² field to 1.088 ± 0.024 for the 40 x 40 cm² field. Their agreement with the experimental output factors is within 1%. Conclusions: The authors have validated a GEANT4 simulated IAEA-compliant phase space of the TrueBeam linac for the 6 MV beam obtained using a high accuracy geometry implementation from CAD. These files are publicly available and can be used for further research.

  10. Light adaptation alters inner retinal inhibition to shape OFF retinal pathway signaling

    PubMed Central

    Mazade, Reece E.

    2016-01-01

    The retina adjusts its signaling gain over a wide range of light levels. A functional result of this is increased visual acuity at brighter luminance levels (light adaptation) due to shifts in the excitatory center-inhibitory surround receptive field parameters of ganglion cells that increases their sensitivity to smaller light stimuli. Recent work supports the idea that changes in ganglion cell spatial sensitivity with background luminance are due in part to inner retinal mechanisms, possibly including modulation of inhibition onto bipolar cells. To determine how the receptive fields of OFF cone bipolar cells may contribute to changes in ganglion cell resolution, the spatial extent and magnitude of inhibitory and excitatory inputs were measured from OFF bipolar cells under dark- and light-adapted conditions. There was no change in the OFF bipolar cell excitatory input with light adaptation; however, the spatial distributions of inhibitory inputs, including both glycinergic and GABAergic sources, became significantly narrower, smaller, and more transient. The magnitude and size of the OFF bipolar cell center-surround receptive fields as well as light-adapted changes in resting membrane potential were incorporated into a spatial model of OFF bipolar cell output to the downstream ganglion cells, which predicted an increase in signal output strength with light adaptation. We show a prominent role for inner retinal spatial signals in modulating the modeled strength of bipolar cell output to potentially play a role in ganglion cell visual sensitivity and acuity. PMID:26912599

  11. Classification of Maize and Weeds by Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Chapron, Michel; Oprea, Alina; Sultana, Bogdan; Assemat, Louis

    2007-11-01

    Precision Agriculture is concerned with all sorts of within-field variability, spatial and temporal, that reduces the efficacy of agronomic practices applied in a uniform way all over the field. Because of these sources of heterogeneity, uniform management actions strongly reduce the efficiency of the resource inputs to the crop (i.e. fertilization, water) and of the agrochemicals used for pest control (i.e. herbicide). Moreover, this low efficacy means high environmental cost (pollution) and reduced economic return for the farmer. Weed plants are one of these sources of variability for the crop, as they occur in patches in the field. Detecting the location, size and internal density of these patches, along with identifying the main weed species involved, opens the way to a site-specific weed control strategy, where only patches of weeds would receive the appropriate herbicide (type and dose). Herein, an automatic recognition method for plant species is described. First, the pixels of soil and vegetation are classified into two classes, then the vegetation part of the input image is segmented from the distance image by using the watershed method, and finally the leaves of the vegetation are partitioned into two classes, maize and weeds, by the two Bayesian networks.
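
    A minimal sketch of the segmentation pipeline described above, soil/vegetation separation followed by a watershed on the distance image, is shown below using an excess-green index and standard scipy/scikit-image calls; the threshold, input image, and marker settings are illustrative, and the Bayesian-network classification step is only indicated.

        # Minimal sketch: vegetation mask (excess-green) + watershed on the distance image.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        rng = np.random.default_rng(0)
        img = rng.random((128, 128, 3))                   # stand-in for a field image (RGB in [0, 1])

        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        exg = 2 * g - r - b                               # excess-green vegetation index
        vegetation = exg > 0.1                            # hypothetical soil/vegetation threshold

        distance = ndi.distance_transform_edt(vegetation)           # distance image
        peaks = peak_local_max(distance, min_distance=5, labels=vegetation)
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        segments = watershed(-distance, markers, mask=vegetation)   # one label per leaf region

        # Per-segment features (area, mean ExG, shape) would then feed the two Bayesian
        # networks that assign each segment to maize or weeds (not shown here).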

  12. Studying focal ratio degradation of optical fibres with a core size of 50 μm for astronomy

    NASA Astrophysics Data System (ADS)

    Oliveira, A. C.; de Oliveira, L. S.; dos Santos, J. B.

    2005-01-01

    Along with the spectral attenuation properties, the focal ratio degradation (FRD) properties of optical fibres are the most important for instrumental applications in astronomy. We present a dedicated study of the FRD of optical fibres with a core size of 50 μm to evaluate the effects of stress when mounting the fibre. Optical fibres of this type were used to construct the Eucalyptus integral field unit. This fibre is very susceptible to FRD effects, especially after the removal of the acrylate buffer. This operation is sometimes necessary to allow close packing of the fibres at the input to the spectrograph. Without the acrylate buffer, the cladding and core of the fibre are unprotected and may be easily damaged. In the near future, fibres of this size will be used to build the Southern Observatory for Astronomical Research (SOAR) integral field unit spectrograph (SIFS) and other instruments. It is important to understand the correct procedure which minimizes any increase in FRD during the construction of the instrument.

  13. Mean size estimation yields left-side bias: Role of attention on perceptual averaging.

    PubMed

    Li, Kuei-An; Yeh, Su-Ling

    2017-11-01

    The human visual system can estimate the mean size of a set of items effectively; however, little is known about whether information from each visual field contributes equally to mean size estimation. In this study, we examined whether a left-side bias (LSB), the tendency for perceptual judgments to depend more heavily on inputs from the left visual field, affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: a larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of the LSB increased with stimulus-onset asynchrony (SOA) when spots on the left side were presented earlier than those on the right side. In contrast, the LSB vanished and then reversed with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that the LSB has a significant influence on mean size estimation of a group of items, induced by a leftward attentional bias that enhances the prior entry effect on the left side.

  14. Electric generation and ratcheted transport of contact-charged drops

    NASA Astrophysics Data System (ADS)

    Cartier, Charles A.; Graybill, Jason R.; Bishop, Kyle J. M.

    2017-10-01

    We describe a simple microfluidic system that enables the steady generation and efficient transport of aqueous drops using only a constant voltage input. Drop generation is achieved through an electrohydrodynamic dripping mechanism by which conductive drops grow and detach from a grounded nozzle in response to an electric field. The now-charged drops are transported down a ratcheted channel by contact charge electrophoresis powered by the same voltage input used for drop generation. We investigate how the drop size, generation frequency, and transport velocity depend on system parameters such as the liquid viscosity, interfacial tension, applied voltage, and channel dimensions. The observed trends are well explained by a series of scaling analyses that provide insight into the dominant physical mechanisms underlying drop generation and ratcheted transport. We identify the conditions necessary for achieving reliable operation and discuss the various modes of failure that can arise when these conditions are violated. Our results demonstrate that simple electric inputs can power increasingly complex droplet operations with potential opportunities for inexpensive and portable microfluidic systems.

  15. Electric generation and ratcheted transport of contact-charged drops.

    PubMed

    Cartier, Charles A; Graybill, Jason R; Bishop, Kyle J M

    2017-10-01

    We describe a simple microfluidic system that enables the steady generation and efficient transport of aqueous drops using only a constant voltage input. Drop generation is achieved through an electrohydrodynamic dripping mechanism by which conductive drops grow and detach from a grounded nozzle in response to an electric field. The now-charged drops are transported down a ratcheted channel by contact charge electrophoresis powered by the same voltage input used for drop generation. We investigate how the drop size, generation frequency, and transport velocity depend on system parameters such as the liquid viscosity, interfacial tension, applied voltage, and channel dimensions. The observed trends are well explained by a series of scaling analyses that provide insight into the dominant physical mechanisms underlying drop generation and ratcheted transport. We identify the conditions necessary for achieving reliable operation and discuss the various modes of failure that can arise when these conditions are violated. Our results demonstrate that simple electric inputs can power increasingly complex droplet operations with potential opportunities for inexpensive and portable microfluidic systems.

  16. Enabling full-field physics-based optical proximity correction via dynamic model generation

    NASA Astrophysics Data System (ADS)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography comes closer to reality for high volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  17. The Synaptic and Morphological Basis of Orientation Selectivity in a Polyaxonal Amacrine Cell of the Rabbit Retina.

    PubMed

    Murphy-Baum, Benjamin L; Taylor, W Rowland

    2015-09-30

    Much of the computational power of the retina derives from the activity of amacrine cells, a large and diverse group of GABAergic and glycinergic inhibitory interneurons. Here, we identify an ON-type orientation-selective, wide-field, polyaxonal amacrine cell (PAC) in the rabbit retina and demonstrate how its orientation selectivity arises from the structure of the dendritic arbor and the pattern of excitatory and inhibitory inputs. Excitation from ON bipolar cells and inhibition arising from the OFF pathway converge to generate a quasi-linear integration of visual signals in the receptive field center. This serves to suppress responses to high spatial frequencies, thereby improving sensitivity to larger objects and enhancing orientation selectivity. Inhibition also regulates the magnitude and time course of excitatory inputs to this PAC through serial inhibitory connections onto the presynaptic terminals of ON bipolar cells. This presynaptic inhibition is driven by graded potentials within local microcircuits, similar in extent to the size of single bipolar cell receptive fields. Additional presynaptic inhibition is generated by spiking amacrine cells on a larger spatial scale covering several hundred microns. The orientation selectivity of this PAC may be a substrate for the inhibition that mediates orientation selectivity in some types of ganglion cells. Significance statement: The retina comprises numerous excitatory and inhibitory circuits that encode specific features in the visual scene, such as orientation, contrast, or motion. Here, we identify a wide-field inhibitory neuron that responds to visual stimuli of a particular orientation, a feature selectivity that is primarily due to the elongated shape of the dendritic arbor. Integration of convergent excitatory and inhibitory inputs from the ON and OFF visual pathways suppresses responses to small objects and fine textures, thus enhancing selectivity for larger objects. Feedback inhibition regulates the strength and speed of excitation on both local and wide-field spatial scales. This study demonstrates how different synaptic inputs are regulated to tune a neuron to respond to specific features in the visual scene. Copyright © 2015 the authors.

  18. Indoor Spatial Updating with Reduced Visual Information

    PubMed Central

    Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.

    2016-01-01

    Purpose Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674

  19. Indoor Spatial Updating with Reduced Visual Information.

    PubMed

    Legge, Gordon E; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M

    2016-01-01

    Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.

  20. Fogwater Inputs to a Cloud Forest in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Eugster, W.; Burkard, R.; Holwerda, F.; Bruijnzeel, S.; Scatena, F. N.; Siegwolf, R.

    2002-12-01

    Fog is highly persistent at upper elevations of humid tropical mountains and is an important pathway for water and nutrient inputs to mountain forest ecosystems. Measurements of fogwater fluxes were performed in the Luquillo mountains of Puerto Rico using the eddy covariance approach and a Caltech-type active strand cloudwater collector. Rainfall and throughfall were collected between 25 June and 7 August 2002. Samples of fog, rain, stemflow and throughfall were analyzed for inorganic ion and stable isotope concentrations (δ18O and δD). Initial results indicate that fog inputs can occur during periods without rain and last for up to several days. The isotope ratios in rainwater and fogwater are rather similar, indicative of the proximity of the Caribbean Sea and the close interrelation between the origins of fog and rain at our experimental site. The largest differences in isotope ratios for fog were found between daytime convective and nighttime stable conditions. Throughfall always exceeded rainfall, indicating (a) the relevance of fogwater inputs and (b) the potentially significant undersampling of rainfall due to relatively high wind speeds (5.7 m/s mean) and the exposure of our field site close to a mountain ridge. Our size-resolved measurements of cloud droplets (40 size bins between 2 and 50 μm aerodynamic diameter) indicate that the liquid water content of fog in the Luquillo mountains is 5 times higher than previously assumed, and thus does not differ from the values reported from other mountain ranges in other climate zones. Average deposition rates are 0.88 mm and 6.5 mm per day for fog and rain, respectively.
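    The turbulent fogwater flux in the eddy covariance approach is, at its core, the covariance between vertical wind speed and liquid water content over an averaging interval. The sketch below illustrates that calculation on synthetic data; the sampling rate, averaging period, and numerical values are hypothetical and are not taken from the Luquillo measurements.

    ```python
    import numpy as np

    def fog_deposition_flux(w, lwc):
        """Eddy covariance fogwater flux: covariance of vertical wind w (m/s) and
        liquid water content lwc (g/m^3), giving a flux in g m^-2 s^-1.
        Negative values indicate deposition (downward droplet transport)."""
        w = np.asarray(w, dtype=float)
        lwc = np.asarray(lwc, dtype=float)
        return np.mean((w - w.mean()) * (lwc - lwc.mean()))

    # Synthetic 10 Hz record over a 30-minute averaging period (illustrative only).
    rng = np.random.default_rng(0)
    n = 10 * 60 * 30
    w = rng.normal(0.0, 0.3, n)
    lwc = 0.15 + rng.normal(0.0, 0.05, n) - 0.02 * w   # droplets weakly correlated with downdrafts

    flux = fog_deposition_flux(w, lwc)
    # 1 mm of water = 1000 g/m^2, so mm per day = flux * 86400 / 1000.
    print(f"flux = {flux:.4e} g m^-2 s^-1  (~{flux * 86.4:.3f} mm per day)")
    ```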

  1. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, X. R.; Poenisch, F.; Lii, M.

    2013-04-15

    Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm²/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.
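    The field-size dependence described above follows directly from the double-Gaussian spot model: the broad, low-amplitude second Gaussian (the low-dose envelope) keeps contributing dose at the field centre as more spots are added. The sketch below illustrates this with generic SG and DG radial fluence profiles summed over a square spot grid; the sigma values, spot spacing, and halo weight are illustrative assumptions, not the commissioned beam data.

    ```python
    import numpy as np

    def lateral_fluence_sg(r, sigma):
        """Single-Gaussian (SG) radial spot fluence, normalized to unit integral."""
        return np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

    def lateral_fluence_dg(r, sigma1, sigma2, w):
        """Double-Gaussian (DG) fluence: narrow core plus a broad, low-amplitude halo."""
        core = np.exp(-r**2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
        halo = np.exp(-r**2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
        return (1 - w) * core + w * halo

    def central_dose(field_size_mm, spacing_mm, fluence, **params):
        """Relative dose at the field centre: sum of all spots on a square grid."""
        half = field_size_mm / 2.0
        xs = np.arange(-half, half + 1e-9, spacing_mm)
        xx, yy = np.meshgrid(xs, xs)
        r = np.hypot(xx, yy)                    # distance of every spot from the centre
        return fluence(r, **params).sum()

    for fs in (40, 100, 200):                   # hypothetical nominal field sizes, mm
        sg = central_dose(fs, 5.0, lateral_fluence_sg, sigma=6.0)
        dg = central_dose(fs, 5.0, lateral_fluence_dg, sigma1=6.0, sigma2=20.0, w=0.1)
        print(f"{fs} mm field: SG = {sg:.4f}, DG = {dg:.4f}")
    ```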

  2. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system

    PubMed Central

    Zhu, X. R.; Poenisch, F.; Lii, M.; Sawakuchi, G. O.; Titt, U.; Bues, M.; Song, X.; Zhang, X.; Li, Y.; Ciangaru, G.; Li, H.; Taylor, M. B.; Suzuki, K.; Mohan, R.; Gillin, M. T.; Sahoo, N.

    2013-01-01

    Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm2/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future. PMID:23556893

  3. Commissioning dose computation models for spot scanning proton beams in water for a commercially available treatment planning system.

    PubMed

    Zhu, X R; Poenisch, F; Lii, M; Sawakuchi, G O; Titt, U; Bues, M; Song, X; Zhang, X; Li, Y; Ciangaru, G; Li, H; Taylor, M B; Suzuki, K; Mohan, R; Gillin, M T; Sahoo, N

    2013-04-01

    To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm(2)/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.

  4. Testing Models of Modern Glacial Erosion of the St. Elias Mountains, Alaska Using Marine Sediment Provenance

    NASA Astrophysics Data System (ADS)

    Penkrot, M. L.; Jaeger, J. M.; Loss, D. P.; Bruand, E.

    2015-12-01

    The glaciated coastal St. Elias Range in Alaska is a primary site to examine climate-tectonic interactions. Work has primarily focused on the Bering-Bagley and Malaspina-Seward ice fields, utilizing detrital and bedrock zircon and apatite geochronology to examine local exhumation and glacial erosion (Berger et al., 2008; Enkelmann et al., 2009; Headly et al., 2013). These studies argue for specific regions of tectonically focused or climatically widespread glacial erosion. Analyzed zircon and apatite grains are sand sized; however, glacial erosion favors the production of finer-grained sediments. This study focuses on the geochemical provenance of the silt-size fraction (15-63 μm) of surface sediments collected throughout the Gulf of Alaska (GOA) seaward of the Bering and Malaspina glaciers to test whether the exhumation patterns observed in zircon and apatites are also applicable for the silt size fraction. Onshore bedrock Al-normalized elemental data were used to delineate sediment sources, and a subset of provenance-applicable elements was chosen. Detrital thermochronologic data suggest that sediment produced by the Bagley/Bering system is derived from bedrock on the windward side with input from the Chugach Metamorphic Complex (CMC) underlying the Bagley only during glacial surge events (Headly et al., 2013). Geochemical observations of GOA silt deposited during the 1994-95 surge event confirm input of CMC sediment (elevated in Cr, Ni, Sc, Sr, depleted in Hf, Pb and Rb relative to Kultieth and Poul Creek formations). We also observe a windward-side sediment source (Kultieth and Poul Creek). It is hypothesized that the sediment carried by the Malaspina is primarily from CMC rock underlying the Seward ice field mixed with Yakataga formation rock that underlies the Seward throat (Headly et al., 2013). Geochemical observations of GOA silt support this hypothesis.

  5. 14C-labeled organic amendments: Characterization in different particle size fractions and humic acids in a long-term field experiment

    PubMed Central

    Tatzber, Michael; Stemmer, Michael; Spiegel, Heide; Katzlberger, Christian; Landstetter, Claudia; Haberhauer, Georg; Gerzabek, Martin H.

    2012-01-01

    Knowledge about the stabilization of organic matter input to soil is essential for understanding the influence of different agricultural practices on turnover characteristics in agricultural soil systems. In this study, soil samples from a long-term field experiment were separated into silt- and clay-sized particles. In 1967, 14C labeled farmyard manure was applied to three different cropping systems: crop rotation, monoculture and permanent bare fallow. Humic acids (HAs) were extracted from silt- and clay-sized fractions and characterized using photometry, mid-infrared and fluorescence spectroscopy. Remaining 14C was determined in size fractions as well as in their extracted HAs. Yields of carbon and remaining 14C in HAs from silt-sized particles and Corg in clay-sized particles decreased significantly in the order: crop rotation > monoculture ≫ bare fallow. Thus, crop rotation not only had the largest overall C-pool in the experiment, but it also best stabilized the added manure. Mid-infrared spectroscopy could distinguish between HAs from different particle size soil fractions. With spectroscopic methods significant differences between the cropping systems were detectable in fewer cases compared to quantitative results of HAs (yields, 14C, Corg and Nt). The trends, however, pointed towards increased humification of HAs from bare fallow systems compared to crop rotation and monoculture as well as of HAs from clay-sized particles compared to silt-sized particles. Our study clearly shows that the largest differences were observed between bare fallow on one hand and monoculture and crop rotation on the other. PMID:23482702

  6. 14C-labeled organic amendments: Characterization in different particle size fractions and humic acids in a long-term field experiment.

    PubMed

    Tatzber, Michael; Stemmer, Michael; Spiegel, Heide; Katzlberger, Christian; Landstetter, Claudia; Haberhauer, Georg; Gerzabek, Martin H

    2012-05-01

    Knowledge about the stabilization of organic matter input to soil is essential for understanding the influence of different agricultural practices on turnover characteristics in agricultural soil systems. In this study, soil samples from a long-term field experiment were separated into silt- and clay-sized particles. In 1967, 14C labeled farmyard manure was applied to three different cropping systems: crop rotation, monoculture and permanent bare fallow. Humic acids (HAs) were extracted from silt- and clay-sized fractions and characterized using photometry, mid-infrared and fluorescence spectroscopy. Remaining 14C was determined in size fractions as well as in their extracted HAs. Yields of carbon and remaining 14C in HAs from silt-sized particles and Corg in clay-sized particles decreased significantly in the order: crop rotation > monoculture ≫ bare fallow. Thus, crop rotation not only had the largest overall C-pool in the experiment, but it also best stabilized the added manure. Mid-infrared spectroscopy could distinguish between HAs from different particle size soil fractions. With spectroscopic methods significant differences between the cropping systems were detectable in fewer cases compared to quantitative results of HAs (yields, 14C, Corg and Nt). The trends, however, pointed towards increased humification of HAs from bare fallow systems compared to crop rotation and monoculture as well as of HAs from clay-sized particles compared to silt-sized particles. Our study clearly shows that the largest differences were observed between bare fallow on one hand and monoculture and crop rotation on the other.

  7. Biophysical Network Modelling of the dLGN Circuit: Different Effects of Triadic and Axonal Inhibition on Visual Responses of Relay Cells.

    PubMed

    Heiberg, Thomas; Hagen, Espen; Halnes, Geir; Einevoll, Gaute T

    2016-05-01

    Despite its prominent placement between the retina and primary visual cortex in the early visual pathway, the role of the dorsal lateral geniculate nucleus (dLGN) in molding and regulating the visual signals entering the brain is still poorly understood. A striking feature of the dLGN circuit is that relay cells (RCs) and interneurons (INs) form so-called triadic synapses, where an IN dendritic terminal can be simultaneously postsynaptic to a retinal ganglion cell (GC) input and presynaptic to an RC dendrite, allowing for so-called triadic inhibition. Taking advantage of a recently developed biophysically detailed multicompartmental model for an IN, we here investigate putative effects of these different inhibitory actions of INs, i.e., triadic inhibition and standard axonal inhibition, on the response properties of RCs. We compute and investigate so-called area-response curves, that is, trial-averaged visual spike responses vs. spot size, for circular flashing spots in a network of RCs and INs. The model parameters are grossly tuned to give results in qualitative accordance with previous in vivo data of responses to such stimuli for cat GCs and RCs. We particularly investigate how the model ingredients affect salient response properties such as the receptive-field center size of RCs and INs, maximal responses and center-surround antagonisms. For example, while triadic inhibition not involving firing of IN action potentials was found to provide only a non-linear gain control of the conversion of input spikes to output spikes by RCs, axonal inhibition was in contrast found to substantially affect the receptive-field center size: the larger the inhibition, the more the RC center size shrinks compared to the GC providing the feedforward excitation. Thus, a possible role of the different inhibitory actions from INs to RCs in the dLGN circuit is to provide separate mechanisms for overall gain control (direct triadic inhibition) and regulation of spatial resolution (axonal inhibition) of visual signals sent to cortex.
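    Area-response curves of the kind analyzed here are commonly summarized with a difference-of-Gaussians (DoG) area-summation description, in which the response to a centred spot is the integral of a centre Gaussian minus a surround Gaussian over the spot. The sketch below is a generic DoG illustration of how receptive-field centre size and centre-surround antagonism are read off such curves; it is not the authors' biophysical network model, and all parameter values are hypothetical.

    ```python
    import numpy as np

    def area_response(diameter, k_c, sigma_c, k_s, sigma_s):
        """Response of a difference-of-Gaussians receptive field to a centred spot.

        Integrating a 2-D Gaussian over a disc of radius a gives 1 - exp(-a^2 / (2 sigma^2)),
        so the curve rises with spot size and then declines as the antagonistic
        surround is recruited."""
        a = diameter / 2.0
        centre = k_c * (1.0 - np.exp(-a**2 / (2.0 * sigma_c**2)))
        surround = k_s * (1.0 - np.exp(-a**2 / (2.0 * sigma_s**2)))
        return centre - surround

    diameters = np.linspace(0.0, 10.0, 51)      # spot diameter, degrees (hypothetical)
    resp = area_response(diameters, k_c=1.0, sigma_c=0.5, k_s=0.6, sigma_s=2.0)
    print("preferred spot size:", diameters[np.argmax(resp)], "deg")
    print("centre-surround antagonism:", 1.0 - resp[-1] / resp.max())
    ```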

  8. Spatiotemporal profiles of receptive fields of neurons in the lateral posterior nucleus of the cat LP-pulvinar complex.

    PubMed

    Piché, Marilyse; Thomas, Sébastien; Casanova, Christian

    2015-10-01

    The pulvinar is the largest extrageniculate thalamic visual nucleus in mammals. It establishes reciprocal connections with virtually all visual cortexes and likely plays a role in transthalamic cortico-cortical communication. In cats, the lateral posterior nucleus (LP) of the LP-pulvinar complex can be subdivided in two subregions, the lateral (LPl) and medial (LPm) parts, which receive a predominant input from the striate cortex and the superior colliculus, respectively. Here, we revisit the receptive field structure of LPl and LPm cells in anesthetized cats by determining their first-order spatiotemporal profiles through reverse correlation analysis following sparse noise stimulation. Our data reveal the existence of previously unidentified receptive field profiles in the LP nucleus both in space and time domains. While some cells responded to only one stimulus polarity, the majority of neurons had receptive fields comprised of bright and dark responsive subfields. For these neurons, dark subfields' size was larger than that of bright subfields. A variety of receptive field spatial organization types were identified, ranging from totally overlapped to segregated bright and dark subfields. In the time domain, a large spectrum of activity overlap was found, from cells with temporally coinciding subfield activity to neurons with distinct, time-dissociated subfield peak activity windows. We also found LP neurons with space-time inseparable receptive fields and neurons with multiple activity periods. Finally, a substantial degree of homology was found between LPl and LPm first-order receptive field spatiotemporal profiles, suggesting a high integration of cortical and subcortical inputs within the LP-pulvinar complex. Copyright © 2015 the American Physiological Society.

  9. Program Predicts Performance of Optical Parametric Oscillators

    NASA Technical Reports Server (NTRS)

    Cross, Patricia L.; Bowers, Mark

    2006-01-01

    A computer program predicts the performances of solid-state lasers that operate at wavelengths from ultraviolet through mid-infrared and that comprise various combinations of stable and unstable resonators, optical parametric oscillators (OPOs), and sum-frequency generators (SFGs), including second-harmonic generators (SHGs). The input to the program describes the signal, idler, and pump beams; the SFG and OPO crystals; and the laser geometry. The program calculates the electric fields of the idler, pump, and output beams at three locations (inside the laser resonator, just outside the input mirror, and just outside the output mirror) as functions of time for the duration of the pump beam. For each beam, the electric field is used to calculate the fluence at the output mirror, plus summary parameters that include the centroid location, the radius of curvature of the wavefront leaving through the output mirror, the location and size of the beam waist, and a quantity known, variously, as a propagation constant or beam-quality factor. The program provides a typical Windows interface for entering data and selecting files. The program can include as many as six plot windows, each containing four graphs.

  10. Experimental Design and Data Analysis Issues Contribute to Inconsistent Results of C-Bouton Changes in Amyotrophic Lateral Sclerosis.

    PubMed

    Dukkipati, S Shekar; Chihi, Aouatef; Wang, Yiwen; Elbasiouny, Sherif M

    2017-01-01

    The possible presence of pathological changes in cholinergic synaptic inputs [cholinergic boutons (C-boutons)] is a contentious topic within the ALS field. Conflicting data reported on this issue make it difficult to assess the roles of these synaptic inputs in ALS. Our objective was to determine whether the reported changes are truly statistically and biologically significant and why replication is problematic. This is an urgent question, as C-boutons are an important regulator of spinal motoneuron excitability, and pathological changes in motoneuron excitability are present throughout disease progression. Using male mice of the SOD1-G93A high-expresser transgenic (G93A) mouse model of ALS, we examined C-boutons on spinal motoneurons. We performed histological analysis at high statistical power, which showed no difference in C-bouton size in G93A versus wild-type motoneurons throughout disease progression. In an attempt to examine the underlying reasons for our failure to replicate reported changes, we performed further histological analyses using several variations on experimental design and data analysis that were reported in the ALS literature. This analysis showed that factors related to experimental design, such as grouping unit, sampling strategy, and blinding status, potentially contribute to the discrepancy in published data on C-bouton size changes. Next, we systematically analyzed the impact of study design variability and potential bias on reported results from experimental and preclinical studies of ALS. Strikingly, we found that practices such as blinding and power analysis are not systematically reported in the ALS field. Protocols to standardize experimental design and minimize bias are thus critical to advancing the ALS field.

  11. Analysis of low-offset CTIA amplifier for small-size-pixel infrared focal plane array

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Huang, Zhangcheng; Shao, Xiumei

    2014-11-01

    The design of the input-stage amplifier becomes increasingly difficult as array formats expand and pixel sizes shrink. A design method for a low-offset amplifier in a 0.18-μm process, intended for small-size pixels, is analyzed in order to decrease the dark signal of extended-wavelength InGaAs infrared focal plane arrays (IRFPAs). Based on the example of a cascode operational amplifier (op-amp), the relationship between input offset voltage and the size of each transistor is discussed through theoretical analysis and Monte Carlo simulation. The results indicate that the input transistors and load transistors have a strong influence on the input offset voltage, while the contribution of the common-gate transistors is negligible. Furthermore, the offset voltage increases only slightly as transistor width and length shrink along with the pixel size, but rises rapidly once the size falls below an approximate threshold value. The offset voltages of preamplifiers with differential and single-shared architectures in small-pitch pixels are studied. After optimization under the same conditions, simulation results show that the single-shared architecture has a smaller offset voltage than the differential architecture.
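    The dependence of input offset on transistor area discussed above is usually captured by Pelgrom's mismatch law, in which the standard deviation of the threshold-voltage mismatch of a device pair scales as A_VT / sqrt(W * L). A minimal Monte Carlo sketch of that relationship for a simple differential stage is given below; the mismatch coefficient, gm ratio, and device sizes are illustrative assumptions rather than values from the cited design.

    ```python
    import numpy as np

    A_VT = 3.5e-3   # Pelgrom threshold-mismatch coefficient, V*um (illustrative value)

    def offset_sigma(w_um, l_um, gm_ratio=0.5, n_trials=100_000, seed=0):
        """Monte Carlo estimate of the input-referred offset (1-sigma, mV) of a simple
        differential stage: input-pair mismatch adds directly, load-pair mismatch is
        referred to the input through the transconductance ratio gm_load/gm_input."""
        rng = np.random.default_rng(seed)
        sigma_vt = A_VT / np.sqrt(w_um * l_um)          # Pelgrom's law per device pair
        dvt_in = rng.normal(0.0, sigma_vt, n_trials)
        dvt_load = rng.normal(0.0, sigma_vt, n_trials)
        return 1e3 * (dvt_in + gm_ratio * dvt_load).std()

    for w, l in [(4.0, 2.0), (1.0, 0.5), (0.4, 0.2)]:   # shrinking device sizes, um
        print(f"W/L = {w}/{l} um: sigma(Vos) ~ {offset_sigma(w, l):.2f} mV")
    ```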

  12. Focusing behavior of the fractal vector optical fields designed by fractal lattice growth model.

    PubMed

    Gao, Xu-Zhen; Pan, Yue; Zhao, Meng-Dan; Zhang, Guan-Lin; Zhang, Yu; Tu, Chenghou; Li, Yongnan; Wang, Hui-Tian

    2018-01-22

    We introduce a general fractal lattice growth model, significantly expanding the application scope of fractals in the realm of optics. This model can be applied to construct various kinds of fractal "lattices" and then to design a great diversity of fractal vector optical fields (F-VOFs) in combination with various "bases". We also experimentally generate the F-VOFs and explore their universal focusing behaviors. Multiple focal spots can be flexibly engineered, and an optical tweezers experiment validates the simulated tight-focusing fields, which means that the diversity of focal patterns afforded by this model can be used to flexibly trap and manipulate micrometer-sized particles. Furthermore, the recovery performance of the F-VOFs is also studied when the input fields and spatial frequency spectrum are obstructed, and the results confirm the robustness of the F-VOFs in both focusing and imaging processes, which is very useful in information transmission.

  13. Simulation model of erosion and deposition on a barchan dune

    NASA Technical Reports Server (NTRS)

    Howard, A. D.; Morton, J. B.; Gal-El-hak, M.; Pierce, D. B.

    1977-01-01

    Erosion and deposition over a barchan dune near the Salton Sea, California, are modeled by bookkeeping the quantity of sand in saltation following streamlines of transport. Field observations of near surface wind velocity and direction plus supplemental measurements of the velocity distribution over a scale model of the dune are combined as input to Bagnold type sand transport formulas corrected for slope effects. A unidirectional wind is assumed. The resulting patterns of erosion and deposition compare closely with those observed in the field and those predicted by the assumption of equilibrium (downwind translation of the dune without change in size or geometry). Discrepancies between the simulated results and the observed or predicted erosional patterns appear to be largely due to natural fluctuations in the wind direction. The shape of barchan dunes is a function of grain size, velocity, degree of saturation of the oncoming flow, and the variability in the direction of the oncoming wind. The size of the barchans may be controlled by natural atmospheric scales, by the age of the dunes, or by the upwind roughness. The upwind roughness can be controlled by fixed elements or by sand in the saltation. In the latter case, dune scale is determined by grain size and wind velocity.
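    As a rough illustration of the kind of transport law used in such simulations, the sketch below evaluates a Bagnold-type saltation flux, q proportional to sqrt(d/D) * (rho/g) * u*^3, together with a simple multiplicative slope factor. The constants and the form of the slope correction are generic textbook choices, not the specific slope-corrected formulas used in the cited model.

    ```python
    import numpy as np

    RHO_AIR = 1.22        # air density, kg m^-3
    G = 9.81              # gravitational acceleration, m s^-2
    D_REF = 0.25e-3       # Bagnold's reference grain diameter, m

    def bagnold_flux(u_star, d_grain, C=1.8):
        """Bagnold-type saltation flux q (kg m^-1 s^-1) for shear velocity u_star (m/s)
        and grain diameter d_grain (m); C is an empirical constant for graded sand."""
        return C * np.sqrt(d_grain / D_REF) * (RHO_AIR / G) * u_star**3

    def slope_corrected_flux(u_star, d_grain, slope_deg, angle_of_repose_deg=34.0):
        """Illustrative slope factor: flux reduced on upslope sections (positive slope),
        enhanced downslope (negative slope)."""
        factor = 1.0 - np.tan(np.radians(slope_deg)) / np.tan(np.radians(angle_of_repose_deg))
        return bagnold_flux(u_star, d_grain) * max(factor, 0.0)

    print(bagnold_flux(u_star=0.4, d_grain=0.3e-3))                      # flat bed
    print(slope_corrected_flux(u_star=0.4, d_grain=0.3e-3, slope_deg=10))  # stoss slope
    ```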

  14. Mesoscale model for fission-induced recrystallization in U-7Mo alloy

    DOE PAGES

    Liang, Linyun; Mei, Zhi -Gang; Kim, Yeon Soo; ...

    2016-08-09

    A mesoscale model is developed by integrating the rate theory and phase-field models and is used to study the fission-induced recrystallization in U-7Mo alloy. The rate theory model is used to predict the dislocation density and the recrystallization nuclei density due to irradiation. The predicted fission rate and temperature dependences of the dislocation density are in good agreement with experimental measurements. This information is used as input for the multiphase phase-field model to investigate the fission-induced recrystallization kinetics. The simulated recrystallization volume fraction and bubble induced swelling agree well with experimental data. The effects of the fission rate, initial grain size, and grain morphology on the recrystallization kinetics are discussed based on an analysis of recrystallization growth rate using the modified Avrami equation. Here, we conclude that the initial microstructure of the U-Mo fuels, especially the grain size, can be used to effectively control the rate of fission-induced recrystallization and therefore swelling.

  15. Quantifying alluvial fan sensitivity to climate in Death Valley, California, from field observations and numerical models

    NASA Astrophysics Data System (ADS)

    Brooke, Sam; Whittaker, Alexander; Armitage, John; D'Arcy, Mitch; Watkins, Stephen

    2017-04-01

    A quantitative understanding of landscape sensitivity to climate change remains a key challenge in the Earth Sciences. The stream-flow deposits of coupled catchment-fan systems offer one way to decode past changes in external boundary conditions as they comprise simple, closed systems that can be represented effectively by numerical models. Here we combine the collection and analysis of grain size data on well-dated alluvial fan surfaces in Death Valley, USA, with numerical modelling to address the extent to which sediment routing systems record high-frequency, high-magnitude climate change. We compile a new database of Holocene and Late-Pleistocene grain size trends from 11 alluvial fans in Death Valley, capturing high-resolution grain size data ranging from the Recent to 100 kyr in age. We hypothesise the observed changes in average surface grain size and fining rate over time are a record of landscape response to glacial-interglacial climatic forcing. With this data we are in a unique position to test the predictions of landscape evolution models and evaluate the extent to which climate change has influenced the volume and calibre of sediment deposited on alluvial fans. To gain insight into our field data and study area, we employ an appropriately-scaled catchment-fan model that calculates an eroded volumetric sediment budget to be deposited in a subsiding basin according to mass balance where grain size trends are predicted by a self-similarity fining model. We use the model to compare predicted trends in alluvial fan stratigraphy as a function of boundary condition change for a range of model parameters and input grain size distributions. Subsequently, we perturb our model with a plausible glacial-interglacial magnitude precipitation change to estimate the requisite sediment flux needed to generate observed field grain size trends in Death Valley. Modelled fluxes are then compared with independent measurements of sediment supply over time. Our results constitute one of the first attempts to combine the detailed collection of alluvial fan grain size data in time and space with coupled catchment-fan models, affording us the means to evaluate how well field and model data can be reconciled for simple sediment routing systems.

  16. Single phase bi-directional AC-DC converter with reduced passive components size and common mode electro-magnetic interference

    DOEpatents

    Mi, Chris; Li, Siqi

    2017-01-31

    A bidirectional AC-DC converter is presented with reduced passive component size and common mode electro-magnetic interference. The converter includes an improved input stage formed by two coupled differential inductors, two coupled common and differential inductors, one differential capacitor and two common mode capacitors. With this input structure, the volume, weight and cost of the input stage can be reduced greatly. Additionally, the input current ripple and common mode electro-magnetic interference can be greatly attenuated, so lower switching frequency can be adopted to achieve higher efficiency.

  17. Particle systems for adaptive, isotropic meshing of CAD models

    PubMed Central

    Levine, Joshua A.; Whitaker, Ross T.

    2012-01-01

    We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181

  18. Soil organic carbon dynamics jointly controlled by climate, carbon inputs, soil properties and soil carbon fractions.

    PubMed

    Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli

    2017-10-01

    Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on SOC change rate (rC, Mg C ha⁻¹ yr⁻¹). Among these variables, we found that the most influential variables on rC were the average C input amount and annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on rC, followed by climate 25% (including precipitation and temperature), soil C pools 24% (including pool size and composition) and soil properties (such as cation exchange capacity, clay content, bulk density) 24%. Path analysis identified a network of intercorrelations of climate, soil properties, C inputs and soil C pools in determining rC. The direct correlation of rC with climate was significantly weakened if removing the effects of soil properties and C pools, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Ignorance of the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from the process-based SOC models. © 2017 John Wiley & Sons Ltd.
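    The "relative influence" attribution quoted above is the kind of output produced by boosted regression trees, where per-feature importances are summed within each factor group. A minimal sketch of that workflow on synthetic data is shown below; the predictors, their grouping, and the synthetic response are illustrative assumptions and do not reproduce the study's dataset or exact method.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 90                                        # one row per hypothetical field trial

    # Hypothetical predictors grouped as in the study: C inputs, climate, C pools, soil properties.
    c_input = rng.gamma(2.0, 2.0, n)              # Mg C ha-1 yr-1
    precip = rng.normal(500.0, 150.0, n)          # mm
    temp = rng.normal(18.0, 3.0, n)               # deg C
    soc0 = rng.normal(40.0, 10.0, n)              # initial SOC stock, Mg C ha-1
    clay = rng.uniform(5.0, 50.0, n)              # %
    X = np.column_stack([c_input, precip, temp, soc0, clay])

    # Synthetic SOC change rate with noise (illustrative only).
    r_c = 0.1 * c_input + 0.002 * precip - 0.01 * temp - 0.01 * soc0 + rng.normal(0, 0.2, n)

    model = GradientBoostingRegressor(random_state=0).fit(X, r_c)
    groups = {"C inputs": [0], "climate": [1, 2], "C pools": [3], "soil properties": [4]}
    imp = model.feature_importances_
    for name, idx in groups.items():
        print(f"{name}: {100 * imp[idx].sum():.1f}% relative influence")
    ```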

  19. Quantum Monte Carlo calculations of two neutrons in finite volume

    DOE PAGES

    Klos, P.; Lynn, J. E.; Tews, I.; ...

    2016-11-18

    Ab initio calculations provide direct access to the properties of pure neutron systems that are challenging to study experimentally. In addition to their importance for fundamental physics, their properties are required as input for effective field theories of the strong interaction. In this work, we perform auxiliary-field diffusion Monte Carlo calculations of the ground state and first excited state of two neutrons in a finite box, considering a simple contact potential as well as chiral effective field theory interactions. We compare the results against exact diagonalizations and present a detailed analysis of the finite-volume effects, whose understanding is crucial for determining observables from the calculated energies. Finally, using the Lüscher formula, we extract the low-energy S-wave scattering parameters from ground- and excited-state energies for different box sizes.

  20. Rate-Compatible LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-blocksize submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation

  1. Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data

    NASA Astrophysics Data System (ADS)

    Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    The most productive scale size (MPSS) is a measurement that states how resources should be organized and utilized to achieve optimal results. The most productive scale size (MPSS) can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate the most productive scale size (MPSS), each decision making unit (DMU) should pay attention to its level of input-output efficiency. With the data envelopment analysis (DEA) method, a DMU can identify the units used as references, which helps to find the causes of and remedies for inefficiencies and to optimize productivity; this is the main advantage in managerial applications. Therefore, data envelopment analysis (DEA) is chosen for estimating the most productive scale size (MPSS), focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating the most productive scale size (MPSS) with integer-valued input data in the data envelopment analysis (DEA) method.
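    For readers unfamiliar with the underlying linear programs, the sketch below solves the input-oriented CCR and BCC envelopment models for a toy set of DMUs and flags units whose scale efficiency (CCR score divided by BCC score) equals one, i.e., units operating at MPSS. The data are hypothetical, and the integer restriction on the data discussed in the paper is ignored here (the standard continuous relaxation is solved).

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(X, Y, j0, vrs=False):
        """Input-oriented envelopment efficiency of DMU j0.

        X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
        vrs=False gives the CCR (constant returns) score, vrs=True the BCC score."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]               # minimize theta
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[:, j0]                   # -theta * x_i0 + sum_j lambda_j x_ij <= 0
        A_ub[:m, 1:] = X
        A_ub[m:, 1:] = -Y                         # sum_j lambda_j y_rj >= y_r0
        b_ub[m:] = -Y[:, j0]
        A_eq = b_eq = None
        if vrs:                                   # BCC adds the convexity constraint
            A_eq, b_eq = np.r_[0.0, np.ones(n)][None, :], [1.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 1), method="highs")
        return res.x[0]

    # Toy data: 2 inputs, 1 output, 4 DMUs with integer values.
    X = np.array([[2, 4, 6, 3], [3, 2, 5, 4]], dtype=float)
    Y = np.array([[1, 2, 3, 2]], dtype=float)
    for j in range(4):
        ccr = dea_efficiency(X, Y, j)
        bcc = dea_efficiency(X, Y, j, vrs=True)
        flag = "  <- operating at MPSS" if abs(ccr - 1) < 1e-6 and abs(ccr / bcc - 1) < 1e-6 else ""
        print(f"DMU {j}: CCR={ccr:.3f} BCC={bcc:.3f} scale efficiency={ccr / bcc:.3f}{flag}")
    ```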

  2. Input-independent, Scalable and Fast String Matching on the Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Chavarría-Miranda, Daniel; Maschhoff, Kristyn J

    2009-05-25

    String searching is at the core of many security and network applications like search engines, intrusion detection systems, virus scanners and spam filters. The growing size of on-line content and the increasing wire speeds push the need for fast, and often real-time, string searching solutions. For these conditions, many software implementations (if not all) targeting conventional cache-based microprocessors do not perform well. They either exhibit overall low performance or exhibit highly variable performance depending on the types of inputs. For this reason, real-time state of the art solutions rely on the use of either custom hardware or Field-Programmable Gate Arrays (FPGAs) at the expense of overall system flexibility and programmability. This paper presents a software based implementation of the Aho-Corasick string searching algorithm on the Cray XMT multithreaded shared memory machine. Our solution relies on the particular features of the XMT architecture and on several algorithmic strategies: it is fast, scalable and its performance is virtually content-independent. On a 128-processor Cray XMT, it reaches a scanning speed of ≈ 28 Gbps with a performance variability below 10%. In the 10 Gbps performance range, variability is below 2.5%. By comparison, an Intel dual-socket, 8-core system running at 2.66 GHz achieves a peak performance which varies from 500 Mbps to 10 Gbps depending on the type of input and dictionary size.
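    For reference, the Aho-Corasick algorithm mentioned above builds a trie of all dictionary patterns, adds failure links, and then scans the text in a single pass whose cost per character does not depend on the number of patterns. The minimal Python sketch below illustrates the data structure; it is purely didactic and unrelated to the Cray XMT implementation described in the paper.

    ```python
    from collections import deque

    class AhoCorasick:
        """Minimal Aho-Corasick automaton: a trie of the patterns plus failure links."""

        def __init__(self, patterns):
            self.goto = [{}]      # goto[state][char] -> next state
            self.fail = [0]       # failure link per state
            self.out = [[]]       # patterns ending at each state
            for pat in patterns:
                self._insert(pat)
            self._build()

        def _insert(self, pat):
            state = 0
            for ch in pat:
                if ch not in self.goto[state]:
                    self.goto.append({})
                    self.fail.append(0)
                    self.out.append([])
                    self.goto[state][ch] = len(self.goto) - 1
                state = self.goto[state][ch]
            self.out[state].append(pat)

        def _build(self):
            queue = deque(self.goto[0].values())      # depth-1 states keep fail = 0
            while queue:
                state = queue.popleft()
                for ch, nxt in self.goto[state].items():
                    queue.append(nxt)
                    f = self.fail[state]
                    while f and ch not in self.goto[f]:
                        f = self.fail[f]
                    self.fail[nxt] = self.goto[f].get(ch, 0)
                    self.out[nxt] += self.out[self.fail[nxt]]

        def search(self, text):
            """Yield (start_index, pattern) for every match in text."""
            state = 0
            for i, ch in enumerate(text):
                while state and ch not in self.goto[state]:
                    state = self.fail[state]
                state = self.goto[state].get(ch, 0)
                for pat in self.out[state]:
                    yield i - len(pat) + 1, pat

    ac = AhoCorasick(["he", "she", "his", "hers"])
    print(list(ac.search("ushers")))    # [(1, 'she'), (2, 'he'), (2, 'hers')]
    ```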

  3. Prediction of the thickness of the compensator filter in radiation therapy using computational intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dehlaghi, Vahab; Taghipour, Mostafa; Haghparast, Abbas

    In this study, artificial neural networks (ANNs) and adaptive neuro-fuzzy inference system (ANFIS) are investigated to predict the thickness of the compensator filter in radiation therapy. In the proposed models, the input parameters are field size (S), off-axis distance, and relative dose (D/D0), and the output is the thickness of the compensator. The obtained results show that the proposed ANN and ANFIS models are useful, reliable, and cheap tools to predict the thickness of the compensator filter in intensity-modulated radiation therapy.
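    As an illustration of the general approach (though not of the specific networks or training data used in the study), the sketch below fits a small multilayer perceptron that maps (field size, off-axis distance, relative dose) to compensator thickness. The training set is synthetic, generated from a simple exponential-attenuation assumption with a hypothetical effective attenuation coefficient.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 500

    # Hypothetical samples: field size S (cm), off-axis distance (cm), relative dose D/D0.
    S = rng.uniform(5.0, 20.0, n)
    off_axis = rng.uniform(0.0, 10.0, n)
    rel_dose = rng.uniform(0.5, 1.0, n)
    X = np.column_stack([S, off_axis, rel_dose])

    # Synthetic target: thickness (cm) from exponential attenuation with mu ~ 0.5 cm^-1
    # plus a weak field-size term and measurement noise (illustrative only).
    mu = 0.5
    thickness = -np.log(rel_dose) / mu + 0.01 * S + rng.normal(0.0, 0.02, n)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
    model.fit(X, thickness)
    print(model.predict([[10.0, 2.0, 0.8]]))    # thickness for one (S, off-axis, D/D0) query
    ```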

  4. Focusing properties of arbitrary optical fields combining spiral phase and cylindrically symmetric state of polarization.

    PubMed

    Man, Zhongsheng; Bai, Zhidong; Zhang, Shuoshuo; Li, Jinjian; Li, Xiaoyu; Ge, Xiaolu; Zhang, Yuquan; Fu, Shenggui

    2018-06-01

    The tight focusing properties of optical fields combining a spiral phase and a cylindrically symmetric state of polarization are presented. First, we theoretically analyze the mathematical characterization, Stokes parameters, and Poincaré sphere representations of arbitrary cylindrical vector (CV) vortex beams. Then, based on vector diffraction theory, we derive and build an integrated analytical model to calculate the electromagnetic field and Poynting vector distributions of the input CV vortex beams. The calculations reveal that a generalized CV vortex beam can generate a sharper focal spot than a radially polarized (RP) plane beam in the focal plane. However, the decrease in focal size is accompanied by an elongation of the focus along the optical axis, so there appears to be a trade-off between transverse and axial resolution. In addition, under the precondition that the absolute values of the polarization order and topological charge are equal, a higher-order CV vortex can also achieve a smaller focal size than an RP plane beam. Further, the sidelobe intensity is significantly suppressed. To give a deeper understanding of these peculiar focusing properties, the magnetic field and Poynting vector distributions are also demonstrated in detail. These properties may be helpful in applications such as optical trapping and manipulation of particles and superresolution microscopy imaging.

  5. An extended diffraction tomography method for quantifying structural damage using numerical Green's functions.

    PubMed

    Chan, Eugene; Rose, L R Francis; Wang, Chun H

    2015-05-01

    Existing damage imaging algorithms for detecting and quantifying structural defects, particularly those based on diffraction tomography, assume far-field conditions for the scattered field data. This paper presents a major extension of diffraction tomography that can overcome this limitation and utilises a near-field multi-static data matrix as the input data. This new algorithm, which employs numerical solutions of the dynamic Green's functions, makes it possible to quantitatively image laminar damage even in complex structures for which the dynamic Green's functions are not available analytically. To validate this new method, the numerical Green's functions and the multi-static data matrix for laminar damage in flat and stiffened isotropic plates are first determined using finite element models. Next, these results are time-gated to remove boundary reflections, followed by discrete Fourier transform to obtain the amplitude and phase information for both the baseline (damage-free) and the scattered wave fields. Using these computationally generated results and experimental verification, it is shown that the new imaging algorithm is capable of accurately determining the damage geometry, size and severity for a variety of damage sizes and shapes, including multi-site damage. Some aspects of minimal sensors requirement pertinent to image quality and practical implementation are also briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Series-Connected Buck Boost Regulators

    NASA Technical Reports Server (NTRS)

    Birchenough, Arthur G.

    2005-01-01

    A series-connected buck boost regulator (SCBBR) is an electronic circuit that bucks a power-supply voltage to a lower regulated value or boosts it to a higher regulated value. The concept of the SCBBR is a generalization of the concept of the SCBR, which was reported in "Series-Connected Boost Regulators" (LEW-15918), NASA Tech Briefs, Vol. 23, No. 7 (July 1997), page 42. Relative to prior DC-voltage-regulator concepts, the SCBBR concept can yield significant reductions in weight and increases in power-conversion efficiency in many applications in which input/output voltage ratios are relatively small and isolation is not required, such as solar-array regulation or battery charging with DC-bus regulation. Usually, a DC voltage regulator is designed to include a DC-to-DC converter to reduce its power loss, size, and weight. Advances in components, increases in operating frequencies, and improved circuit topologies have led to continual increases in efficiency and/or decreases in the sizes and weights of DC voltage regulators. The primary source of inefficiency in the DC-to-DC converter portion of a voltage regulator is the conduction loss and, especially at high frequencies, the switching loss. Although improved components and topology can reduce the switching loss, the reduction is limited by the fact that the converter generally switches all the power being regulated. Like the SCBR concept, the SCBBR concept involves a circuit configuration in which only a fraction of the power is switched, so that the switching loss is reduced by an amount that is largely independent of the specific components and circuit topology used. In an SCBBR, the amount of power switched by the DC-to-DC converter is only the amount needed to make up the difference between the input and output bus voltage. The remaining majority of the power passes through the converter without being switched. The weight and power loss of a DC-to-DC converter are determined primarily by the amount of power processed. In the SCBBR, the unswitched majority of the power is passed through with very little power loss, and little if any increase in the sizes of the converter components is needed to enable the components to handle the unswitched power. As a result, the power-conversion efficiency of the regulator can be very high, as shown in the example of Figure 1. A basic SCBBR includes a DC-to-DC converter (see Figure 2). The switches and primary winding of a transformer in the converter are connected across the input bus, while the secondary winding and switches are connected in series with the output bus, so that the output voltage is the sum of the input voltage and the secondary voltage of the converter. In the breadboard SCBBR, the input voltage applied to the primary winding is switched by use of metal oxide/semiconductor field-effect transistors (MOSFETs) in a full bridge circuit; the secondary winding is center-tapped, with two MOSFET switches and diode rectifiers connected in opposed series in each leg. The sets of opposed switches and rectifiers are what enable operation in either a boost or a buck mode. In the boost mode, the input voltage and current and the output voltage and current are all positive; that is, the secondary voltage is added to the input voltage and the net output voltage can be regulated at a value equal to or greater than the input voltage.
In the buck mode, the input voltage is still positive and the current still flows in the same direction in the secondary, but the switches are controlled such that some power flows from the secondary to the primary. The voltage across the secondary and the current into the primary are reversed. The result is that the output voltage is lower than the input voltage, and some power is recirculated from the converter secondary back to the input. Quantitatively, the advantage of an SCBBR is a direct function of the regulation range required. If, for example, a regulation range of 20 percent is required for a 500-W supply, then it suffices to design the DC-to-DC converter in the SCBBR for a power rating of only 100 W. The switching loss and size are much smaller than those of a conventional regulator that must be rated for switching of all 500 W. The reduction in size and the increase in efficiency are not directly proportional to the switched-power ratio of 5:1 because the additional switches contribute some conduction loss and the input and output filters must be larger than those typically required for a 100-W converter. Nevertheless, the power loss and the size can be much smaller than those of a 500-W converter.
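    The 500-W example above reduces to a one-line calculation: the series converter only processes the make-up voltage times the load current, so its required rating is roughly the regulation range times the output power. A minimal sketch (with hypothetical bus voltages chosen to give a 20 percent range) follows.

    ```python
    def scbbr_switched_power(p_out_w, v_in, v_out):
        """Power the series converter must actually process: only the make-up voltage
        (v_out - v_in) times the load current is switched; the rest passes straight through."""
        i_out = p_out_w / v_out
        return abs(v_out - v_in) * i_out

    # Example from the article: a 500 W supply with a 20 percent regulation range
    # needs a converter rated for only about 100 W (bus voltages here are hypothetical).
    print(scbbr_switched_power(500.0, v_in=100.0, v_out=125.0))   # -> 100.0 W
    ```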

  7. Computer programs for computing particle-size statistics of fluvial sediments

    USGS Publications Warehouse

    Stevens, H.H.; Hubbell, D.W.

    1986-01-01

    Two versions of computer programs for inputting data and computing particle-size statistics of fluvial sediments are presented. The FORTRAN 77 language versions are for use on the Prime computer, and the BASIC language versions are for use on microcomputers. The size-statistics programs compute Inman, Trask, and Folk statistical parameters from phi values and sizes determined for 10 specified percent-finer values from the input size and percent-finer data. The programs also determine the percentage gravel, sand, silt, and clay, and the Meyer-Peter effective diameter. Documentation and listings for both versions of the programs are included. (Author's abstract)
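    The graphic size statistics named above are simple combinations of phi values read from the cumulative size distribution at fixed percentiles. The sketch below shows the Inman and Folk-and-Ward measures computed from interpolated phi percentiles; it omits the Trask parameters, the gravel/sand/silt/clay percentages, and the Meyer-Peter effective diameter that the USGS programs also report, and the example sieve data are invented.

    ```python
    import numpy as np

    def size_statistics(sizes_mm, percent_finer):
        """Inman and Folk-and-Ward graphic statistics from a cumulative size distribution.

        phi = -log2(d_mm). Percentile phi values (phi5 ... phi95) are interpolated from
        the cumulative percent-coarser curve and combined with the standard formulas."""
        phi = -np.log2(np.asarray(sizes_mm, dtype=float))
        coarser = 100.0 - np.asarray(percent_finer, dtype=float)
        order = np.argsort(coarser)
        coarser, phi = coarser[order], phi[order]
        p = {q: float(np.interp(q, coarser, phi)) for q in (5, 16, 25, 50, 75, 84, 95)}
        return {
            "inman_median_phi": p[50],
            "inman_mean_phi": (p[16] + p[84]) / 2.0,
            "inman_sorting_phi": (p[84] - p[16]) / 2.0,
            "folk_mean_phi": (p[16] + p[50] + p[84]) / 3.0,
            "folk_sorting_phi": (p[84] - p[16]) / 4.0 + (p[95] - p[5]) / 6.6,
            "folk_skewness": (p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                             + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])),
            "folk_kurtosis": (p[95] - p[5]) / (2.44 * (p[75] - p[25])),
        }

    # Example sieve data: particle size (mm) and cumulative percent finer (invented).
    sizes = [4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.062]
    pct_finer = [100, 95, 80, 55, 30, 12, 4]
    print(size_statistics(sizes, pct_finer))
    ```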

  8. Integration of measurements with atmospheric dispersion models: Source term estimation for dispersal of (239)Pu due to non-nuclear detonation of high explosive

    NASA Astrophysics Data System (ADS)

    Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.

    1992-10-01

    The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling. This results in a more accurate particle-size distribution and particle injection height estimation when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation on the particle size as well as a slightly weaker particle to cloud coupling than previously reported.
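    The estimation scheme described above amounts to wrapping a dispersion-model run in a bounded non-linear least squares fit of the source-term parameters. The sketch below shows the structure of such a fit with scipy, using a toy surrogate in place of the ADPIC/ARAC model and fitting in log-concentration space; the surrogate, parameter values, and receptor layout are all hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def forward_model(params, receptor_distances_km):
        """Stand-in for a dispersion-model run: predicted concentration at each receptor
        as a function of the source-term parameters. Purely illustrative surrogate; in
        practice this call would wrap a full three-dimensional dispersion simulation."""
        mmad_um, gsd, coupling = params
        settling = np.exp(-receptor_distances_km * mmad_um / (50.0 * gsd))
        return coupling * 1e3 * settling / receptor_distances_km**1.5

    distances = np.array([2.0, 5.0, 10.0, 20.0, 40.0])       # receptor distances, km
    true_params = (20.0, 2.5, 0.6)                           # hypothetical "truth"
    noise = np.exp(0.2 * np.random.default_rng(2).normal(size=distances.size))
    measured = forward_model(true_params, distances) * noise

    def residuals(params):
        # Fit in log space so receptors spanning orders of magnitude are weighted evenly.
        return np.log(forward_model(params, distances)) - np.log(measured)

    fit = least_squares(residuals, x0=[10.0, 2.0, 0.5],
                        bounds=([1.0, 1.2, 0.1], [100.0, 4.0, 1.0]))
    print("estimated (MMAD um, GSD, coupling):", fit.x)
    ```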

  9. Experience-dependent shaping of hippocampal CA1 intracellular activity in novel and familiar environments

    PubMed Central

    Cohen, Jeremy D; Bolstad, Mark; Lee, Albert K

    2017-01-01

    The hippocampus is critical for producing stable representations of familiar spaces. How these representations arise is poorly understood, largely because changes to hippocampal inputs have not been measured during spatial learning. Here, using intracellular recording, we monitored inputs and plasticity-inducing complex spikes (CSs) in CA1 neurons while mice explored novel and familiar virtual environments. Inputs driving place field spiking increased in amplitude – often suddenly – during novel environment exploration. However, these increases were not sustained in familiar environments. Rather, the spatial tuning of inputs became increasingly similar across repeated traversals of the environment with experience – both within fields and throughout the whole environment. In novel environments, CSs were not necessary for place field formation. Our findings support a model in which initial inhomogeneities in inputs are amplified to produce robust place field activity, then plasticity refines this representation into one with less strongly modulated, but more stable, inputs for long-term storage. DOI: http://dx.doi.org/10.7554/eLife.23040.001 PMID:28742496

  10. Global Play Evaluation TOol (GPETO) assists Mobil explorationists with play evaluation and ranking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Withers, K.D.; Brown, P.J.; Clary, R.C.

    1996-01-01

    GPETO is a relational database and application containing information about over 2500 plays around the world. It also has information about approximately 30,000 fields and the related provinces. The GPETO application has been developed to assist Mobil geoscientists, planners and managers with global play evaluations and portfolio management. The main features of GPETO allow users to: (1) view or modify play and province information, (2) composite user-specified plays in a statistically valid way, (3) view threshold information for plays and provinces, including curves, (4) examine field size data, including discovered, future and ultimate field sizes for provinces and plays, (5) use a database browser to look up and validate data by geographic, volumetric, technical and business criteria, (6) display ranged values and graphical displays of future and ultimate potential for plays, provinces, countries, and continents, (7) run, view and print a number of informative reports containing input and output data from the system. The GPETO application is written in C and Fortran, runs on a UNIX-based system, utilizes an Ingres database, and was implemented using a 3-tiered client/server architecture.

  11. Global Play Evaluation TOol (GPETO) assists Mobil explorationists with play evaluation and ranking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Withers, K.D.; Brown, P.J.; Clary, R.C.

    1996-12-31

    GPETO is a relational database and application containing information about over 2500 plays around the world. It also has information about approximately 30,000 fields and the related provinces. The GPETO application has been developed to assist Mobil geoscientists, planners and managers with global play evaluations and portfolio management. The main features of GPETO allow users to: (1) view or modify play and province information, (2) composite user-specified plays in a statistically valid way, (3) view threshold information for plays and provinces, including curves, (4) examine field size data, including discovered, future and ultimate field sizes for provinces and plays, (5) use a database browser to look up and validate data by geographic, volumetric, technical and business criteria, (6) display ranged values and graphical displays of future and ultimate potential for plays, provinces, countries, and continents, (7) run, view and print a number of informative reports containing input and output data from the system. The GPETO application is written in C and Fortran, runs on a UNIX-based system, utilizes an Ingres database, and was implemented using a 3-tiered client/server architecture.

  12. Computing Gravitational Fields of Finite-Sized Bodies

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco

    2005-01-01

    A computer program utilizes the classical theory of gravitation, implemented by means of the finite-element method, to calculate the near gravitational fields of bodies of arbitrary size, shape, and mass distribution. The program was developed for application to a spacecraft and to floating proof masses and associated equipment carried by the spacecraft for detecting gravitational waves. The program can calculate steady or time-dependent gravitational forces, moments, and gradients thereof. Bodies external to a proof mass can be moving around the proof mass and/or deformed under thermoelastic loads. An arbitrarily shaped proof mass is represented by a collection of parallelepiped elements. The gravitational force and moment acting on each parallelepiped element of a proof mass, including those attributable to the self-gravitational field of the proof mass, are computed exactly from the closed-form equation for the gravitational potential of a parallelepiped. The gravitational field of an arbitrary distribution of mass external to a proof mass can be calculated either by summing the fields of suitably many point masses or by higher-order Gauss-Legendre integration over all elements surrounding the proof mass that are part of a finite-element mesh. This computer program is compatible with more general finite-element codes, such as NASTRAN, because it is configured to read a generic input data file, containing the detailed description of the finite-element mesh.
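
    For the simplest of the two options mentioned above (summing the fields of many point masses), a minimal Python sketch is given below. The external mass positions, masses, and evaluation point are illustrative placeholders, not the program's actual input format.

        # Illustrative point-mass summation of the Newtonian gravitational field.
        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def field_from_point_masses(r_eval, positions, masses):
            """Sum the gravitational acceleration at r_eval from discrete point masses."""
            g = np.zeros(3)
            for r_i, m_i in zip(positions, masses):
                d = r_i - r_eval
                g += G * m_i * d / np.linalg.norm(d) ** 3
            return g

        # Hypothetical external elements (e.g., lumped spacecraft components).
        positions = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, -1.5]])
        masses = np.array([50.0, 120.0, 80.0])  # kg

        print(field_from_point_masses(np.zeros(3), positions, masses))  # m s^-2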

  13. Cavity electromagnetically induced transparency with Rydberg atoms

    NASA Astrophysics Data System (ADS)

    Bakar Ali, Abu; Ziauddin

    2018-02-01

    Cavity electromagnetically induced transparency (EIT) is revisited via the input probe field intensity. A strongly interacting Rydberg atomic medium ensemble is considered in a cavity, where atoms behave as superatoms (SAs) under the dipole blockade mechanism. Each atom in the strongly interacting Rydberg atomic medium (87Rb) follows a three-level cascade atomic configuration. A strong control field and a weak probe field are employed in the cavity with the ensemble of Rydberg atoms. The features of the reflected and transmitted probe light are studied under the influence of the input probe field intensity. A transparency peak (cavity EIT) is revealed at a resonance condition for small values of the input probe field intensity. The manipulation of the cavity EIT is reported by tuning the strength of the input probe field intensity. Further, the phase and group delay of the transmitted and reflected probe light are studied. It is found that the group delay and phase of the reflected light are negative, while for the transmitted light they are positive. The magnitude control of the group delay in the transmitted and reflected light is investigated via the input probe field intensity.

  14. Responses of two nonlinear microbial models to warming and increased carbon input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y. P.; Jiang, J.; Chen-Charpentier, Benito

    A number of nonlinear microbial models of soil carbon decomposition have been developed. Some of them have been applied globally but have yet to be shown to realistically represent soil carbon dynamics in the field. A thorough analysis of their key differences is needed to inform future model developments. In this paper, we compare two nonlinear microbial models of soil carbon decomposition: one based on reverse Michaelis–Menten kinetics (model A) and the other on regular Michaelis–Menten kinetics (model B). Using analytic approximations and numerical solutions, we find that the oscillatory responses of carbon pools to a small perturbation in their initial pool sizes dampen faster in model A than in model B. Soil warming always decreases carbon storage in model A, but in model B it predominantly decreases carbon storage in cool regions and increases carbon storage in warm regions. For both models, the CO2 efflux from soil carbon decomposition reaches a maximum value some time after increased carbon input (as in priming experiments). This maximum CO2 efflux (Fmax) decreases with an increase in soil temperature in both models. However, the sensitivity of Fmax to the increased amount of carbon input increases with soil temperature in model A but decreases monotonically with an increase in soil temperature in model B. These differences in the responses to soil warming and carbon input between the two nonlinear models can be used to discern which model is more realistic when compared to results from field or laboratory experiments. Lastly, these insights will contribute to an improved understanding of the significance of soil microbial processes in soil carbon responses to future climate change.
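
    For concreteness, the sketch below contrasts the two decomposition flux forms discussed above: regular Michaelis–Menten kinetics (flux saturates in substrate carbon, model B) and reverse Michaelis–Menten kinetics (flux saturates in microbial biomass, model A). The rate constants, efficiency, carbon input, and initial pools are arbitrary illustrative values, not the parameters of the published models.

        # Toy comparison of regular vs. reverse Michaelis-Menten decomposition kinetics.
        from scipy.integrate import solve_ivp

        Vmax, Km, eps, k_b = 1.0, 250.0, 0.4, 0.2  # max rate, half-saturation, efficiency, turnover
        carbon_input = 1.0                          # external litter input

        def regular_mm(t, y):
            C, B = y                      # substrate carbon, microbial biomass
            F = Vmax * B * C / (Km + C)   # model B: flux saturates in substrate C
            return [carbon_input - F + k_b * B, eps * F - k_b * B]

        def reverse_mm(t, y):
            C, B = y
            F = Vmax * B * C / (Km + B)   # model A: flux saturates in biomass B
            return [carbon_input - F + k_b * B, eps * F - k_b * B]

        y0 = [100.0, 2.0]
        for name, rhs in [("regular MM (model B)", regular_mm), ("reverse MM (model A)", reverse_mm)]:
            sol = solve_ivp(rhs, (0.0, 500.0), y0, rtol=1e-8)
            print(name, "final pools (C, B):", sol.y[:, -1])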

  15. Responses of two nonlinear microbial models to warming and increased carbon input

    DOE PAGES

    Wang, Y. P.; Jiang, J.; Chen-Charpentier, Benito; ...

    2016-02-18

    A number of nonlinear microbial models of soil carbon decomposition have been developed. Some of them have been applied globally but have yet to be shown to realistically represent soil carbon dynamics in the field. A thorough analysis of their key differences is needed to inform future model developments. In this paper, we compare two nonlinear microbial models of soil carbon decomposition: one based on reverse Michaelis–Menten kinetics (model A) and the other on regular Michaelis–Menten kinetics (model B). Using analytic approximations and numerical solutions, we find that the oscillatory responses of carbon pools to a small perturbation in their initial pool sizes dampen faster in model A than in model B. Soil warming always decreases carbon storage in model A, but in model B it predominantly decreases carbon storage in cool regions and increases carbon storage in warm regions. For both models, the CO2 efflux from soil carbon decomposition reaches a maximum value some time after increased carbon input (as in priming experiments). This maximum CO2 efflux (Fmax) decreases with an increase in soil temperature in both models. However, the sensitivity of Fmax to the increased amount of carbon input increases with soil temperature in model A but decreases monotonically with an increase in soil temperature in model B. These differences in the responses to soil warming and carbon input between the two nonlinear models can be used to discern which model is more realistic when compared to results from field or laboratory experiments. Lastly, these insights will contribute to an improved understanding of the significance of soil microbial processes in soil carbon responses to future climate change.

  16. A Theoretical Investigation of the Input Characteristics of a Rectangular Cavity-Backed Slot Antenna

    NASA Technical Reports Server (NTRS)

    Cockrell, C. R.

    1975-01-01

    Equations which represent the magnetic and electric stored energies are derived for an infinite section of rectangular waveguide and a rectangular cavity. These representations, which are referred to as being physically observable, are obtained by considering the difference in the volume integrals appearing in the complex Poynting theorem. It is shown that the physically observable stored energies are determined by the field components that vanish in a reference plane outside the aperture. These physically observable representations are used to compute the input admittance of a rectangular cavity-backed slot antenna in which a single propagating wave is assumed to exist in the cavity. The slot is excited by a voltage source connected across its center; a sinusoidal distribution is assumed in the slot. Input-admittance calculations are compared with measured data. In addition, input-admittance curves as a function of electrical slot length are presented for several cavity sizes. For the rectangular cavity-backed slot antenna, the quality factor and relative bandwidth were computed independently by using these energy relationships. It is shown that the asymptotic relationship which is usually assumed to exist between the quality factor and the reciprocal of the relative bandwidth is equally valid for the rectangular cavity-backed slot antenna.

  17. Investigating Uncertainty in Predicting Carbon Dynamics in North American Biomes: Putting Support-Effect Bias in Perspective

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Brass, Jim (Technical Monitor)

    2001-01-01

    A fundamental strategy in NASA's Earth Observing System's (EOS) monitoring of vegetation and its contribution to the global carbon cycle is to rely on deterministic, process-based ecosystem models to make predictions of carbon flux over large regions. These models are parameterized (that is, the input variables are derived) using remotely sensed images such as those from the Moderate Resolution Imaging Spectroradiometer (MODIS), ground measurements and interpolated maps. Since early applications of these models, investigators have noted that results depend partly on the spatial support of the input variables. In general, the larger the support of the input data, the greater the chance that the effects of important components of the ecosystem will be averaged out. A review of previous work shows that using large supports can cause either positive or negative bias in carbon flux predictions. To put the magnitude and direction of these biases in perspective, we must quantify the range of uncertainty on our best measurements of carbon-related variables made on equivalent areas. In other words, support-effect bias should be placed in the context of prediction uncertainty from other sources. If the range of uncertainty at the smallest support is less than the support-effect bias, more research emphasis should probably be placed on support sizes that are intermediate between those of field measurements and MODIS. If the uncertainty range at the smallest support is larger than the support-effect bias, the accuracy of MODIS-based predictions will be difficult to quantify and more emphasis should be placed on field-scale characterization and sampling. This talk will describe methods to address these issues using a field measurement campaign in North America and "upscaling" using geostatistical estimation and simulation.

  18. Cloud Intrusion Detection and Repair (CIDAR)

    DTIC Science & Technology

    2016-02-01

    form for VLC, Swftools-png2swf, Swftools-jpeg2swf, Dillo and GIMP. The superscript indicates the bit width of each expression atom. “sext(v, w... challenges in input rectification is the need to deal with nested fields. In general, input formats are in tree structures containing arbitrarily... length indicator constraints is challenging, because of the presence of nested fields in hierarchical input format. For example, an integer field may

  19. Total ionizing dose effect in an input/output device for flash memory

    NASA Astrophysics Data System (ADS)

    Liu, Zhang-Li; Hu, Zhi-Yuan; Zhang, Zheng-Xuan; Shao, Hua; Chen, Ming; Bi, Da-Wei; Ning, Bing-Xu; Zou, Shi-Chang

    2011-12-01

    Input/output devices for flash memory are exposed to gamma-ray irradiation. The total ionizing dose is shown to have a great influence on the characteristic degradation of transistors of different sizes. In this paper, we observed a larger increase of off-state leakage in the short-channel device than in the long one. However, a larger threshold voltage shift is observed for the narrow-width device than for the wide one, which is well known as the radiation-induced narrow-channel effect. The radiation-induced charge in the shallow trench isolation oxide influences the electric field of the narrow-channel device. Also, the drain bias dependence of the off-state leakage after irradiation is observed, which is called the radiation-enhanced drain-induced barrier lowering effect. Finally, we found that a substrate bias voltage can suppress the off-state leakage, while leading to a more obvious hump effect.

  20. Electro-optic high voltage sensor

    DOEpatents

    Davidson, James R.; Seifert, Gary D.

    2003-09-16

    A small-sized electro-optic voltage sensor capable of accurate measurement of high voltages without contact with a conductor or voltage source is provided. When placed in the presence of an electric field, the sensor receives an input beam of electromagnetic radiation. A polarization beam displacer separates the input beam into two beams with orthogonal linear polarizations and causes one linearly polarized beam to impinge on a crystal at a desired angle independent of temperature. The Pockels effect elliptically polarizes the beam as it travels through the crystal. A reflector redirects the beam back through the crystal and the beam displacer. On the return path, the polarization beam displacer separates the elliptically polarized beam into two output beams of orthogonal linear polarization. The system may include a detector for converting the output beams into electrical signals and a signal processor for determining the voltage based on an analysis of the output beams.

  1. All-optical differential equation solver with constant-coefficient tunable based on a single microring resonator.

    PubMed

    Yang, Ting; Dong, Jianji; Lu, Liangjun; Zhou, Linjie; Zheng, Aoling; Zhang, Xinliang; Chen, Jianping

    2014-07-04

    Photonic integrated circuits for photonic computing open up the possibility of realizing ultrahigh-speed and ultra-wideband signal processing with compact size and low power consumption. Differential equations model and govern fundamental physical phenomena and engineering systems in virtually any field of science and engineering, such as temperature diffusion processes, physical problems of motion subject to acceleration inputs and frictional forces, and the response of different resistor-capacitor circuits, etc. In this study, we experimentally demonstrate a feasible integrated scheme to solve first-order linear ordinary differential equations with a tunable constant coefficient, based on a single silicon microring resonator. In addition, we analyze the impact of the chirp and pulse width of input signals on the computing deviation. This device can be compatible with electronic technology (typically complementary metal-oxide-semiconductor technology), which may motivate the development of integrated photonic circuits for optical computing.

  2. All-optical differential equation solver with constant-coefficient tunable based on a single microring resonator

    PubMed Central

    Yang, Ting; Dong, Jianji; Lu, Liangjun; Zhou, Linjie; Zheng, Aoling; Zhang, Xinliang; Chen, Jianping

    2014-01-01

    Photonic integrated circuits for photonic computing open up the possibility of realizing ultrahigh-speed and ultra-wideband signal processing with compact size and low power consumption. Differential equations model and govern fundamental physical phenomena and engineering systems in virtually any field of science and engineering, such as temperature diffusion processes, physical problems of motion subject to acceleration inputs and frictional forces, and the response of different resistor-capacitor circuits, etc. In this study, we experimentally demonstrate a feasible integrated scheme to solve first-order linear ordinary differential equations with a tunable constant coefficient, based on a single silicon microring resonator. In addition, we analyze the impact of the chirp and pulse width of input signals on the computing deviation. This device can be compatible with electronic technology (typically complementary metal-oxide-semiconductor technology), which may motivate the development of integrated photonic circuits for optical computing. PMID:24993440

  3. Modified centroid for estimating sand, silt, and clay from soil texture class

    USDA-ARS?s Scientific Manuscript database

    Models that require inputs of soil particle size commonly use soil texture class for input; however, texture classes do not represent the continuum of soil size fractions. Soil texture class and clay percentage are collected as a standard practice for many land management agencies (e.g., NRCS, BLM, ...

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basak, Sushovan, E-mail: sushovanbasak@gmail.com; Das, Hrishikesh, E-mail: hrishichem@gmail.com; Pal, Tapan Kumar, E-mail: tkpal.ju@gmail.com

    In order to meet the demand for lighter and more fuel-efficient vehicles, significant effort is currently focused on substituting aluminum for steel in the car body structure. This creates a major challenge with respect to the joining methods to be used for fabrication, since conventional fusion joining has difficulties owing to the formation of brittle intermetallic phases. In the present study, 2 mm thick AA6061-T6 and 1 mm thick HIF-GA steel sheet are metal inert gas (MIG) brazed with 0.8 mm Al–5Si filler wire under three different heat inputs. The effect of the heat inputs on bead geometry, microstructure and joint properties of MIG-brazed Al-steel joints was studied and characterized by X-ray diffraction, field emission scanning electron microscopy (FESEM), electron probe micro analyzer (EPMA), high resolution transmission electron microscopy (HRTEM) assisted energy-dispersive X-ray spectroscopy (EDS), and selected area diffraction patterns. Finally, the microstructures were correlated with the performance of the joint. The diffusion-induced intermetallic thickness measured from FESEM images and concentration profiles agreed well with the numerically calculated value. HRTEM-assisted EDS was used to identify the large FeAl3-type and small Fe2Al5-type intermetallic compounds at the interface. The growth of these two phases in A2 (heat input: 182 J/mm) is attributed to the slower cooling rate and longer diffusion time (~61 s) along the interface in comparison with A1 (heat input: 155 J/mm), which had a faster cooling rate and shorter diffusion time (~24 s). A joint efficiency as high as 65% of the steel base metal is achieved for A2, which is the optimized parameter in the present study. - Highlights: • AA 6061 and HIF-GA could be successfully joined by MIG brazing. • Intermetallics are exclusively studied and characterized by XRD, FESEM and EPMA. • Whether intermetallic formation by diffusion is worth considering is examined. • HRTEM-EDS and SAD patterns identify the morphologies and sizes of intermetallics. • A compromise concerning formation of IMC is necessary.

  5. Input current shaped ac-to-dc converters

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Input current shaping techniques for ac-to-dc converters were investigated. Input frequencies much higher than normal, up to 20 kHz, were emphasized. Several methods of shaping the input current waveform in ac-to-dc converters were reviewed. The simplest method is the LC filter following the rectifier. The next simplest method is the resistor emulation approach, in which the inductor size is determined by the converter switching frequency and not by the line input frequency. Other methods require complicated switch drive algorithms to construct the input current waveshape. For a high-frequency line input, on the order of 20 kHz, the simple LC filter cannot be discarded so peremptorily, since its inductor size is comparable to that for the resistor emulation method. In fact, since a dc regulator will normally be required after the filter anyway, the total component count is almost the same as for the resistor emulation method, in which the filter is effectively incorporated into the regulator.

  6. Extending Integrate-and-Fire Model Neurons to Account for the Effects of Weak Electric Fields and Input Filtering Mediated by the Dendrite.

    PubMed

    Aspart, Florian; Ladenbauer, Josef; Obermayer, Klaus

    2016-11-01

    Transcranial brain stimulation and evidence of ephaptic coupling have recently sparked strong interest in understanding the effects of weak electric fields on the dynamics of brain networks and of coupled populations of neurons. The collective dynamics of large neuronal populations can be efficiently studied using single-compartment (point) model neurons of the integrate-and-fire (IF) type as their elements. These models, however, lack the dendritic morphology required to biophysically describe the effect of an extracellular electric field on the neuronal membrane voltage. Here, we extend the IF point neuron models to accurately reflect morphology-dependent electric field effects extracted from a canonical spatial "ball-and-stick" (BS) neuron model. Even in the absence of an extracellular field, neuronal morphology by itself strongly affects the cellular response properties. We therefore derive additional components for leaky and nonlinear IF neuron models to reproduce the subthreshold voltage and spiking dynamics of the BS model exposed to both fluctuating somatic and dendritic inputs and an extracellular electric field. We show that an oscillatory electric field causes spike rate resonance, or equivalently, pronounced spike-to-field coherence. Its resonance frequency depends on the location of the synaptic background inputs. For somatic inputs the resonance appears in the beta and gamma frequency range, whereas for distal dendritic inputs it is shifted to even higher frequencies. Irrespective of an external electric field, the presence of a dendritic cable attenuates the subthreshold response at the soma to slowly varying somatic inputs while implementing a low-pass filter for distal dendritic inputs. Our point neuron model extension is straightforward to implement and is computationally much more efficient compared to the original BS model. It is well suited for studying the dynamics of large populations of neurons with heterogeneous dendritic morphology with (and without) the influence of weak external electric fields.
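
    A minimal sketch of the basic idea, a point integrate-and-fire neuron whose membrane equation receives an additional term standing in for the morphology-dependent effect of a weak oscillatory field, is given below. The coupling constant, the absence of any input filtering, and all parameter values are hypothetical simplifications, not the components derived in the paper.

        # Toy leaky integrate-and-fire neuron with an additive oscillatory-field term.
        import numpy as np

        dt, T = 1e-4, 2.0                                   # time step and duration (s)
        tau_m, v_rest, v_th, v_reset = 0.02, -70e-3, -50e-3, -65e-3
        field_amp, field_freq = 1.0, 30.0                   # V/m, Hz
        c_field = 2e-3                                      # hypothetical coupling (V per V/m)

        rng = np.random.default_rng(1)
        t = np.arange(0.0, T, dt)
        drive = 21e-3 + 3e-3 * rng.standard_normal(t.size)  # fluctuating synaptic drive (V)

        v, spikes = v_rest, []
        for k, tk in enumerate(t):
            field = field_amp * np.sin(2 * np.pi * field_freq * tk)
            v += dt * (-(v - v_rest) + drive[k] + c_field * field) / tau_m
            if v >= v_th:                                   # threshold crossing: spike and reset
                spikes.append(tk)
                v = v_reset

        print(f"{len(spikes)} spikes in {T:.1f} s -> rate ~ {len(spikes) / T:.1f} Hz")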

  7. Extending Integrate-and-Fire Model Neurons to Account for the Effects of Weak Electric Fields and Input Filtering Mediated by the Dendrite

    PubMed Central

    Obermayer, Klaus

    2016-01-01

    Transcranial brain stimulation and evidence of ephaptic coupling have recently sparked strong interest in understanding the effects of weak electric fields on the dynamics of brain networks and of coupled populations of neurons. The collective dynamics of large neuronal populations can be efficiently studied using single-compartment (point) model neurons of the integrate-and-fire (IF) type as their elements. These models, however, lack the dendritic morphology required to biophysically describe the effect of an extracellular electric field on the neuronal membrane voltage. Here, we extend the IF point neuron models to accurately reflect morphology-dependent electric field effects extracted from a canonical spatial “ball-and-stick” (BS) neuron model. Even in the absence of an extracellular field, neuronal morphology by itself strongly affects the cellular response properties. We therefore derive additional components for leaky and nonlinear IF neuron models to reproduce the subthreshold voltage and spiking dynamics of the BS model exposed to both fluctuating somatic and dendritic inputs and an extracellular electric field. We show that an oscillatory electric field causes spike rate resonance, or equivalently, pronounced spike-to-field coherence. Its resonance frequency depends on the location of the synaptic background inputs. For somatic inputs the resonance appears in the beta and gamma frequency range, whereas for distal dendritic inputs it is shifted to even higher frequencies. Irrespective of an external electric field, the presence of a dendritic cable attenuates the subthreshold response at the soma to slowly varying somatic inputs while implementing a low-pass filter for distal dendritic inputs. Our point neuron model extension is straightforward to implement and is computationally much more efficient compared to the original BS model. It is well suited for studying the dynamics of large populations of neurons with heterogeneous dendritic morphology with (and without) the influence of weak external electric fields. PMID:27893786

  8. Effect of Welding Heat Input on Microstructure and Texture of Inconel 625 Weld Overlay Studied Using the Electron Backscatter Diffraction Method

    NASA Astrophysics Data System (ADS)

    Kim, Joon-Suk; Lee, Hae-Woo

    2016-12-01

    The grain size and the texture of three specimens prepared at different heat inputs were determined using optical microscopy and the electron backscatter diffraction method of scanning electron microscopy. Each specimen was equally divided into fusion line zone (FLZ), columnar dendrite zone (CDZ), and surface zone (SZ), according to the location of the weld. Fine dendrites were observed in the FLZ, coarse dendrites in the CDZ, and dendrites grew perpendicular to the FLZ and CDZ. As the heat input increased, the melted zone in the vicinity of the FLZ widened due to the higher Fe content. A lower image quality value was observed for the FLZ compared to the other zones. The results of grain size measurement in each zone showed that the grain size of the SZ became larger as the heat input increased. From the inverse pole figure (IPF) map in the normal direction (ND) and the rolling direction (RD), as the heat input increased, a specific orientation was formed. However, a dominant [001] direction was observed in the RD IPF map.

  9. Linking physics with physiology in TMS: a sphere field model to determine the cortical stimulation site in TMS.

    PubMed

    Thielscher, Axel; Kammer, Thomas

    2002-11-01

    A fundamental problem of transcranial magnetic stimulation (TMS) is determining the site and size of the stimulated cortical area. In the motor system, the most common procedure for this is motor mapping. The obtained two-dimensional distribution of coil positions with associated muscle responses is used to calculate a center of gravity on the skull. However, even in motor mapping the exact stimulation site on the cortex is not known and only rough estimates of its size are possible. We report a new method which combines physiological measurements with a physical model used to predict the electric field induced by the TMS coil. In four subjects motor responses in a small hand muscle were mapped with 9-13 stimulation sites at the head perpendicular to the central sulcus in order to keep the induced current direction constant in a given cortical region of interest. Input-output functions from these head locations were used to determine stimulator intensities that elicit half-maximal muscle responses. Based on these stimulator intensities the field distribution on the individual cortical surface was calculated as rendered from anatomical MR data. The region on the cortical surface in which the different stimulation sites produced the same electric field strength (minimal variance, 4.2 +/- 0.8%) was determined as the most likely stimulation site on the cortex. In all subjects, it was located at the lateral part of the hand knob in the motor cortex. Comparisons of model calculations with the solutions obtained in this manner reveal that the stimulated cortex area innervating the target muscle is substantially smaller than the size of the electric field induced by the coil. Our results help to resolve fundamental questions raised by motor mapping studies as well as motor threshold measurements.
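
    The "minimal variance" criterion above can be written down compactly: for each point on the cortical surface, compute the variability of the induced field strength across the different stimulation sites (each at its half-maximal intensity) and take the point where that variability is smallest. The sketch below uses synthetic field values, not MR-derived surfaces or measured input-output functions.

        # Toy minimal-variance search over a cortical surface.
        import numpy as np

        rng = np.random.default_rng(4)
        n_sites, n_vertices = 10, 5000
        # fields[i, j]: induced field strength at surface vertex j for stimulation site i,
        # evaluated at that site's half-maximal stimulator intensity (synthetic here).
        fields = rng.lognormal(mean=0.0, sigma=0.3, size=(n_sites, n_vertices))

        cv = fields.std(axis=0) / fields.mean(axis=0)   # relative variability per vertex
        best = int(np.argmin(cv))
        print(f"most likely stimulation site: vertex {best}, variability {100 * cv[best]:.1f}%")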

  10. Rapid and semi-analytical design and simulation of a toroidal magnet made with YBCO and MgB 2 superconductors

    DOE PAGES

    Dimitrov, I. K.; Zhang, X.; Solovyov, V. F.; ...

    2015-07-07

    Recent advances in second-generation (YBCO) high-temperature superconducting wire could potentially enable the design of super high performance energy storage devices that combine the high energy density of chemical storage with the high power of superconducting magnetic storage. However, the high aspect ratio and the considerable filament size of these wires require the concomitant development of dedicated optimization methods that account for the critical current density in type-II superconductors. In this study, we report on the novel application and results of a CPU-efficient semianalytical computer code based on the Radia 3-D magnetostatics software package. Our algorithm is used to simulate and optimize the energy density of a superconducting magnetic energy storage device model, based on design constraints such as overall size and number of coils. The rapid performance of the code pivots on analytical calculations of the magnetic field based on an efficient implementation of the Biot-Savart law for a large variety of 3-D “base” geometries in the Radia package. The significantly reduced CPU time and simple data input, in conjunction with the consideration of realistic input variables such as material-specific, temperature- and magnetic-field-dependent critical current densities, have enabled the Radia-based algorithm to outperform finite-element approaches in CPU time at the same accuracy levels. Comparative simulations of MgB2 and YBCO-based devices are performed at 4.2 K, in order to ascertain the realistic efficiency of the design configurations.
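
    A minimal numerical Biot-Savart sketch for a single circular loop, the kind of "base" geometry that the analytic approach above handles in closed form, is shown below. The loop radius, current, and evaluation point are placeholders; the Radia-based optimization itself is not reproduced here.

        # Numerical Biot-Savart integration for one circular current loop.
        import numpy as np

        mu0 = 4e-7 * np.pi

        def loop_field(r_eval, radius=0.5, current=100.0, n_seg=2000):
            """Magnetic field (T) at r_eval from a circular loop lying in the z = 0 plane."""
            phi = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
            pts = np.stack([radius * np.cos(phi), radius * np.sin(phi), np.zeros_like(phi)], axis=1)
            dl = np.stack([-radius * np.sin(phi), radius * np.cos(phi), np.zeros_like(phi)], axis=1)
            dl *= 2.0 * np.pi / n_seg                      # arc length of each segment
            rvec = r_eval - pts
            rnorm = np.linalg.norm(rvec, axis=1, keepdims=True)
            dB = mu0 / (4.0 * np.pi) * current * np.cross(dl, rvec) / rnorm ** 3
            return dB.sum(axis=0)

        # Sanity check at the loop center: B_z should equal mu0 * I / (2 * R).
        print(loop_field(np.zeros(3)), mu0 * 100.0 / (2 * 0.5))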

  11. Solid-state 13C NMR experiments reveal effects of aggregate size on the chemical composition of particulate organic matter in grazed steppe soils

    NASA Astrophysics Data System (ADS)

    Steffens, M.; Kölbl, A.; Kögel-Knabner, I.

    2009-04-01

    Grazing is one of the most important factors that may reduce soil organic matter (SOM) stocks and subsequently deteriorate aggregate stability in grassland topsoils. Land use management and grazing reduction are assumed to increase the input of OM, improve soil aggregation and change the species composition of the vegetation (which changes the depth of OM input). Many studies have evaluated the impact of grazing cessation on SOM quantity, but to date little is known about the impact of grazing cessation on the chemical quality of SOM in density fractions, aggregate size classes and different horizons. The central aim of this study was to analyse the quality of SOM fractions in differently sized aggregates and horizons as affected by increased inputs of organic matter due to grazing exclusion. We applied a combined aggregate size, density and particle size fractionation procedure to sandy steppe topsoils with different organic matter inputs due to different grazing intensities (continuously grazed = Cg, winter grazing = Wg, ungrazed since 1999 = Ug99, ungrazed since 1979 = Ug79). Three different particulate organic matter (POM) fractions (free POM, in-aggregate occluded POM and small in-aggregate occluded POM) and seven mineral-associated organic matter fractions were separated for each of three aggregate size classes (coarse = 2000-6300 μm, medium = 630-2000 μm and fine =

  12. Connections of cat auditory cortex: III. Corticocortical system.

    PubMed

    Lee, Charles C; Winer, Jeffery A

    2008-04-20

    The mammalian auditory cortex (AC) is essential for computing the source and decoding the information contained in sound. Knowledge of AC corticocortical connections is modest other than in the primary auditory regions, nor is there an anatomical framework in the cat for understanding the patterns of connections among the many auditory areas. To address this issue we investigated cat AC connectivity in 13 auditory regions. Retrograde tracers were injected in the same area or in different areas to reveal the areal and laminar sources of convergent input to each region. Architectonic borders were established in Nissl and SMI-32 immunostained material. We assessed the topography, convergence, and divergence of the labeling. Intrinsic input constituted >50% of the projection cells in each area, and extrinsic inputs were strongest from functionally related areas. Each area received significant convergent ipsilateral input from several fields (5 to 8; mean 6). These varied in their laminar origin and projection density. Major extrinsic projections were preferentially from areas of the same functional type (tonotopic to tonotopic, nontonotopic to nontonotopic, limbic-related to limbic-related, multisensory-to-multisensory), while smaller projections link areas belonging to different groups. Branched projections between areas were <2% with deposits of two tracers in an area or in different areas. All extrinsic projections to each area were highly and equally topographic and clustered. Intrinsic input arose from all layers except layer I, and extrinsic input had unique, area-specific infragranular and supragranular origins. The many areal and laminar sources of input may contribute to the complexity of physiological responses in AC and suggest that many projections of modest size converge within each area rather than a simpler area-to-area serial or hierarchical pattern of corticocortical connectivity. (c) 2008 Wiley-Liss, Inc.

  13. A Dielectric Rod Antenna for Picosecond Pulse Stimulation of Neurological Tissue

    PubMed Central

    Petrella, Ross A.; Schoenbach, Karl H.; Xiao, Shu

    2016-01-01

    A dielectrically loaded wideband rod antenna has been studied as a pulse delivery system to subcutaneous tissues. Simulation results for an applied 100 ps electrical pulse show that it can generate the critical electric field for biological effects, such as brain stimulation, at a range of several centimeters. In order to reach the critical electric field for biological effects, which is approximately 20 kV/cm, at a depth of 2 cm, the input voltage needs to be 175 kV. The electric field spot size in the brain at this position is approximately 1 cm². Experimental studies in free space with a conical antenna (part of the antenna system) with aluminum nitride as the dielectric have confirmed the accuracy of the simulation. These results set the foundation for high-voltage in situ experiments on the complete antenna system and the delivery of pulses to biological tissue. PMID:27563160

  14. Transport and retention of multi-walled carbon nanotubes in saturated porous media: Effects of input concentration and grain size

    USDA-ARS?s Scientific Manuscript database

    Water-saturated column experiments were conducted to investigate the effect of input concentration (Co) and sand grain size on the transport and retention of low concentrations (1, 0.01, and 0.005 mg L-1) of functionalized 14C-labeled multi-walled carbon nanotubes (MWCNT) under repulsive electrostat...

  15. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large-block input and small-block output was used to extract roads. To reflect the complex road characteristics in the study area, a deep convolutional neural network, VGG19, was used for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes and the resulting extraction quality, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multispectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve accuracy to some extent. This paper also gives advice on the choice of input block size and output block size.
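
    The large-input/small-output patching and model-averaging idea can be sketched as below. The per-block "models" are placeholder functions standing in for trained networks such as VGG19, and the block sizes are arbitrary.

        # Toy large-block input / small-block output prediction with model averaging.
        import numpy as np

        IN, OUT = 64, 16                       # input block size, centered output block size

        def predict_block(model_seed, block):
            """Placeholder road-probability map for the central OUT x OUT area of a block."""
            rng = np.random.default_rng(model_seed)
            return rng.random((OUT, OUT))

        def extract_roads(image, model_seeds=(0, 1, 2)):
            h, w = image.shape
            pad = (IN - OUT) // 2
            padded = np.pad(image, pad, mode="reflect")
            out = np.zeros((h, w))
            for y in range(0, h, OUT):
                for x in range(0, w, OUT):
                    block = padded[y:y + IN, x:x + IN]
                    # Average the votes of several models for the central output block.
                    votes = np.mean([predict_block(s, block) for s in model_seeds], axis=0)
                    out[y:y + OUT, x:x + OUT] = votes
            return out > 0.5                   # binary road mask

        mask = extract_roads(np.zeros((128, 128)))
        print(mask.shape, mask.mean())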

  16. Input Range Testing for the General Mission Analysis Tool (GMAT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

    This document contains a test plan for testing input values to the General Mission Analysis Tool (GMAT). The plan includes four primary types of information, which rigorously define all tests that should be performed to validate that GMAT will accept allowable inputs and deny disallowed inputs. The first is a complete list of all allowed object fields in GMAT. The second type of information is test input to be attempted for each field. The third type of information is allowable input values for all object fields in GMAT. The final piece of information is how GMAT should respond to both valid and invalid information. It is VERY important to note that the tests below must be performed for both the Graphical User Interface and the script! The examples are illustrated from a scripting perspective, because it is simpler to write up. However, the tests must be performed for both interfaces to GMAT.
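
    One way to drive such field-by-field checks is from a table of trial values and expected outcomes, as in the minimal sketch below. The field names, allowed ranges, and validator are hypothetical placeholders, not GMAT's actual object fields or interfaces.

        # Table-driven sketch of input-range testing: accept allowable values, deny others.
        ALLOWED_RANGES = {"Spacecraft.DryMass": (0.0, 1.0e6), "Spacecraft.SMA": (1.0, 1.0e9)}

        def validate(field, value):
            lo, hi = ALLOWED_RANGES[field]
            return isinstance(value, (int, float)) and lo <= value <= hi

        TEST_CASES = [
            ("Spacecraft.DryMass", 850.0, True),     # nominal value: accept
            ("Spacecraft.DryMass", -1.0, False),     # below range: deny
            ("Spacecraft.SMA", 7000.0, True),
            ("Spacecraft.SMA", "abc", False),        # wrong type: deny
        ]

        for field, value, should_accept in TEST_CASES:
            ok = validate(field, value) == should_accept
            print(f"{'PASS' if ok else 'FAIL'}: {field} = {value!r}")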

  17. The biological function of consciousness

    PubMed Central

    Earl, Brian

    2014-01-01

    This research is an investigation of whether consciousness—one's ongoing experience—influences one's behavior and, if so, how. Analysis of the components, structure, properties, and temporal sequences of consciousness has established that, (1) contrary to one's intuitive understanding, consciousness does not have an active, executive role in determining behavior; (2) consciousness does have a biological function; and (3) consciousness is solely information in various forms. Consciousness is associated with a flexible response mechanism (FRM) for decision-making, planning, and generally responding in nonautomatic ways. The FRM generates responses by manipulating information and, to function effectively, its data input must be restricted to task-relevant information. The properties of consciousness correspond to the various input requirements of the FRM; and when important information is missing from consciousness, functions of the FRM are adversely affected; both of which indicate that consciousness is the input data to the FRM. Qualitative and quantitative information (shape, size, location, etc.) are incorporated into the input data by a qualia array of colors, sounds, and so on, which makes the input conscious. This view of the biological function of consciousness provides an explanation why we have experiences; why we have emotional and other feelings, and why their loss is associated with poor decision-making; why blindsight patients do not spontaneously initiate responses to events in their blind field; why counter-habitual actions are only possible when the intended action is in mind; and the reason for inattentional blindness. PMID:25140159

  18. Energy Input Flux in the Global Quiet-Sun Corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mac Cormack, Cecilia; Vásquez, Alberto M.; López Fuentes, Marcelo

    We present first results of a novel technique that provides, for the first time, constraints on the energy input flux at the coronal base (r ∼ 1.025 R⊙) of the quiet Sun at a global scale. By combining differential emission measure tomography of EUV images with global models of the coronal magnetic field, we estimate the energy input flux at the coronal base that is required to maintain thermodynamically stable structures. The technique is described in detail and first applied to data provided by the Extreme Ultraviolet Imager instrument, on board the Solar TErrestrial RElations Observatory mission, and the Atmospheric Imaging Assembly instrument, on board the Solar Dynamics Observatory mission, for two solar rotations with different levels of activity. Our analysis indicates that the typical energy input flux at the coronal base of magnetic loops in the quiet Sun is in the range ∼0.5–2.0 × 10^5 erg s^-1 cm^-2, depending on the structure size and level of activity. A large fraction of this energy input, or even its totality, could be accounted for by Alfvén waves, as shown by recent independent observational estimates derived from determinations of the non-thermal broadening of spectral lines in the coronal base of quiet-Sun regions. This new tomography product will be useful for the validation of coronal heating models in magnetohydrodynamic simulations of the global corona.

  19. Tungsten Carbide Grain Size Computation for WC-Co Dissimilar Welds

    NASA Astrophysics Data System (ADS)

    Zhou, Dongran; Cui, Haichao; Xu, Peiquan; Lu, Fenggui

    2016-06-01

    A "two-step" image processing method based on electron backscatter diffraction in scanning electron microscopy was used to compute the tungsten carbide (WC) grain size distribution for tungsten inert gas (TIG) welds and laser welds. Twenty-four images were collected on randomly set fields per sample located at the top, middle, and bottom of a cross-sectional micrograph. Each field contained 500 to 1500 WC grains. The images were recognized through clustering-based image segmentation and WC grain growth recognition. According to the WC grain size computation and experiments, a simple WC-WC interaction model was developed to explain the WC dissolution, grain growth, and aggregation in welded joints. The WC-WC interaction and blunt corners were characterized using scanning and transmission electron microscopy. The WC grain size distribution and the effects of heat input E on grain size distribution for the laser samples were discussed. The results indicate that (1) the grain size distribution follows a Gaussian distribution. Grain sizes at the top of the weld were larger than those near the middle and weld root because of power attenuation. (2) Significant WC grain growth occurred during welding as observed in the as-welded micrographs. The average grain size was 11.47 μm in the TIG samples, which was much larger than that in base metal 1 (BM1 2.13 μm). The grain size distribution curves for the TIG samples revealed a broad particle size distribution without fine grains. The average grain size (1.59 μm) in laser samples was larger than that in base metal 2 (BM2 1.01 μm). (3) WC-WC interaction exhibited complex plane, edge, and blunt corner characteristics during grain growth. A WC ( { 1 {bar{{1}}}00} ) to WC ( {0 1 1 {bar{{0}}}} ) edge disappeared and became a blunt plane WC ( { 10 1 {bar{{0}}}} ) , several grains with two- or three-sided planes and edges disappeared into a multi-edge, and a WC-WC merged.

  20. Impacts of field of view configuration of Cross-track Infrared Sounder on clear-sky observations.

    PubMed

    Wang, Likun; Chen, Yong; Han, Yong

    2016-09-01

    Hyperspectral infrared radiance measurements from satellite sensors contain valuable information on atmospheric temperature and humidity profiles and greenhouse gases, and therefore are directly assimilated into numerical weather prediction (NWP) models as inputs for weather forecasting. However, data assimilation in current operational NWP models still mainly relies on cloud-free observations due to the challenge of simulating cloud-contaminated radiances when using hyperspectral radiances. The limited spatial coverage of the 3×3 fields of view (FOVs) in one field of regard (FOR) (i.e., the spatial gap among FOVs), as well as the relatively large footprint size (14 km) of the current Cross-track Infrared Sounder (CrIS) instruments, limits the amount of clear-sky observations. This study explores the potential impacts of future CrIS FOV configurations (including FOV size and spatial coverage) on the amount of clear-sky observations through simulation experiments. The radiance measurements and cloud mask (VCM) products from the Visible Infrared Imaging Radiometer Suite (VIIRS) are used to simulate CrIS clear-sky observations under different FOV configurations. The results indicate that, given the same FOV coverage (e.g., 3×3), the percentage of clear-sky FOVs and the percentage of clear-sky FORs (that contain at least one clear-sky FOV) both increase as the FOV size decreases. In particular, if the CrIS FOV size were reduced from 14 km to 7 km, the percentage of clear-sky FOVs would increase from 9.02% to 13.51% and the percentage of clear-sky FORs would increase from 18.24% to 27.51%. Given the same FOV size but with increasing FOV coverage in each FOR, the number of clear-sky FOV observations increases proportionally with the number of sampling FOVs. Both reducing the FOV size and increasing the FOV coverage can result in more clear-sky FORs, which benefits data utilization in NWP data assimilation.
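
    The bookkeeping behind the quoted percentages can be sketched as follows: given a clear/cloudy flag for every FOV, count the fraction of clear FOVs and the fraction of FORs containing at least one clear FOV. The random "cloud mask" below is a placeholder, not VIIRS data.

        # Toy computation of clear-sky FOV and FOR percentages from a per-FOV cloud flag.
        import numpy as np

        rng = np.random.default_rng(3)
        n_for, fovs_per_for = 10000, 9                      # 3 x 3 FOVs per field of regard
        clear = rng.random((n_for, fovs_per_for)) < 0.10    # ~10% of FOVs flagged clear

        pct_clear_fov = 100.0 * clear.mean()
        pct_clear_for = 100.0 * clear.any(axis=1).mean()    # FORs with >= 1 clear FOV
        print(f"clear FOVs: {pct_clear_fov:.2f}%   FORs with a clear FOV: {pct_clear_for:.2f}%")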

  1. The Cold Land Processes Experiment (CLPX-1): Analysis and Modelling of LSOS Data (IOP3 Period)

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Hardy, Janet; Armstrong, Richard; Brodzik, Mary

    2004-01-01

    Microwave brightness temperatures at 18.7, 36.5, and 89 GHz collected at the Local-Scale Observation Site (LSOS) of the NASA Cold-Land Processes Field Experiment in February 2003 (third Intensive Observation Period) were simulated using a Dense Media Radiative Transfer model (DMRT), based on the Quasi Crystalline Approximation with Coherent Potential (QCA-CP). Inputs to the model were averaged from LSOS snow pit measurements, although different averages were used for the lower frequencies vs. the highest one, due to the different penetration depths and to the stratigraphy of the snowpack. The mean snow particle radius was computed as a best-fit parameter. Results show that the model was able to satisfactorily reproduce the brightness temperatures measured by the University of Tokyo's Ground-Based Microwave Radiometer system (GBMR-7). The values of the best-fit snow particle radii were found to fall within the range of values obtained by averaging the field-measured mean particle sizes for the three classes of Small, Medium and Large grain sizes measured at the LSOS site.

  2. Investigation of photoconductivity of individual InAs/GaAs(001) quantum dots by Scanning Near-field Optical Microscopy

    NASA Astrophysics Data System (ADS)

    Filatov, D. O.; Kazantseva, I. A.; Baidus', N. V.; Gorshkov, A. P.; Mishkin, V. P.

    2017-10-01

    The spatial distribution of the photocurrent in the input window plane of a GaAs-based p-i-n photodiode with embedded self-assembled InAs quantum dots (QDs) has been studied with photoexcitation through a Scanning Near-field Optical Microscope (SNOM) probe at an emission wavelength greater than the intrinsic absorption edge of the host material (GaAs). Inhomogeneities related to the interband absorption in the individual InAs/GaAs(001) QDs have been observed in the photocurrent SNOM images. Thus, the possibility of imaging individual InAs/GaAs(001) QDs in the photocurrent SNOM images with a lateral spatial resolution of ~100 nm (of the same order of magnitude as the SNOM probe aperture size) has been demonstrated.

  3. Suspicious activity recognition in infrared imagery using Hidden Conditional Random Fields for outdoor perimeter surveillance

    NASA Astrophysics Data System (ADS)

    Rogotis, Savvas; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros

    2015-04-01

    The aim of this work is to present a novel approach for automatic recognition of suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. Through the combination of size-, speed- and appearance-based features, such as Center-Symmetric Local Binary Patterns, short-term actions are identified and serve as input, along with user location, for modeling target activities using the theory of Hidden Conditional Random Fields. HCRFs are used to directly link a set of observations to the most appropriate activity label and as such to discriminate high-risk activities (e.g., trespassing) from zero-risk activities (e.g., loitering outside the perimeter). Experimental results demonstrate the effectiveness of our approach in identifying suspicious activities for video surveillance systems.

  4. Observing gamma-ray bursts with the INTEGRAL spectrometer SPI

    NASA Technical Reports Server (NTRS)

    Skinner, G. K.; Connell, P. H.; Naya, J. E.; Seifert, H.; Teegarden, B. J.

    1997-01-01

    The spectrometer for INTEGRAL (SPI) is a germanium spectrometer with a wide field of view and will provide the International Gamma Ray Astrophysics Laboratory (INTEGRAL) mission with the opportunity of studying gamma-ray bursts. Simulations carried out to assess the response of the instrument using real burst data as input are reported. It is shown that, despite the angular resolution of 3 deg, it is possible to locate the direction of bursts with an accuracy of a few arcmin, while offering the high spectral resolution of the germanium detectors. It is remarked that the SPI field of view is similar to the size of the halo of bursts expected around M 31 in galactic models. The detectability of bursts with such a halo is discussed.

  5. Instrument development and field application of the in situ pH Calibrator at the Ocean Observatory

    NASA Astrophysics Data System (ADS)

    Tan, C.; Ding, K.; Seyfried, W. E.

    2012-12-01

    A novel, self-calibrating instrument for in-situ measurement of pH in deep sea environments up to 4000 m has recently been developed. The device utilizes a compact fluid delivery system to perform measurement and two-point calibration of the solid state pH sensor array (Ir|IrOx| Ag|AgCl), which is sealed in a flow cell to enhance response time. The fluid delivery system is composed of a metering pump and valves, which periodically deliver seawater samples into the flow cell to perform measurements. Similarly, pH buffer solutions can be delivered into the flow cell to calibrate the electrodes under operational conditions. Sensor signals are acquired and processed by a high resolution (0.25 mV) datalogger circuit with a size of 114 mm×31 mm×25 mm. Eight input channels are available: two high impedance sensor input channels, two low impedance sensor input channels, two thermocouple input channels and two thermistor input channels. These eight channels provide adequate measurement flexibility to enhance applications in deep sea environments. The two high impedance channels of the datalogger are specially designed with an input impedance of 10^16 Ω for YSZ (yttria-stabilized zirconia) ceramic electrodes, characterized by extremely low input bias current and high resistance. Field tests were performed in 2008 by ROV at depths up to 3200 m. Using the continuous power supply and TCP/IP network capability of the Monterey Accelerated Research System (MARS) ocean observatory, the so-called "pH Calibrator" has the capability of long-term operation of up to six months. In the observatory mode, the electronics are configured with DC-DC power converter modules and an Ethernet-to-serial module to gain access to the science port of the seafloor junction box. The pH Calibrator will be deployed at the ocean observatory in October and the in situ data will be available online. The pH Calibrator presents real-time pH data at high pressures and variable temperatures, while the in situ calibration capability enhances the accuracy of electrochemical measurements of seawater pH, fulfilling the need for long-term objectives for marine studies.
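
    The two-point calibration step amounts to fitting a slope and offset from two buffer readings and then converting a measured sample potential to pH, as in the sketch below. The voltages and buffer pH values are illustrative placeholders, not readings from the instrument.

        # Two-point calibration of a pH electrode pair and conversion of a sample reading.
        def calibrate(ph1, mv1, ph2, mv2):
            slope = (mv2 - mv1) / (ph2 - ph1)          # ideally near -59 mV/pH at 25 C
            offset = mv1 - slope * ph1
            return slope, offset

        def to_ph(mv, slope, offset):
            return (mv - offset) / slope

        slope, offset = calibrate(4.01, 165.0, 7.00, -12.0)   # buffer readings (mV)
        print(f"slope = {slope:.1f} mV/pH, sample pH = {to_ph(40.0, slope, offset):.2f}")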

  6. The effect of particle size on the heat affected zone during laser cladding of Ni-Cr-Si-B alloy on C45 carbon steel

    NASA Astrophysics Data System (ADS)

    Tanigawa, Daichi; Abe, Nobuyuki; Tsukamoto, Masahiro; Hayashi, Yoshihiko; Yamazaki, Hiroyuki; Tatsumi, Yoshihiro; Yoneyama, Mikio

    2018-02-01

    Laser cladding is one of the most useful surface coating methods for improving the wear and corrosion resistance of material surfaces. Although the heat input associated with laser cladding is small, a heat affected zone (HAZ) is still generated within the substrate because this is a thermal process. In order to reduce the area of the HAZ, the heat input must therefore be reduced. In the present study, we examined the effects of the powdered raw material particle size on the heat input and the extent of the HAZ during powder bed laser cladding. Ni-Cr-Si-B alloy layers were produced on C45 carbon steel substrates in conjunction with alloy powders having average particle sizes of 30, 40 and 55 μm, while measuring the HAZ area by optical microscopy. The heat input required for layer formation was found to decrease as smaller particles were used, such that the HAZ area was also reduced.

  7. Electron and donor-impurity-related Raman scattering and Raman gain in triangular quantum dots under an applied electric field

    NASA Astrophysics Data System (ADS)

    Tiutiunnyk, Anton; Akimov, Volodymyr; Tulupenko, Viktor; Mora-Ramos, Miguel E.; Kasapoglu, Esin; Morales, Alvaro L.; Duque, Carlos Alberto

    2016-04-01

    The differential cross-section of electron Raman scattering and the Raman gain are calculated and analysed in the case of prismatic quantum dots with an equilateral-triangle base shape. The study takes into account their dependencies on the size of the triangle, the influence of an externally applied electric field, as well as the presence of an ionized donor center located at the triangle's orthocenter. The calculations are made within the effective mass and parabolic band approximations, with a diagonalization scheme being applied to obtain the eigenfunctions and eigenvalues of the x-y Hamiltonian. The incident and secondary (scattered) radiation have been considered linearly polarized along the y-direction, coinciding with the direction of the applied electric field. For the case with an impurity center, Raman scattering with the intermediate state energy below the initial state one has been found to show a maximum differential cross-section more than an order of magnitude larger than that resulting from the scheme with lower intermediate state energy. The Raman gain has its maximum magnitude around a 35 nm dot size and an electric field of 40 kV/cm for the case without impurity, and at the maximum considered values of the input parameters for the case with impurity. Values of Raman gain of the order of up to 10^4 cm^-1 are predicted in both cases.

  8. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    NASA Astrophysics Data System (ADS)

    Octova, A.; Sule, R.

    2018-04-01

    Travel-time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and the receivers are placed in the others. The first-arrival travel time recorded by each receiver is used as the input data for the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on field parameters, using configurations of 24 receivers and 45 receivers. The second step is applying the inversion process to field data consisting of five pairs of boreholes. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces an explicit shape that resembles the initial reconstruction of the synthetic model with 45 receivers. The tomographic processing of the field data indicates cavities in several places between the boreholes. Cavities are identified on BH2A-BH1, BH4A-BH2A and BH4A-BH5, with elongated and rounded structures. In resolution tests using a checkerboard, anomalies can still be identified down to a size of 2 m × 2 m. Travel-time cross-hole seismic tomography analysis proves this method is very good at describing subsurface structure and layer boundaries; the size and position of anomalies can be recognized and interpreted easily.

  9. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  10. The influence of high heat input and inclusions control for rare earth on welding in low alloy high strength steel

    NASA Astrophysics Data System (ADS)

    Chu, Rensheng; Mu, Shukun; Liu, Jingang; Li, Zhanjun

    2017-09-01

    In the current paper, the influence of high heat input welding and of rare-earth inclusion control on a low alloy high strength steel is analyzed. The structure of the coarse-grained zone was observed for different heat inputs: the coarse grains were finest at a heat input of 200 kJ/cm, while the coarse-grained area was largest at 400 kJ/cm. The -20 °C V-notch impact energy obtained with a heat input of 200 kJ/cm was better than that obtained with 400 kJ/cm. The grain structure was ferrite and bainite for the different holding times; at a heat input of 200 kJ/cm the grain size was 82.9 μm for a 5 s holding time and 97.9 μm for a 10 s holding time. The inclusions in the HSLA steel with rare-earth additions were Al2O3-CaS inclusions lying in the Al2O3-CaS-CaO ternary phase diagram, and no low-melting-point calcium aluminate inclusions were found, in contrast to the inclusions in the HSLA steel without rare earth. Most of the inclusions were between 1 and 10 μm in size. Overall, the grain structure is finer and the welding performance is better when rare earth is added.

  11. Impacts of upstream drought and water withdrawals on the health and survival of downstream estuarine oyster populations

    PubMed Central

    Petes, Laura E; Brown, Alicia J; Knight, Carley R

    2012-01-01

    Increases in the frequency, duration, and severity of regional drought pose major threats to the health and integrity of downstream ecosystems. During 2007–2008, the U.S. southeast experienced one of the most severe droughts on record. Drought and water withdrawals in the upstream watershed led to decreased freshwater input to Apalachicola Bay, Florida, an estuary that is home to a diversity of commercially and ecologically important organisms. This study applied a combination of laboratory experiments and field observations to investigate the effects of reduced freshwater input on Apalachicola oysters. Oysters suffered significant disease-related mortality under high-salinity, drought conditions, particularly during the warm summer months. Mortality was size-specific, with large oysters of commercially harvestable size being more susceptible than small oysters. A potential salinity threshold was revealed between 17 and 25 ppt, where small oysters began to suffer mortality, and large oysters exhibited an increase in mortality. These findings have important implications for watershed management, because upstream freshwater releases could be carefully timed and allocated during stressful periods of the summer to reduce disease-related oyster mortality. Integrated, forward-looking water management is needed, particularly under future scenarios of climate change and human population growth, to sustain the valuable ecosystem services on which humans depend. PMID:22957175

  12. Spatial variability of summer Florida precipitation and its impact on microwave radiometer rainfall-measurement systems

    NASA Technical Reports Server (NTRS)

    Turner, B. J.; Austin, G. L.

    1993-01-01

    Three-dimensional radar data for three summer Florida storms are used as input to a microwave radiative transfer model. The model simulates microwave brightness observations by a 19-GHz, nadir-pointing, satellite-borne microwave radiometer. The statistical distribution of rainfall rates for the storms studied, and therefore the optimal conversion between microwave brightness temperatures and rainfall rates, was found to be highly sensitive to the spatial resolution at which observations were made. The optimum relation between the two quantities was less sensitive to the details of the vertical profile of precipitation. Rainfall retrievals were made for a range of microwave sensor footprint sizes. From these simulations, spatial sampling-error estimates were made for microwave radiometers over a range of field-of-view sizes. The necessity of matching the spatial resolution of ground truth to radiometer footprint size is emphasized. A strategy for the combined use of raingages, ground-based radar, microwave, and visible-infrared (VIS-IR) satellite sensors is discussed.

  13. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.

    1999-01-01

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.
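
    For orientation, the dimensionless parameter for this type of Gaussian-to-flat-top shaper is commonly quoted in the open literature as β = 2√2·π·r₀·y₀/(f·λ), with r₀ the input beam radius, y₀ the target spot half-width, f the transform-lens focal length and λ the wavelength; that explicit form is drawn from related publications rather than from this record, so treat it as an assumption. A minimal sketch:

    ```python
    import math

    # Dimensionless shaping parameter for Gaussian-to-flat-top beam shaping
    # (form taken from the open literature on this approach; treat as an assumption).
    def beta(r0, y0, f, wavelength):
        """r0: input beam radius, y0: target spot half-width,
        f: transform-lens focal length, wavelength: laser wavelength (all in metres)."""
        return 2 * math.sqrt(2) * math.pi * r0 * y0 / (f * wavelength)

    # Hypothetical system: 4 mm input beam radius, 0.5 mm target half-width,
    # 200 mm focal length, 1064 nm laser.
    print(beta(r0=4e-3, y0=0.5e-3, f=200e-3, wavelength=1064e-9))  # larger beta -> better quality
    ```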

  14. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, F.M.; Holswade, S.C.; Romero, L.A.

    1999-01-26

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system. 27 figs.

  15. Data compression of discrete sequence: A tree based approach using dynamic programming

    NASA Technical Reports Server (NTRS)

    Shivaram, Gurusrasad; Seetharaman, Guna; Rao, T. R. N.

    1994-01-01

    A dynamic programming based approach for data compression of a 1D sequence is presented. The compression of an input sequence of size N to a smaller size k is achieved by dividing the input sequence into k subsequences and replacing each subsequence by its average value. The partitioning of the input sequence is carried out with the aim of reducing the mean squared error in the reconstructed sequence. The complexity involved in finding the partitions that yield such an optimal compressed sequence is reduced by using the dynamic programming approach, which is presented.
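
    A minimal sketch of the optimal piecewise-constant compression described above (the O(k·N²) recurrence below follows the stated idea of minimizing squared reconstruction error; variable names and the example sequence are mine, not the paper's):

    ```python
    import math

    def compress_dp(x, k):
        """Partition x (length N) into k contiguous subsequences, each replaced by
        its mean, so that total squared reconstruction error is minimal.
        Returns (segment end indices, total squared error)."""
        n = len(x)
        # Prefix sums for O(1) segment cost: sum and sum of squares.
        s = [0.0] * (n + 1)
        s2 = [0.0] * (n + 1)
        for i, v in enumerate(x):
            s[i + 1] = s[i] + v
            s2[i + 1] = s2[i] + v * v

        def seg_cost(i, j):  # squared error of replacing x[i:j] by its mean
            m = j - i
            seg_sum = s[j] - s[i]
            return (s2[j] - s2[i]) - seg_sum * seg_sum / m

        INF = math.inf
        cost = [[INF] * (n + 1) for _ in range(k + 1)]
        back = [[0] * (n + 1) for _ in range(k + 1)]
        cost[0][0] = 0.0
        for parts in range(1, k + 1):
            for j in range(parts, n + 1):
                for i in range(parts - 1, j):
                    c = cost[parts - 1][i] + seg_cost(i, j)
                    if c < cost[parts][j]:
                        cost[parts][j] = c
                        back[parts][j] = i
        # Recover segment boundaries by backtracking.
        bounds, j = [], n
        for parts in range(k, 0, -1):
            bounds.append(j)
            j = back[parts][j]
        return sorted(bounds), cost[k][n]

    print(compress_dp([1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1], k=3))
    ```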

  16. An expert system based software sizing tool, phase 2

    NASA Technical Reports Server (NTRS)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.

  17. Anode power deposition in applied-field MPD thrusters

    NASA Technical Reports Server (NTRS)

    Myers, Roger M.; Soulas, George C.

    1992-01-01

    Anode power deposition is the principal performance limiter of magnetoplasmadynamic (MPD) thrusters. Current thrusters lose between 50 and 70 percent of the input power to the anode. In this work, anode power deposition was studied for three cylindrical applied magnetic field thrusters for a range of argon propellant flow rates, discharge currents, and applied-field strengths. Between 60 and 95 percent of the anode power deposition resulted from electron current conduction into the anode, with cathode radiation depositing between 5 and 35 percent of the anode power, and convective heat transfer from the hot plasma accounting for less than 5 percent. While the fractional anode power loss decreased with increasing applied-field strength and anode size, the magnitude of the anode power increased. The rise in anode power resulted from a linear rise in the anode fall voltage with applied-field strength and anode radius. The anode fall voltage also rose with decreasing propellant flow rate. The trends indicate that the anode fall region is magnetized, and suggest techniques for reducing the anode power loss in MPD thrusters.

  18. On the frequency response of a Wenglor particle-counting system for aeolian transport measurements

    NASA Astrophysics Data System (ADS)

    Bauer, Bernard O.; Davidson-Arnott, Robin G. D.; Hilton, Michael J.; Fraser, Douglas

    2018-06-01

    A commonly deployed particle-counting system for aeolian saltation flux, consisting of a Wenglor fork sensor and an Onset Hobo Pulse Input Adapter linked to an Onset Hobo Energy Logger Pro data logger, was tested for frequency response. The Wenglor fork sensor is an optical gate device that has very fast switching capacity that can accommodate the time of flight of saltating sand particles through the sensing volume with the exception of very fine sand or silt and very quickly moving particles. The Pulse Input Adapter, in contrast, imposes limitations on the frequency response of the system. The manufacturer of the pulse adapter specifies an upper limit of 120 Hz, although bench tests with an electronic pulse generator indicate that the frequency response of the Pulse Input Adapter, in isolation, is excellent up to 3000 Hz, with only small error (less than 1.6%) due to under-counting during data transfer intervals. A mechanical test of the integrated system (fork sensor, pulse input adapter, and data logger) demonstrates excellent performance up to about 700 Hz (less than 2% error), but significant under-counting above 1000 Hz for unknown reasons. This specific particle-counting system therefore has a frequency response that is well suited for investigation of the dynamics of aeolian saltation as typically encountered in most field conditions on coastal beaches with the exception of extreme wind events and very small particle sizes.

  19. Optimization of input parameters of acoustic-transfection for the intracellular delivery of macromolecules using FRET-based biosensors

    NASA Astrophysics Data System (ADS)

    Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.

    2016-03-01

    An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer with a fluorescence microscope. High-frequency ultrasound with a center frequency above 150 MHz can focus the acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane and induce intracellular delivery of macromolecules. Single-cell imaging was performed to investigate the behavior of a targeted cell after acoustic transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular Ca2+ concentration after acoustic transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied the peak-to-peak voltage and the pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that induce strong perturbations of the cell membrane were identified, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the PI fluorescence intensity increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver the pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.

  20. Chromatic summation and receptive field properties of blue-on and blue-off cells in marmoset lateral geniculate nucleus.

    PubMed

    Eiber, C D; Pietersen, A N J; Zeater, N; Solomon, S G; Martin, P R

    2017-11-22

    The "blue-on" and "blue-off" receptive fields in retina and dorsal lateral geniculate nucleus (LGN) of diurnal primates combine signals from short-wavelength sensitive (S) cone photoreceptors with signals from medium/long wavelength sensitive (ML) photoreceptors. Three questions about this combination remain unresolved. Firstly, is the combination of S and ML signals in these cells linear or non-linear? Secondly, how does the timing of S and ML inputs to these cells influence their responses? Thirdly, is there spatial antagonism within S and ML subunits of the receptive field of these cells? We measured contrast sensitivity and spatial frequency tuning for four types of drifting sine gratings: S cone isolating, ML cone isolating, achromatic (S + ML), and counterphase chromatic (S - ML), in extracellular recordings from LGN of marmoset monkeys. We found that responses to stimuli which modulate both S and ML cones are well predicted by a linear sum of S and ML signals, followed by a saturating contrast-response relation. Differences in sensitivity and timing (i.e. vector combination) between S and ML inputs are needed to explain the amplitude and phase of responses to achromatic (S + ML) and counterphase chromatic (S - ML) stimuli. Best-fit spatial receptive fields for S and/or ML subunits in most cells (>80%) required antagonistic surrounds, usually in the S subunit. The surrounds were however generally weak and had little influence on spatial tuning. The sensitivity and size of S and ML subunits were correlated on a cell-by-cell basis, adding to evidence that blue-on and blue-off receptive fields are specialised to signal chromatic but not spatial contrast.
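
    A small numerical illustration of the vector (amplitude and phase) combination of S and ML inputs described above; the gains and phases below are invented for illustration, not values fitted in the study:

    ```python
    import cmath

    # Hypothetical S- and ML-cone input signals, represented as phasors
    # (amplitude in spikes/s per unit contrast, phase in radians). These numbers
    # are illustrative only, not values reported in the study.
    s_gain, s_phase = 20.0, 0.0
    ml_gain, ml_phase = 35.0, 0.6

    S = cmath.rect(s_gain, s_phase)
    ML = cmath.rect(ml_gain, ml_phase)

    # Linear summation predicts the achromatic and counterphase chromatic responses
    # from the complex sum and difference of the two inputs.
    for label, resp in [("S+ML (achromatic)", S + ML), ("S-ML (chromatic)", S - ML)]:
        print(f"{label}: amplitude {abs(resp):.1f}, phase {cmath.phase(resp):.2f} rad")
    ```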

  1. Effect of heat input on the microstructure, residual stresses and corrosion resistance of 304L austenitic stainless steel weldments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unnikrishnan, Rahul, E-mail: rahulunnikrishnannair@gmail.com; Idury, K.S.N. Satish, E-mail: satishidury@gmail.com; Ismail, T.P., E-mail: tpisma@gmail.com

    Austenitic stainless steels are widely used in high performance pressure vessels, nuclear, chemical, process and medical industry due to their very good corrosion resistance and superior mechanical properties. However, austenitic stainless steels are prone to sensitization when subjected to higher temperatures (673 K to 1173 K) during the manufacturing process (e.g. welding) and/or certain applications (e.g. pressure vessels). During sensitization, chromium in the matrix precipitates out as carbides and intermetallic compounds (sigma, chi and Laves phases) decreasing the corrosion resistance and mechanical properties. In the present investigation, 304L austenitic stainless steel was subjected to different heat inputs by shielded metal arc welding process using a standard 308L electrode. The microstructural developments were characterized by using optical microscopy and electron backscattered diffraction, while the residual stresses were measured by X-ray diffraction using the sin²ψ method. It was observed that even at the highest heat input, shielded metal arc welding process does not result in significant precipitation of carbides or intermetallic phases. The ferrite content and grain size increased with increase in heat input. The grain size variation in the fusion zone/heat affected zone was not effectively captured by optical microscopy. This study shows that electron backscattered diffraction is necessary to bring out changes in the grain size quantitatively in the fusion zone/heat affected zone as it can consider twin boundaries as a part of grain in the calculation of grain size. The residual stresses were compressive in nature for the lowest heat input, while they were tensile at the highest heat input near the weld bead. The significant feature of the welded region and the base metal was the presence of a very strong texture. The texture in the heat affected zone was almost random. - Highlights: • Effect of heat input on microstructure, residual stresses and corrosion is studied. • HAZ and width of dendrite in the welded region increase with heat input. • Residual stresses are tensile near the welded region after the highest heat input. • Welded region has the highest pit density after highest heat input. • Dendrites and δ-ferrite were highly oriented in the welded region.
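
    A minimal sketch of how the sin²ψ method turns diffraction data into a residual stress value; the linear d-versus-sin²ψ relation is the textbook form, and the elastic constants and synthetic measurements are hypothetical, not the study's own numbers:

    ```python
    import numpy as np

    # sin^2(psi) method, textbook form: d_psi ≈ d0 * [1 + (1+nu)/E * sigma * sin^2(psi)],
    # so the stress follows from the slope of d versus sin^2(psi).
    # Elastic constants and "measurements" below are hypothetical, for illustration only.
    E, nu, d0 = 193e9, 0.29, 1.0827e-10   # Young's modulus (Pa), Poisson ratio, strain-free spacing (m)

    psi = np.deg2rad([0, 15, 30, 45])     # tilt angles
    sigma_true = 250e6                    # pretend tensile residual stress (Pa)
    d = d0 * (1 + (1 + nu) / E * sigma_true * np.sin(psi) ** 2)  # synthetic lattice spacings

    slope = np.polyfit(np.sin(psi) ** 2, d, 1)[0]
    sigma_est = slope * E / ((1 + nu) * d0)
    print(f"recovered stress: {sigma_est / 1e6:.0f} MPa")  # ~250 MPa
    ```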

  2. Using Geothermal Play Types as an Analogue for Estimating Potential Resource Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, Rachel; Young, Katherine

    Blind geothermal systems are becoming increasingly common as more geothermal fields are developed. Geothermal development is known to have high risk in the early stages of a project development because reservoir characteristics are relatively unknown until wells are drilled. Play types (or occurrence models) categorize potential geothermal fields into groups based on geologic characteristics. To aid in lowering exploration risk, these groups' reservoir characteristics can be used as analogues in new site exploration. The play type schemes used in this paper were Moeck and Beardsmore play types (Moeck et al. 2014) and Brophy occurrence models (Brophy et al. 2011). Operating geothermal fields throughout the world were classified based on their associated play type, and then reservoir characteristics data were catalogued. The distributions of these characteristics were plotted in histograms to develop probability density functions for each individual characteristic. The probability density functions can be used as input analogues in Monte Carlo estimations of resource potential for similar play types in early exploration phases. A spreadsheet model was created to estimate resource potential in undeveloped fields. The user can choose to input their own values for each reservoir characteristic or choose to use the probability distribution functions provided from the selected play type. This paper also addresses the United States Geological Survey's 1978 and 2008 assessment of geothermal resources by comparing their estimated values to reported values from post-site development. Information from the collected data was used in the comparison for thirty developed sites in the United States. No significant trends or suggestions for methodologies could be made by the comparison.
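
    A minimal sketch of the kind of Monte Carlo volumetric estimate that such analogue distributions feed into; the heat-in-place formula is a common generic choice, and every parameter distribution below is a hypothetical placeholder rather than the spreadsheet model described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical analogue distributions for one play type (illustrative only).
    area_km2 = rng.triangular(2, 5, 15, n)          # reservoir area
    thick_m  = rng.triangular(300, 800, 1500, n)    # reservoir thickness
    T_res    = rng.normal(240, 20, n)               # reservoir temperature, deg C
    T_rej    = 80.0                                 # rejection temperature, deg C
    rho_c    = 2.7e6                                # volumetric heat capacity, J/(m^3 K)
    recovery = rng.uniform(0.05, 0.20, n)           # recovery factor
    conv_eff = 0.12                                 # thermal-to-electric conversion
    life_s   = 30 * 365.25 * 24 * 3600              # 30-year project life

    heat_J = area_km2 * 1e6 * thick_m * rho_c * (T_res - T_rej)
    mwe = heat_J * recovery * conv_eff / life_s / 1e6   # electrical potential, MW

    # Exceedance convention: P90 is the 10th percentile of the distribution.
    print("P90 / P50 / P10 (MWe):", np.percentile(mwe, [10, 50, 90]).round(1))
    ```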

  3. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
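
    A minimal sketch of the Morris one-at-a-time (MOAT) elementary-effects screening referred to above; the number of levels, trajectories, and the toy model are generic defaults for illustration, not the CAM study's settings:

    ```python
    import numpy as np

    def morris_sample(n_params, n_trajectories, n_levels=4, rng=None):
        """Generate Morris one-at-a-time trajectories on the unit hypercube.
        Each trajectory has n_params + 1 points, so cost grows linearly with n_params."""
        rng = rng or np.random.default_rng(0)
        delta = n_levels / (2.0 * (n_levels - 1))
        trajs = []
        for _ in range(n_trajectories):
            grid = np.arange(0, 1.0 - delta + 1e-12, 1.0 / (n_levels - 1))
            x = rng.choice(grid, size=n_params)       # random base point
            order = rng.permutation(n_params)         # random order of perturbation
            pts = [x.copy()]
            for j in order:
                x = x.copy()
                x[j] += delta if x[j] + delta <= 1.0 else -delta
                pts.append(x)
            trajs.append((order, np.array(pts)))
        return delta, trajs

    def morris_effects(f, n_params, n_trajectories=20):
        """Return mu* (mean absolute elementary effect) for each input parameter."""
        delta, trajs = morris_sample(n_params, n_trajectories)
        effects = [[] for _ in range(n_params)]
        for order, pts in trajs:
            y = np.array([f(p) for p in pts])
            for step, j in enumerate(order):
                effects[j].append(abs(y[step + 1] - y[step]) / delta)
        return np.array([np.mean(e) for e in effects])

    # Toy model: strongly nonlinear in x0*x1, weakly dependent on x2.
    f = lambda x: x[0] * x[1] * 10.0 + np.sin(x[2])
    print(morris_effects(f, n_params=3).round(2))
    ```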

  4. Density-dependent microbial turnover improves soil carbon model predictions of long-term litter manipulations

    NASA Astrophysics Data System (ADS)

    Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret

    2017-04-01

    Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
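
    A minimal sketch of the density-dependent microbial turnover idea in a two-pool toy model; the functional forms and parameter values are illustrative assumptions, not the study's calibrated formulation:

    ```python
    # Toy SOC-microbe model in which microbial turnover scales as B**beta with
    # beta > 1 (density dependence), which damps oscillations, limits population
    # size, and restores sensitivity of steady-state SOC to input changes.
    def step(C, B, dt, I=0.5, V=2.0, K=50.0, eps=0.4, k_t=0.02, beta=1.5):
        uptake = V * B * C / (K + C)        # Michaelis-Menten decomposition
        turnover = k_t * B ** beta          # density-dependent microbial turnover
        dC = I - uptake + turnover          # inputs plus necromass return
        dB = eps * uptake - turnover        # growth minus turnover
        return C + dt * dC, B + dt * dB

    def run(I, years=300):
        C, B, dt = 100.0, 2.0, 1.0 / 365.0
        for _ in range(int(years / dt)):
            C, B = step(C, B, dt, I=I)
        return C, B

    # Doubling the carbon input changes the steady-state SOC pool (illustrative).
    for I in (0.5, 1.0):
        C, B = run(I)
        print(f"input {I}: SOC ~ {C:.1f}, biomass ~ {B:.1f}")
    ```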

  5. CHARACTERISTIC LENGTH SCALE OF INPUT DATA IN DISTRIBUTED MODELS: IMPLICATIONS FOR MODELING GRID SIZE. (R824784)

    EPA Science Inventory

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...

  6. A novel method for patient exit and entrance dose prediction based on water equivalent path length measured with an amorphous silicon electronic portal imaging device.

    PubMed

    Kavuma, Awusi; Glegg, Martin; Metwaly, Mohamed; Currie, Garry; Elliott, Alex

    2010-01-21

    In vivo dosimetry is one of the quality assurance tools used in radiotherapy to monitor the dose delivered to the patient. Electronic portal imaging device (EPID) images for a set of solid water phantoms of varying thicknesses were acquired and the data fitted onto a quadratic equation, which relates the reduction in photon beam intensity to the attenuation coefficient and material thickness at a reference condition. The quadratic model is used to convert the measured grey scale value into water equivalent path length (EPL) at each pixel for any material imaged by the detector. For any other non-reference conditions, scatter, field size and MU variation effects on the image were corrected by relative measurements using an ionization chamber and an EPID. The 2D EPL is linked to the percentage exit dose table, for different thicknesses and field sizes, thereby converting the plane pixel values at each point into a 2D dose map. The off-axis ratio is corrected using envelope and boundary profiles generated from the treatment planning system (TPS). The method requires field size, monitor unit and source-to-surface distance (SSD) as clinical input parameters to predict the exit dose, which is then used to determine the entrance dose. The measured pixel dose maps were compared with calculated doses from TPS for both entrance and exit depth of phantom. The gamma index at 3% dose difference (DD) and 3 mm distance to agreement (DTA) resulted in an average of 97% passing for the square fields of 5, 10, 15 and 20 cm. The exit dose EPID dose distributions predicted by the algorithm were in better agreement with TPS-calculated doses than phantom entrance dose distributions.
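
    A minimal sketch of the per-pixel thickness-recovery step, i.e. inverting a quadratic fit of attenuation versus water-equivalent thickness; the exact functional form and the coefficients are placeholders, since the published method defines its own calibration:

    ```python
    import numpy as np

    # Assumed calibration: ln(I0 / I) = a * t + b * t**2, fitted from EPID images of
    # solid-water slabs of known thickness. Coefficients are placeholders only.
    a, b = 0.045, -2.0e-4   # per cm and per cm^2

    def pixel_to_epl(grey, grey_open):
        """Convert EPID grey values to water-equivalent path length (cm) per pixel."""
        y = np.log(grey_open / grey)                 # measured attenuation
        disc = np.sqrt(a * a + 4.0 * b * y)          # solve b*t^2 + a*t - y = 0
        return (-a + disc) / (2.0 * b)               # physical (positive) root

    grey_open = 1000.0                                # open-field (no phantom) value
    grey = np.array([[640.0, 505.0], [505.0, 401.0]]) # toy 2x2 image through phantom
    print(pixel_to_epl(grey, grey_open))              # thickness map in cm
    ```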

  7. A novel method for patient exit and entrance dose prediction based on water equivalent path length measured with an amorphous silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Kavuma, Awusi; Glegg, Martin; Metwaly, Mohamed; Currie, Garry; Elliott, Alex

    2010-01-01

    In vivo dosimetry is one of the quality assurance tools used in radiotherapy to monitor the dose delivered to the patient. Electronic portal imaging device (EPID) images for a set of solid water phantoms of varying thicknesses were acquired and the data fitted onto a quadratic equation, which relates the reduction in photon beam intensity to the attenuation coefficient and material thickness at a reference condition. The quadratic model is used to convert the measured grey scale value into water equivalent path length (EPL) at each pixel for any material imaged by the detector. For any other non-reference conditions, scatter, field size and MU variation effects on the image were corrected by relative measurements using an ionization chamber and an EPID. The 2D EPL is linked to the percentage exit dose table, for different thicknesses and field sizes, thereby converting the plane pixel values at each point into a 2D dose map. The off-axis ratio is corrected using envelope and boundary profiles generated from the treatment planning system (TPS). The method requires field size, monitor unit and source-to-surface distance (SSD) as clinical input parameters to predict the exit dose, which is then used to determine the entrance dose. The measured pixel dose maps were compared with calculated doses from TPS for both entrance and exit depth of phantom. The gamma index at 3% dose difference (DD) and 3 mm distance to agreement (DTA) resulted in an average of 97% passing for the square fields of 5, 10, 15 and 20 cm. The exit dose EPID dose distributions predicted by the algorithm were in better agreement with TPS-calculated doses than phantom entrance dose distributions.

  8. Silicon Carbide (SiC) Device and Module Reliability, Performance of a Loop Heat Pipe Subjected to a Phase-Coupled Heat Input to an Acceleration Field

    DTIC Science & Technology

    2016-05-01

    AFRL-RQ-WP-TR-2016-0108. Silicon Carbide (SiC) Device and Module Reliability: Performance of a Loop Heat Pipe Subjected to a Phase-Coupled Heat Input to an Acceleration Field. Kirk L. Yerkes (AFRL/RQQI) and James D. Scofield (AFRL/RQQE), Flight Systems Integration Branch (AFRL/RQQI).

  9. Characterizing the Meso-scale Plasma Flows in Earth's Coupled Magnetosphere-Ionosphere-Thermosphere System

    NASA Astrophysics Data System (ADS)

    Gabrielse, C.; Nishimura, T.; Lyons, L. R.; Gallardo-Lacourt, B.; Deng, Y.; McWilliams, K. A.; Ruohoniemi, J. M.

    2017-12-01

    NASA's Heliophysics Decadal Survey put forth several imperative, Key Science Goals. The second goal communicates the urgent need to "Determine the dynamics and coupling of Earth's magnetosphere, ionosphere, and atmosphere and their response to solar and terrestrial inputs...over a range of spatial and temporal scales." Sun-Earth connections (called Space Weather) have strong societal impacts because extreme events can disturb radio communications and satellite operations. The field's current modeling capabilities of such Space Weather phenomena include large-scale, global responses of the Earth's upper atmosphere to various inputs from the Sun, but the meso-scale ( 50-500 km) structures that are much more dynamic and powerful in the coupled system remain uncharacterized. Their influences are thus far poorly understood. We aim to quantify such structures, particularly auroral flows and streamers, in order to create an empirical model of their size, location, speed, and orientation based on activity level (AL index), season, solar cycle (F10.7), interplanetary magnetic field (IMF) inputs, etc. We present a statistical study of meso-scale flow channels in the nightside auroral oval and polar cap using SuperDARN. These results are used to inform global models such as the Global Ionosphere Thermosphere Model (GITM) in order to evaluate the role of meso-scale disturbances on the fully coupled magnetosphere-ionosphere-thermosphere system. Measuring the ionospheric footpoint of magnetospheric fast flows, our analysis technique from the ground also provides a 2D picture of flows and their characteristics during different activity levels that spacecraft alone cannot.

  10. Effect of Pin Length on Hook Size and Joint Properties in Friction Stir Lap Welding of 7B04 Aluminum Alloy

    NASA Astrophysics Data System (ADS)

    Wang, Min; Zhang, Huijie; Zhang, Jingbao; Zhang, Xiao; Yang, Lei

    2014-05-01

    Friction stir lap welding of 7B04 aluminum alloy was conducted in the present paper, and the effect of pin length on hook size and joint properties was investigated in detail. It is found that for each given set of process parameters, the size of hook defect on the advancing side shows an "M" type evolution trend as the pin length is increased. The affecting characteristics of pin length on joint properties are dependent on the heat input levels. When the heat input is low, the fracture strength is firstly increased to a peak value and then shows a decrease. When the heat input is relatively high, the evolution trend of fracture strength tends to exhibit a "W" type with increasing the pin length.

  11. Fold-change detection and scalar symmetry of sensory input fields.

    PubMed

    Shoval, Oren; Goentoro, Lea; Hart, Yuval; Mayo, Avi; Sontag, Eduardo; Alon, Uri

    2010-09-07

    Recent studies suggest that certain cellular sensory systems display fold-change detection (FCD): a response whose entire shape, including amplitude and duration, depends only on fold changes in input and not on absolute levels. Thus, a step change in input from, for example, level 1 to 2 gives precisely the same dynamical output as a step from level 2 to 4, because the steps have the same fold change. We ask what the benefit of FCD is and show that FCD is necessary and sufficient for sensory search to be independent of multiplying the input field by a scalar. Thus, the FCD search pattern depends only on the spatial profile of the input and not on its amplitude. Such scalar symmetry occurs in a wide range of sensory inputs, such as source strength multiplying diffusing/convecting chemical fields sensed in chemotaxis, ambient light multiplying the contrast field in vision, and protein concentrations multiplying the output in cellular signaling systems. Furthermore, we show that FCD entails two features found across sensory systems, exact adaptation and Weber's law, but that these two features are not sufficient for FCD. Finally, we present a wide class of mechanisms that have FCD, including certain nonlinear feedback and feed-forward loops. We find that bacterial chemotaxis displays feedback within the present class and hence, is expected to show FCD. This can explain experiments in which chemotaxis searches are insensitive to attractant source levels. This study, thus, suggests a connection between properties of biological sensory systems and scalar symmetry stemming from physical properties of their input fields.
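
    A minimal sketch of the fold-change detection property using one circuit from the class the paper discusses, an incoherent feedforward loop in which an internal variable normalizes the input; the equations and parameters are a generic textbook form, not taken verbatim from the paper:

    ```python
    import numpy as np

    # Incoherent feedforward loop with input normalization:
    #   dx/dt = a*u - b*x        (x tracks the input level)
    #   dy/dt = c*u/x - d*y      (output y responds to u relative to x)
    # Scaling u -> lam*u scales x -> lam*x, so u/x and hence y are unchanged (FCD).
    def simulate(u_of_t, t_end=20.0, dt=1e-3, a=1.0, b=1.0, c=1.0, d=1.0):
        n = int(t_end / dt)
        u0 = u_of_t(0.0)
        x = a * u0 / b                      # start at steady state for the initial input
        y = c * u0 / (d * x)
        ys = np.empty(n)
        for i in range(n):
            u = u_of_t(i * dt)
            x += dt * (a * u - b * x)
            y += dt * (c * u / x - d * y)
            ys[i] = y
        return ys

    step_1_to_2 = lambda t: 1.0 if t < 5.0 else 2.0
    step_2_to_4 = lambda t: 2.0 if t < 5.0 else 4.0
    diff = np.max(np.abs(simulate(step_1_to_2) - simulate(step_2_to_4)))
    print(f"max difference between responses to the two steps: {diff:.2e}")  # ~0
    ```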

  12. Piezoelectric particle accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemp, Mark A.; Jongewaard, Erik N.; Haase, Andrew A.

    2017-08-29

    A particle accelerator is provided that includes a piezoelectric accelerator element, where the piezoelectric accelerator element includes a hollow cylindrical shape, and an input transducer, where the input transducer is disposed to provide an input signal to the piezoelectric accelerator element, where the input signal induces a mechanical excitation of the piezoelectric accelerator element, where the mechanical excitation is capable of generating a piezoelectric electric field proximal to an axis of the cylindrical shape, where the piezoelectric accelerator is configured to accelerate a charged particle longitudinally along the axis of the cylindrical shape according to the piezoelectric electric field.

  13. Entorhinal stellate cells show preferred spike phase-locking to theta inputs that is enhanced by correlations in synaptic activity

    PubMed Central

    Fernandez, Fernando R.; Malerba, Paola; Bressloff, Paul C.; White, John A.

    2013-01-01

    In active networks, excitatory and inhibitory synaptic inputs generate membrane voltage fluctuations that drive spike activity in a probabilistic manner. Despite this, some cells in vivo show a strong propensity to precisely lock to the local field potential and maintain a specific spike-phase relationship relative to other cells. In recordings from rat medial entorhinal cortical stellate cells, we measured spike phase-locking in response to sinusoidal “test” inputs in the presence of different forms of background membrane voltage fluctuations, generated via dynamic clamp. We find that stellate cells show strong and robust spike phase-locking to theta (4–12 Hz) inputs. This response occurs under a wide variety of background membrane voltage fluctuation conditions that include a substantial increase in overall membrane conductance. Furthermore, the IH current present in stellate cells is critical to the enhanced spike phase-locking response at theta. Finally, we show that correlations between inhibitory and excitatory conductance fluctuations, which can arise through feed-back and feed-forward inhibition, can substantially enhance the spike phase-locking response. The enhancement in locking is a result of a selective reduction in the size of low frequency membrane voltage fluctuations due to cancelation of inhibitory and excitatory current fluctuations with correlations. Hence, our results demonstrate that stellate cells have a strong preference for spike phase-locking to theta band inputs and that the absolute magnitude of locking to theta can be modulated by the properties of background membrane voltage fluctuations. PMID:23554484

  14. Recognizing suspicious activities in infrared imagery using appearance-based features and the theory of hidden conditional random fields for outdoor perimeter surveillance

    NASA Astrophysics Data System (ADS)

    Rogotis, Savvas; Palaskas, Christos; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros

    2015-11-01

    This work aims to present an extended framework for automatically recognizing suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. By combining size-, speed-, and appearance-based features, like the local phase quantization and the histograms of oriented gradients, actions of short duration are recognized and used as input, along with spatial information, for modeling target activities using the theory of hidden conditional random fields (HCRFs). HCRFs are used to classify an observation sequence into the most appropriate activity label class, thus discriminating high-risk activities, like trespassing, from zero-risk activities, such as loitering outside the perimeter. The effectiveness of this approach is demonstrated with experimental results in various scenarios that represent suspicious activities in perimeter surveillance systems.

  15. Dosimetry of a set-up for the exposure of newborn mice to 2.45-GHZ WiFi frequencies.

    PubMed

    Pinto, R; Lopresto, V; Galloni, P; Marino, C; Mancini, S; Lodato, R; Pioli, C; Lovisolo, G A

    2010-08-01

    This work describes the dosimetry of a two waveguide cell system designed to expose newborn mice to electromagnetic fields associated with wireless fidelity signals in the frequency band of 2.45 GHz. The dosimetric characterisation of the exposure system was performed both numerically and experimentally. Specific measures were adopted with regard to the increase in both weight and size of the biological target during the exposure period. The specific absorption rate (SAR, W kg⁻¹) for 1 W of input power vs. weight curve was assessed. The curve evidenced an SAR pattern varying from <1 W kg⁻¹ to >6 W kg⁻¹ during the first 5 weeks of the life of mice, with a peak resonance phenomenon at a weight around 5 g. This curve was used to set the appropriate level of input power during experimental sessions to expose the growing mice to a defined and constant dose.
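
    A minimal sketch of how a SAR-per-watt calibration curve like the one described would be used to set the input power for a target dose; the curve values below are invented placeholders, not the published dosimetry:

    ```python
    import numpy as np

    # Hypothetical SAR-per-input-watt calibration versus mouse weight (placeholder
    # values loosely shaped like the described curve, with a resonance near 5 g).
    weights_g = np.array([2.0, 4.0, 5.0, 7.0, 10.0, 15.0, 20.0])
    sar_per_watt = np.array([2.0, 5.0, 6.5, 4.0, 2.5, 1.5, 0.8])   # (W/kg) per W input

    def input_power_for(target_sar, weight_g):
        """Interpolate the calibration curve and return the required input power (W)."""
        s = np.interp(weight_g, weights_g, sar_per_watt)
        return target_sar / s

    for w in (3.0, 5.0, 12.0):
        print(f"{w:4.1f} g mouse -> {input_power_for(target_sar=4.0, weight_g=w):.2f} W input")
    ```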

  16. Performance and Simulation of a Stand-alone Parabolic Trough Solar Thermal Power Plant

    NASA Astrophysics Data System (ADS)

    Mohammad, S. T.; Al-Kayiem, H. H.; Assadi, M. K.; Gilani, S. I. U. H.; Khlief, A. K.

    2018-05-01

    In this paper, a Simulink® Thermolib model has been established for simulation-based performance evaluation of a stand-alone parabolic trough solar thermal power plant at Universiti Teknologi PETRONAS, Malaysia. The paper proposes a design for a 1.2 kW parabolic trough power plant. The model is capable of predicting the temperature at any system outlet in the plant, as well as the power output produced. The conditions taken into account as inputs to the model are the local solar radiation and ambient temperatures, which were measured throughout the year; other input parameters are the collector sizes and the location in terms of latitude and altitude. Lastly, the results are presented graphically to describe the variations of the various solar-field outputs and to help predict the performance of the plant. The developed model allows an initial evaluation of the viability and technical feasibility of any similar solar thermal power plant.

  17. Design and implementation of an audio indicator

    NASA Astrophysics Data System (ADS)

    Zheng, Shiyong; Li, Zhao; Li, Biqing

    2017-04-01

    This paper proposes an audio level indicator built around a C9014 amplifier stage, LEDs as the level display, and a CD4017 decade counter/distributor. The circuit can control neon and holiday lights in response to an audio signal. The input audio signal is power-amplified by the C9014 stage; an adjustment potentiometer sets the amplified signal voltage fed to the CD4017, which counts and drives the LEDs to display the running state of the circuit. This simple audio indicator uses only one IC (U1) and produces a two-color LED chasing effect that follows the audio signal, so the LED display conveys the general behaviour of the signal, namely the variation of its frequency and the corresponding level. The lights can operate in four display modes, including jumping and gradual changes, and the circuit can be used in homes, hotels, discos, theatres, advertising and other fields, with a wide range of uses in modern life.

  18. Modularity in the Organization of Mouse Primary Visual Cortex

    PubMed Central

    Ji, Weiqing; Gămănuţ, Răzvan; Bista, Pawan; D’Souza, Rinaldo D.; Wang, Quanxin; Burkhalter, Andreas

    2015-01-01

    Layer 1 (L1) of primary visual cortex (V1) is the target of projections from many brain regions outside of V1. We found that inputs to the non-columnar mouse V1 from the dorsal lateral geniculate nucleus and feedback projections from multiple higher cortical areas to L1 are patchy. The patches are matched to a pattern of M2 muscarinic acetylcholine receptor expression at fixed locations of mouse, rat and monkey V1. Neurons in L2/3 aligned with M2-rich patches have high spatial acuity whereas cells in M2-poor zones exhibited high temporal acuity. Together M2+ and M2− zones form constant-size domains that are repeated across V1. Domains map subregions of the receptive field, such that multiple copies are contained within the point image. The results suggest that the modular network in mouse V1 selects spatiotemporally distinct clusters of neurons within the point image for top-down control and differential routing of inputs to cortical streams. PMID:26247867

  19. Productive Vocabulary among Three Groups of Bilingual American Children: Comparison and Prediction

    PubMed Central

    Cote, Linda R.; Bornstein, Marc H.

    2015-01-01

    The importance of input factors for bilingual children’s vocabulary development was investigated. Forty-seven Argentine, 42 South Korean, 51 European American, 29 Latino immigrant, 26 Japanese immigrant, and 35 Korean immigrant mothers completed checklists of their 20-month-old children’s productive vocabularies. Bilingual children’s vocabulary sizes in each language separately were consistently smaller than their monolingual peers but only Latino bilingual children had smaller total vocabularies than monolingual children. Bilingual children’s vocabulary sizes were similar to each other. Maternal acculturation predicted the amount of input in each language, which then predicted children’s vocabulary size in each language. Maternal acculturation also predicted children’s English-language vocabulary size directly. PMID:25620820

  20. FPGA implementation of self organizing map with digital phase locked loops.

    PubMed

    Hikawa, Hiroomi

    2005-01-01

    The self-organizing map (SOM) has found applicability in a wide range of application areas. Recently, new SOM hardware using phase-modulated pulse signals and digital phase-locked loops (DPLLs) has been proposed (Hikawa, 2005). The system uses the DPLL as a computing element, since the operation of the DPLL is very similar to that of the SOM's computation. The system also uses the phase of a square waveform to hold the value of each input vector element. This paper discusses the hardware implementation of the DPLL SOM architecture. For effective hardware implementation, some components are redesigned to reduce the circuit size. The proposed SOM architecture is described in VHDL and implemented on a field programmable gate array (FPGA). Its feasibility is verified by experiments. Results show that the proposed SOM implemented on the FPGA has good quantization capability, and its circuit size is very small.

  1. Electrical Counting and Sizing of Mammalian Cells in Suspension

    PubMed Central

    Gregg, E. C.; Steidley, K. David

    1965-01-01

    A recently developed method of determining the number and size of particles suspended in a conducting solution is to pump the suspension through a small orifice having an immersed electrode on each side to supply electrical current. The current changes due to the passage of particles of resistivity different from that of the solution. Theoretical expressions are developed which relate the current change caused by such particles to their volume and shape. It is found that most biological cells may be treated as dielectric particles whose capacitive effects are negligible. Electrolytic tank measurements on models confirm the theoretical development, and electric field plots of model orifices are used to predict the observed pulse shapes. An equivalent circuit of the orifice-electrode system is analyzed and shows that the current pulse may be made conductivity-independent when observed with a zero input impedance amplifier. PMID:5861698

  2. Manipulation of particles by weak forces

    NASA Technical Reports Server (NTRS)

    Adler, M. S.; Savkar, S. D.; Summerhayes, H. R.

    1972-01-01

    Quantitative relations between various force fields and their effects on the motion of particles of various sizes and physical characteristics were studied. The forces considered were those derived from light, heat, microwaves, electric interactions, magnetic interactions, particulate interactions, and sound. A physical understanding is given of the forces considered as well as formulae which express how the size of the force depends on the physical and electrical properties of the particle. The drift velocity in a viscous fluid is evaluated as a function of initial acceleration and the effects of thermal random motion are considered. A means of selectively sorting or moving particles by choosing a force system and/or environment such that the particle of interest reacts uniquely was developed. The forces considered and a demonstration of how the initial acceleration, drift velocity, and ultimate particle density distribution is affected by particle, input, and environmental parameters are tabulated.
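
    As one concrete instance of the force-to-drift-velocity relations discussed, the standard Stokes-drag result for a small sphere in a viscous fluid (a textbook formula, not one quoted from the report):

    ```python
    import math

    # Terminal drift velocity of a small sphere under a constant force in a viscous
    # fluid, from Stokes drag: F = 6 * pi * eta * r * v  =>  v = F / (6 * pi * eta * r).
    def drift_velocity(force_N, radius_m, viscosity_Pa_s):
        return force_N / (6.0 * math.pi * viscosity_Pa_s * radius_m)

    # Hypothetical example: a 1 um-radius particle pulled by a 1 pN force in water.
    v = drift_velocity(force_N=1e-12, radius_m=1e-6, viscosity_Pa_s=1.0e-3)
    print(f"drift velocity ~ {v * 1e6:.1f} um/s")
    ```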

  3. Results of a comprehensive atmospheric aerosol-radiation experiment in the southwestern United States. I - Size distribution, extinction optical depth and vertical profiles of aerosols suspended in the atmosphere. II - Radiation flux measurements and

    NASA Technical Reports Server (NTRS)

    Deluisi, J. J.; Furukawa, F. M.; Gillette, D. A.; Schuster, B. G.; Charlson, R. J.; Porch, W. M.; Fegley, R. W.; Herman, B. M.; Rabinoff, R. A.; Twitty, J. T.

    1976-01-01

    Results are reported for a field test that was aimed at acquiring a sufficient set of measurements of aerosol properties required as input for radiative-transfer calculations relevant to the earth's radiation balance. These measurements include aerosol extinction and size distributions, vertical profiles of aerosols, and radiation fluxes. Physically consistent, vertically inhomogeneous models of the aerosol characteristics of a turbid atmosphere over a desert and an agricultural region are constructed by using direct and indirect sampling techniques. These results are applied for a theoretical interpretation of airborne radiation-flux measurements. The absorption term of the complex refractive index of aerosols is estimated, a regional variation in the refractive index is noted, and the magnitude of solar-radiation absorption by aerosols and atmospheric molecules is determined.

  4. On the role of dimensionality and sample size for unstructured and structured covariance matrix estimation

    NASA Technical Reports Server (NTRS)

    Morgera, S. D.; Cooper, D. B.

    1976-01-01

    The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates as obtained by a recursive stochastic algorithm of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
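
    A minimal sketch contrasting the two estimator forms mentioned above: the unstructured (generalized) sample covariance versus a Toeplitz-constrained estimate obtained by averaging along diagonals. The diagonal-averaging construction is a common simple choice and not necessarily the report's recursive algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 16, 24                      # dimension vs. (small) sample size

    # Weakly stationary process: the true covariance is Toeplitz (AR(1)-like).
    true_cov = np.array([[0.9 ** abs(i - j) for j in range(p)] for i in range(p)])
    x = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

    # Unstructured (generalized) sample covariance (data are zero-mean by construction).
    S = x.T @ x / n

    # Structured (Toeplitz) estimate: average the sample covariance along diagonals.
    T = np.zeros_like(S)
    for lag in range(p):
        m = np.mean(np.diagonal(S, offset=lag))
        T += m * (np.eye(p, k=lag) + (np.eye(p, k=-lag) if lag else 0))

    for name, est in [("unstructured", S), ("Toeplitz", T)]:
        err = np.linalg.norm(est - true_cov) / np.linalg.norm(true_cov)
        print(f"{name:12s} relative error: {err:.3f}")
    ```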

  5. FDTD-ANT User Manual

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin L.

    1995-01-01

    This manual explains the theory and operation of the finite-difference time domain code FDTD-ANT developed by Analex Corporation at the NASA Lewis Research Center in Cleveland, Ohio. This code can be used for solving electromagnetic problems that are electrically small or medium (on the order of 1 to 50 cubic wavelengths). Calculated parameters include transmission line impedance, relative effective permittivity, antenna input impedance, and far-field patterns in both the time and frequency domains. The maximum problem size may be adjusted according to the computer used. This code has been run on the DEC VAX and 486 PC's and on workstations such as the Sun Sparc and the IBM RS/6000.

  6. Crew behavior and performance in space analog environments

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.

    1992-01-01

    The objectives and the current status of the Crew Factors research program conducted at NASA-Ames Research Center are reviewed. The principal objectives of the program are to determine the effects of a broad class of input variables on crew performance and to provide guidance with respect to the design and management of crews assigned to future space missions. A wide range of research environments are utilized, including controlled experimental settings, high fidelity full mission simulator facilities, and fully operational field environments. Key group processes are identified, and preliminary data are presented on the effect of crew size, type, and structure on team performance.

  7. Evapotranspiration from nonuniform surfaces - A first approach for short-term numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Wetzel, Peter J.; Chang, Jy-Tai

    1988-01-01

    Observations of surface heterogeneity of soil moisture from scales of meters to hundreds of kilometers are discussed, and a relationship between grid element size and soil moisture variability is presented. An evapotranspiration model is presented which accounts for the variability of soil moisture, standing surface water, and vegetation internal and stomatal resistance to moisture flow from the soil. The mean values and standard deviations of these parameters are required as input to the model. Tests of this model against field observations are reported, and extensive sensitivity tests are presented which explore the importance of including subgrid-scale variability in an evapotranspiration model.

  8. Techno-economic assessment of a hybrid solar receiver and combustor

    NASA Astrophysics Data System (ADS)

    Lim, Jin Han; Nathan, Graham; Dally, Bassam; Chinnici, Alfonso

    2016-05-01

    A techno-economic analysis is performed to compare two different configurations of hybrid solar thermal systems with fossil fuel backup to provide continuous electricity output. The assessment compares a Hybrid Solar Receiver Combustor (HSRC), in which the functions of a solar cavity receiver and a combustor are integrated into a single device, with a reference conventional solar thermal system using a regular solar cavity receiver and a backup boiler, termed the Solar Gas Hybrid (SGH). The benefits of the integration are assessed by varying the size of the storage capacity and heliostat field while maintaining the same overall thermal input to the power block.

  9. Monte Carlo simulation of TrueBeam flattening-filter-free beams using varian phase-space files: comparison with experimental data.

    PubMed

    Belosi, Maria F; Rodriguez, Miguel; Fogliata, Antonella; Cozzi, Luca; Sempau, Josep; Clivio, Alessandro; Nicolini, Giorgia; Vanetti, Eugenio; Krauss, Harald; Khamphan, Catherine; Fenoglietto, Pascal; Puxeu, Josep; Fedele, David; Mancosu, Pietro; Brualla, Lorenzo

    2014-05-01

    Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening-filter-free (FFF) beams against experimental measurements from ten TrueBeam linacs. The phase-space files have been used as input in PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF were computed in a virtual water phantom for field sizes 3 × 3, 6 × 6, and 10 × 10 cm² using 1 × 1 × 1 mm³ voxels and for 20 × 20 and 40 × 40 cm² with 2 × 2 × 2 mm³ voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a subsequent phase-space file was tallied. Particles were transported downstream from this second phase-space file to the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose-difference and distance-to-agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Analysis of depth dose curves showed that dose differences increased with increasing field size and depth; this effect may be partly attributable to an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm², while the discrepancy increased toward 2% in the 40 × 40 cm² cases. Gamma analysis resulted in an agreement of 100% when a 2%, 2 mm criterion was used, with the sole exception of the 40 × 40 cm² field (∼95% agreement). With the more stringent criterion of 1%, 1 mm, the agreement fell to roughly 95% for field sizes up to 10 × 10 cm², and was worse for larger fields. The FFF-specific unflatness and slope parameters are consistent with the possible energy underestimation of the simulated results relative to the experimental data. The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm², which is the range of field sizes most commonly used with FFF, high-dose-rate beams.
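
    The gamma analysis mentioned above combines a dose-difference criterion with a distance-to-agreement criterion. The following brute-force one-dimensional sketch shows how a gamma pass rate of the kind reported (e.g., 2%, 2 mm) can be computed; it is a generic illustration with made-up profiles, not the analysis software used in the study.

    ```python
    import numpy as np

    def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
        """Brute-force 1D gamma index with a global dose-difference criterion.

        dd  : dose-difference criterion as a fraction of the maximum reference dose
        dta : distance-to-agreement criterion in the same units as the positions (mm)
        A reference point passes when its minimum gamma over all evaluated points is <= 1.
        """
        norm = dd * ref_dose.max()
        gammas = np.empty_like(ref_dose)
        for i, (x, d) in enumerate(zip(ref_pos, ref_dose)):
            dist2 = ((eval_pos - x) / dta) ** 2
            dose2 = ((eval_dose - d) / norm) ** 2
            gammas[i] = np.sqrt(np.min(dist2 + dose2))
        return gammas

    # Example: pass rate for a 2%, 2 mm criterion on synthetic profiles
    x = np.linspace(-50, 50, 201)                 # positions in mm
    measured  = np.exp(-(x / 30.0) ** 4)          # stand-in for a measured profile
    simulated = np.exp(-((x - 0.5) / 30.0) ** 4)  # stand-in for a Monte Carlo profile
    g = gamma_1d(x, measured, x, simulated, dd=0.02, dta=2.0)
    print(f"pass rate: {100 * np.mean(g <= 1):.1f}%")
    ```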

  10. Aerodynamic analysis of three advanced configurations using the TranAir full-potential code

    NASA Technical Reports Server (NTRS)

    Madson, M. D.; Carmichael, R. L.; Mendoza, J. P.

    1989-01-01

    Computational results are presented for three advanced configurations: the F-16A with wing tip missiles and under wing fuel tanks, the Oblique Wing Research Aircraft, and an Advanced Turboprop research model. These results were generated by the latest version of the TranAir full potential code, which solves for transonic flow over complex configurations. TranAir embeds a surface paneled geometry definition in a uniform rectangular flow field grid, thus avoiding the use of surface conforming grids, and decoupling the grid generation process from the definition of the configuration. The new version of the code locally refines the uniform grid near the surface of the geometry, based on local panel size and/or user input. This method distributes the flow field grid points much more efficiently than the previous version of the code, which solved for a grid that was uniform everywhere in the flow field. TranAir results are presented for the three configurations and are compared with wind tunnel data.

  11. Magnetostrictive Micro Mirrors for an Optical Switch Matrix

    PubMed Central

    Lee, Heung-Shik; Cho, Chongdu; Cho, Myeong-Woo

    2007-01-01

    We have developed a wireless-controlled compact optical switch using silicon micromachining techniques with DC magnetron sputtering. For the optical switching operation, the micro mirror is designed as a cantilever measuring 5 mm × 800 μm × 50 μm. A TbDyFe film is sputter-deposited on the upper side of the mirror under the following conditions: Ar gas pressure below 1.2×10⁻⁹ torr, DC input power of 180 W, and heating temperature of up to 250°C, enabling wireless control of each component. The mirrors are actuated by externally applied magnetic fields, and the beam path can be changed according to the direction and magnitude of the applied field. Reflectivity changes, M-H curves, and X-ray diffraction patterns of the sputtered mirrors are measured to determine the magneto-optical and magneto-elastic properties as a function of sputtered film thickness. The deflection angle versus magnetic field characteristics of the fabricated mirror are also measured. PMID:28903221

  12. The Tin Bider Impact Structure, Algeria: New Map with Field Inputs on Structural Aspect

    NASA Astrophysics Data System (ADS)

    Kassab, F.; Belhai, D.

    2017-07-01

    The Tin Bider impact structure is a complex impact structure formed in sedimentary target rocks. We produced a geological map incorporating new inputs on impact features from a recent field investigation, during which we identified shatter cones and folds.

  13. Climate Change Feedbacks from Interactions Between New and Old Carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dukes, Jeffrey S.; Phillips, Richard P.

    Priming effects, or responses of SOM decomposition rates to inputs of new, labile carbon (C), have the potential to dramatically alter projections of ecosystem C storage. Priming effects occur in most ecosystems, are significant in magnitude, and are highly sensitive to global changes. Nevertheless, our mechanistic understanding of priming effects remains poor, and this has prevented the inclusion of these dynamics into current Earth system models (ESMs). We conducted two manipulative experiments in the field to quantify how priming effects influence SOM dynamics. Specifically, we asked: To what extent do inputs of “new” root-derived carbon (C) influence “older” C in SOM, and are the magnitude and direction of these effects sensitive to climate? We addressed these questions within the Boston-Area Climate Experiment - an old-field ecosystem that has been subjected to three precipitation treatments (ambient, -50%, and +50% of each precipitation event during the growing season) and four warming treatments (from ambient to +4°C) since 2008. In the first experiment, we installed root and fungal ingrowth cores into the plots. Each core was filled with SOM that had an isotopic signature (of its C compounds) that differed from the vegetation in the plots such that inputs of “new” C from roots/fungi could be quantified using the change in isotopic signatures of C in the cores. Further, we used cores with different mesh sizes to isolate root vs. mycorrhizal fungal inputs. We found that belowground C fluxes were dominated by root inputs (as opposed to mycorrhizal inputs), and that root-derived inputs were greatest in the plots subjected to experimental warming. Given that the warming-induced increase in belowground C flux did not result in a net increase in soil C, we conclude that the warming treatment likely enhanced priming effects in these soils. In the second experiment, we experimentally dripped dissolved organic C compounds into soils in the BACE plots to simulate root-derived C fluxes. Specifically, we constructed artificial roots attached to an automated peristaltic pump to deliver the compounds to soil semi-continuously during the peak of the growing season. We found that changes in exudate quality had small but significant effects on microbial activities, often interacting with N availability and temperature-induced changes. These results further underscore the importance of priming effects, especially under warming conditions. Collectively, our results provide some of the first field-based estimates of how soil moisture and temperature can directly and indirectly alter root-induced changes in SOM dynamics. This exploratory project lays the groundwork for further research on priming that incorporates effects of plant species and microbial communities to global changes. Such information should enable the development of more mechanistic and better predictive models of SOM decomposition under increased greenhouse gas levels, with the ultimate goal of reducing the level of uncertainty in projections of future climate.
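
    The ingrowth-core approach above infers the fraction of "new" root-derived C from the shift in the isotopic signature of the core. The standard two-endmember mixing calculation behind such estimates is sketched below; the δ13C values are hypothetical placeholders, not measurements from the experiment.

    ```python
    # Two-endmember isotopic mixing: estimate the fraction of "new" root-derived C
    # in an ingrowth core from the shift in its delta-13C signature. Values are
    # hypothetical; this is the standard mixing calculation, not the authors' analysis.
    delta_core_initial = -26.0   # per mil, SOM originally placed in the core
    delta_new_inputs   = -12.0   # per mil, C fixed by the plot vegetation
    delta_core_final   = -23.5   # per mil, core material at harvest

    f_new = (delta_core_final - delta_core_initial) / (delta_new_inputs - delta_core_initial)
    print(f"fraction of core C derived from new inputs: {f_new:.2f}")
    ```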

  14. Trends in Solidification Grain Size and Morphology for Additive Manufacturing of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Gockel, Joy; Sheridan, Luke; Narra, Sneha P.; Klingbeil, Nathan W.; Beuth, Jack

    2017-12-01

    Metal additive manufacturing (AM) is used for both prototyping and production of final parts. Therefore, there is a need to predict and control the microstructural size and morphology. Process mapping is an approach that represents AM process outcomes in terms of input variables. In this work, analytical, numerical, and experimental approaches are combined to provide a holistic view of trends in the solidification grain structure of Ti-6Al-4V across a wide range of AM process input variables. The thermal gradient is shown to vary significantly through the depth of the melt pool, which precludes development of fully equiaxed microstructure throughout the depth of the deposit within any practical range of AM process variables. A strategy for grain size control is demonstrated based on the relationship between melt pool size and grain size across multiple deposit geometries, and additional factors affecting grain size are discussed.

  15. Effect of plasma arc welding variables on fusion zone grain size and hardness of AISI 321 austenitic stainless steel

    NASA Astrophysics Data System (ADS)

    Kondapalli, S. P.

    2017-12-01

    In the present work, pulsed-current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate, and pulse width are chosen as the input variables, whereas grain size and hardness are considered the output responses. The response surface method is adopted using a Box-Behnken design, and 27 experiments are performed in total. An empirical relation between the input variables and the output responses is developed using statistical software, and analysis of variance (ANOVA) at the 95% confidence level is used to check its adequacy. The main and interaction effects of the input variables on the output responses are also studied.
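
    As an illustration of the experimental design and model just described, the sketch below constructs the 27-run Box-Behnken design for four coded factors and fits a full quadratic response surface by least squares. The simulated response values and coefficients are placeholders, not the welding data from the study.

    ```python
    from itertools import combinations
    import numpy as np

    # Build the 27-run Box-Behnken design for 4 factors (24 edge runs + 3 centre
    # points) and fit a full quadratic response surface by least squares.
    k = 4
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1.0, 1.0):
            for b in (-1.0, 1.0):
                row = [0.0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0.0] * k] * 3                 # centre points
    X = np.array(runs)                      # 27 x 4 coded design matrix

    # Simulated response (e.g. grain size) purely for illustration
    rng = np.random.default_rng(1)
    y = 40 + 3*X[:, 0] - 2*X[:, 1] + 1.5*X[:, 0]*X[:, 1] - X[:, 2]**2 + rng.normal(0, 0.5, len(X))

    # Full quadratic model: intercept, linear, squared, and two-factor interaction terms
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i]**2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - np.sum((y - A @ coef)**2) / np.sum((y - y.mean())**2)
    print(f"fitted coefficients: {np.round(coef, 2)}\nR^2 = {r2:.3f}")
    ```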

  16. The effect of environmental uncertainty on morphological design and fluid balance in Sarracenia purpurea L.

    PubMed

    Kingsolver, Joel

    1981-03-01

    To explore principles of organismic design in fluctuating environments, morphological design of the leaf of the pitcher-plant, Sarracenia purpurea, was studied for a population in northern Michigan. The design criterion focused upon the leaf shape and minimum size which effectively avoids leaf desiccation (complete loss of fluid from the leaf cavity) in the face of fluctuating rainfall and meteorological conditions. Bowl- and pitcher-shaped leaves were considered. Simulations show that the pitcher geometry experiences less frequent desiccation than bowls of the same size. Desiccation frequency is inversely related to leaf size; the size distribution of pitcher leaves in the field shows that the majority of pitchers desiccate only 1-3 times per season on average, while smaller pitchers may average up to 8 times per season. A linear filter model of an organism in a fluctuating environment is presented, in which the organism selectively filters the temporal patterns of environmental input. General measures of rainfall predictability based upon information theory and spectral analysis are consistent with the model of a pitcher leaf as a low-pass (frequency) filter which avoids desiccation by eliminating high-frequency rainfall variability.
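
    The linear-filter view of the leaf described above can be illustrated with a simple bucket model: the leaf fluid volume integrates rainfall, loses fluid slowly, and is clipped at a capacity that stands in for leaf size, so larger leaves smooth out high-frequency rainfall variability and desiccate less often. All parameter values below are arbitrary and only illustrate the qualitative behaviour, not the paper's simulations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    days = 120
    # Intermittent rainfall series (mm/day): rain on ~30% of days, exponential amounts
    rain = rng.exponential(2.0, days) * (rng.random(days) < 0.3)

    def leaf_fluid(rain, capacity, loss_rate=0.8):
        """First-order filtering of the rainfall series: gain from rain, slow loss,
        clipped between empty (desiccated) and the leaf's capacity."""
        v = np.zeros(len(rain))
        for t in range(1, len(rain)):
            v[t] = min(capacity, max(0.0, v[t - 1] + rain[t] - loss_rate))
        return v

    small = leaf_fluid(rain, capacity=5.0)
    large = leaf_fluid(rain, capacity=25.0)
    print("days desiccated, small leaf:", int(np.sum(small == 0)))
    print("days desiccated, large leaf:", int(np.sum(large == 0)))
    ```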

  17. Laboratory Simulations of Haze Formation in the Atmospheres of Super-Earths and Mini-Neptunes: Particle Color and Size Distribution

    NASA Astrophysics Data System (ADS)

    He, Chao; Hörst, Sarah M.; Lewis, Nikole K.; Yu, Xinting; Moses, Julianne I.; Kempton, Eliza M.-R.; McGuiggan, Patricia; Morley, Caroline V.; Valenti, Jeff A.; Vuitton, Véronique

    2018-03-01

    Super-Earths and mini-Neptunes are the most abundant types of planets among the ∼3500 confirmed exoplanets, and are expected to exhibit a wide variety of atmospheric compositions. Recent transmission spectra of super-Earths and mini-Neptunes have demonstrated the possibility that exoplanets have haze/cloud layers at high altitudes in their atmospheres. However, the compositions, size distributions, and optical properties of these particles in exoplanet atmospheres are poorly understood. Here, we present the results of experimental laboratory investigations of photochemical haze formation within a range of planetary atmospheric conditions, as well as observations of the color and size of produced haze particles. We find that atmospheric temperature and metallicity strongly affect particle color and size, thus altering the particles’ optical properties (e.g., absorptivity, scattering, etc.); on a larger scale, this affects the atmospheric and surface temperature of the exoplanets, and their potential habitability. Our results provide constraints on haze formation and particle properties that can serve as critical inputs for exoplanet atmosphere modeling, and guide future observations of super-Earths and mini-Neptunes with the Transiting Exoplanet Survey Satellite, the James Webb Space Telescope, and the Wide-Field Infrared Survey Telescope.

  18. Development of the Code RITRACKS

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2013-01-01

    A document discusses the code RITRACKS (Relativistic Ion Tracks), which was developed to simulate heavy ion track structure at the microscopic and nanoscopic scales. It is a Monte-Carlo code that simulates the production of radiolytic species in water, event-by-event, and which may be used to simulate tracks and also to calculate dose in targets and voxels of different sizes. The dose deposited by the radiation can be calculated in nanovolumes (voxels). RITRACKS allows simulation of radiation tracks without the need of extensive knowledge of computer programming or Monte-Carlo simulations. It is installed as a regular application on Windows systems. The main input parameters entered by the user are the type and energy of the ion, the length and size of the irradiated volume, the number of ions impacting the volume, and the number of histories. The simulation can be started after the input parameters are entered in the GUI. The number of each kind of interactions for each track is shown in the result details window. The tracks can be visualized in 3D after the simulation is complete. It is also possible to see the time evolution of the tracks and zoom on specific parts of the tracks. The software RITRACKS can be very useful for radiation scientists to investigate various problems in the fields of radiation physics, radiation chemistry, and radiation biology. For example, it can be used to simulate electron ejection experiments (radiation physics).

  19. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    PubMed

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (FP = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual input extended Tofts model, ve was significantly less than that in the dual input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
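
    For readers unfamiliar with the model form, the dual-input extended Tofts model combines an arterial and a portal-venous input function, weighted by the hepatic perfusion index (HPI), and convolves the result with an exponential impulse response. The sketch below is a generic implementation of that model form with synthetic input functions and placeholder parameter values; it is not the authors' fitting code.

    ```python
    import numpy as np

    def dual_input_extended_tofts(t, c_art, c_pv, ktrans, kep, vp, hpi):
        """Tissue concentration from a dual-input extended Tofts model.

        The vascular input is an HPI-weighted sum of the arterial and portal-venous
        concentrations; the extravascular term is the usual convolution of that input
        with Ktrans * exp(-kep * t). A schematic sketch of the model form only.
        """
        c_in = hpi * c_art + (1.0 - hpi) * c_pv
        dt = t[1] - t[0]
        irf = np.exp(-kep * t)                              # impulse response
        ce = ktrans * np.convolve(c_in, irf)[:len(t)] * dt  # extravascular term
        return vp * c_in + ce

    # Synthetic input functions purely for illustration (gamma-variate-like shapes)
    t = np.linspace(0, 300, 301)                 # seconds
    c_art = (t / 30) * np.exp(1 - t / 30)        # arterial input function
    c_pv = (t / 60) * np.exp(1 - t / 60)         # portal venous input function
    ct = dual_input_extended_tofts(t, c_art, c_pv,
                                   ktrans=0.3 / 60, kep=0.5 / 60, vp=0.05, hpi=0.65)
    ```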

  20. 13-fold resolution gain through turbid layer via translated unknown speckle illumination

    PubMed Central

    Guo, Kaikai; Zhang, Zibang; Jiang, Shaowei; Liao, Jun; Zhong, Jingang; Eldar, Yonina C.; Zheng, Guoan

    2017-01-01

    Fluorescence imaging through a turbid layer holds great promise for various biophotonics applications. Conventional wavefront shaping techniques aim to create and scan a focus spot through the turbid layer. Finding the correct input wavefront without direct access to the target plane remains a critical challenge. In this paper, we explore a new strategy for imaging through a turbid layer with a large field of view. In our setup, a fluorescence sample is sandwiched between two turbid layers. Instead of generating one focus spot via wavefront shaping, we use an unshaped beam to illuminate the turbid layer and generate an unknown speckle pattern at the target plane over a wide field of view. By tilting the input wavefront, we raster scan the unknown speckle pattern via the memory effect and capture the corresponding low-resolution fluorescence images through the turbid layer. Unlike wavefront-shaping-based single-spot scanning, the proposed approach employs many spots (i.e., speckles) in parallel for extending the field of view. Based on all captured images, we jointly recover the fluorescence object, the unknown optical transfer function of the turbid layer, the translated step size, and the unknown speckle pattern. Without direct access to the object plane or knowledge of the turbid layer, we demonstrate a 13-fold resolution gain through the turbid layer using the reported strategy. We also demonstrate the use of this technique to improve the resolution of a low numerical aperture objective lens, allowing both a large field of view and high resolution to be obtained at the same time. The reported method provides insight for developing new fluorescence imaging platforms and may find applications in deep-tissue imaging. PMID:29359102

  1. Input Manipulation, Enhancement and Processing: Theoretical Views and Empirical Research

    ERIC Educational Resources Information Center

    Benati, Alessandro

    2016-01-01

    Researchers in the field of instructed second language acquisition have been examining the issue of how learners interact with input by conducting research measuring particular kinds of instructional interventions (input-oriented and meaning-based). These interventions include such things as input flood, textual enhancement and processing…

  2. A framework for detecting communities of unbalanced sizes in networks

    NASA Astrophysics Data System (ADS)

    Žalik, Krista Rizman; Žalik, Borut

    2018-01-01

    Community detection in large networks has been a focus of recent research in many fields, including biology, physics, the social sciences, and computer science. Most community detection methods partition the entire network into communities (groups of nodes with many connections within communities and few connections between them) and do not identify the different roles that nodes can have within communities. We propose a community detection model that integrates several different measures and can quickly identify communities of different sizes and densities. We use node degree centrality, strong similarity to one node in a community, maximal similarity of a node to a community, compactness of communities, and separation between communities. Each measure has its own strengths and weaknesses; combining different measures benefits from the strengths of each while avoiding the problems of using any individual measure. We present a fast local expansion algorithm that uncovers communities of different sizes and densities and reveals rich information about the input networks. Experimental results show that the proposed algorithm is as effective as or better than other community detection algorithms on both real-world and synthetic networks, while requiring less time.
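
    Local expansion methods of the kind mentioned above grow a community outward from a seed node, adding neighbours while a quality score improves. The simplified greedy sketch below uses an internal-edge ratio as the score and networkx's karate club graph as a toy input; it illustrates the general idea only and is not the algorithm proposed in the paper.

    ```python
    import networkx as nx

    def local_expansion(G, seed):
        """Greedy local expansion from a seed node: repeatedly add the neighbouring
        node that most improves the community's internal/total edge ratio, and stop
        when no addition improves it."""
        def score(nodes):
            internal = G.subgraph(nodes).number_of_edges()
            boundary = sum(1 for u in nodes for v in G[u] if v not in nodes)
            return internal / (internal + boundary) if internal + boundary else 0.0

        community = {seed}
        while True:
            frontier = {v for u in community for v in G[u]} - community
            best, best_score = None, score(community)
            for v in frontier:
                s = score(community | {v})
                if s > best_score:
                    best, best_score = v, s
            if best is None:
                return community
            community.add(best)

    G = nx.karate_club_graph()
    seed = max(G.degree, key=lambda kv: kv[1])[0]   # start from the highest-degree node
    print(sorted(local_expansion(G, seed)))
    ```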

  3. Field effect transistors improve buffer amplifier

    NASA Technical Reports Server (NTRS)

    1967-01-01

    Unity gain buffer amplifier with a Field Effect Transistor /FET/ differential input stage responds much faster than bipolar transistors when operated at low current levels. The circuit uses a dual FET in a unity gain buffer amplifier having extremely high input impedance, low bias current requirements, and wide bandwidth.

  4. Effects of agricultural practices on organic matter degradation in ditches

    NASA Astrophysics Data System (ADS)

    Hunting, Ellard R.; Vonk, J. Arie; Musters, C. J. M.; Kraak, Michiel H. S.; Vijver, Martina G.

    2016-02-01

    Agricultural practices can result in differences in organic matter (OM) and agricultural chemical inputs in adjacent ditches, but their indirect effects on OM composition and the inherent consequences for ecosystem functioning remain uncertain. This study determined the effect of agricultural practices (dairy farm grasslands and hyacinth bulb fields) on OM degradation by microorganisms and invertebrates with a consumption and food preference experiment in the field and in the laboratory, using natural OM collected from the field. Freshly cut grass and hyacinths were also offered to control for OM composition, and large and small mesh sizes were used to distinguish microbial decomposition from invertebrate consumption. Results show that OM decomposition by microorganisms and consumption by invertebrates were similar throughout the study area, but that OM collected from ditches adjacent to grasslands, as well as freshly cut grass and hyacinths, was preferred over OM collected from ditches adjacent to a hyacinth bulb field. In the case of OM collected from ditches adjacent to hyacinth bulb fields, both microbial decomposition and invertebrate consumption were strongly retarded, likely resulting from sorption and accumulation of pesticides. This outcome illustrates that differences in agricultural practices can, in addition to direct detrimental effects on aquatic organisms, indirectly alter the functioning of adjacent aquatic ecosystems.

  5. Improvement of Galilean refractive beam shaping system for accurately generating near-diffraction-limited flattop beam with arbitrary beam size.

    PubMed

    Ma, Haotong; Liu, Zejin; Jiang, Pengzhi; Xu, Xiaojun; Du, Shaojun

    2011-07-04

    We propose and demonstrate an improvement of the conventional Galilean refractive beam shaping system for accurately generating a near-diffraction-limited flattop beam with arbitrary beam size. Based on a detailed study of the refractive beam shaping system, we found that the conventional Galilean beam shaper works well only for magnifying beam shaping. Taking the transformation of an input beam with a Gaussian irradiance distribution into a target beam with a high-order Fermi-Dirac flattop profile as an example, the shaper works well only when the sizes of the input and target beams satisfy R0 ≥ 1.3 w0. For the improvement, the shaper is regarded as a combination of magnifying and demagnifying beam shaping systems. The surface and phase distributions of the improved Galilean beam shaping system are derived based on geometric and Fourier optics. Using the improved Galilean beam shaper, the accurate transformation of an input beam with a Gaussian irradiance distribution into a target beam with a flattop irradiance distribution is realized. The irradiance distribution of the output beam coincides with that of the target beam, and the corresponding phase distribution is maintained. The propagation performance of the output beam is greatly improved. Studies of the influence of beam size and beam order on the improved Galilean beam shaping system show that the restriction on beam size is greatly relaxed. The improved system can also be used to redistribute an input beam with a complicated irradiance distribution into an output beam with a complicated irradiance distribution.
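
    The target profile referred to above is commonly written as a Fermi-Dirac flattop, I(r) = I0 / (1 + exp[β(r/R0 − 1)]); that functional form and the parameter names below follow the usual convention in the beam-shaping literature and are assumptions here rather than values taken from the paper. The sketch compares a Gaussian input with a power-matched Fermi-Dirac target and echoes the R0 ≥ 1.3 w0 condition quoted in the abstract.

    ```python
    import numpy as np

    w0, R0, beta = 2.0, 3.0, 30.0      # mm; Gaussian waist, flattop radius, edge steepness
    r = np.linspace(0, 6, 601)

    gaussian = np.exp(-2 * (r / w0) ** 2)                       # input irradiance (normalized)
    fermi_dirac = 1.0 / (1.0 + np.exp(beta * (r / R0 - 1.0)))   # target flattop irradiance

    # Normalize the target so both beams carry the same power, P = 2*pi * int I(r) r dr
    power_in = np.trapz(gaussian * r, r)
    power_out = np.trapz(fermi_dirac * r, r)
    fermi_dirac *= power_in / power_out
    print(f"R0/w0 = {R0 / w0:.2f} (the conventional shaper is reported to require R0 >= 1.3 w0)")
    ```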

  6. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks

    PubMed Central

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-01-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608

  7. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    PubMed

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
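
    The plasticity rule described in the two records above can be written down compactly: only synapses with active presynaptic inputs change, no change occurs when the neuron's local field lies above the highest or below the lowest threshold, and between those limits the field's position relative to an intermediate threshold sets the sign of the update. The sketch below is a schematic single-neuron illustration of that rule with arbitrary threshold and learning-rate values, not the authors' network simulation.

    ```python
    import numpy as np

    def three_threshold_update(w, x, h, theta_low, theta_mid, theta_high, lr=0.01):
        """One plasticity step of the three-threshold rule for a single neuron.

        w : synaptic weight vector, x : binary presynaptic activity (0/1),
        h : the neuron's local field (total synaptic input). Only synapses with
        active inputs are modified; no plasticity occurs above the highest or below
        the lowest threshold; in between, the field relative to the intermediate
        threshold selects potentiation (+) or depression (-).
        """
        if h >= theta_high or h <= theta_low:
            return w                               # field outside the plastic range
        sign = 1.0 if h > theta_mid else -1.0
        return w + lr * sign * x

    # Toy usage: one binary neuron receiving a random binary input pattern
    rng = np.random.default_rng(0)
    n = 100
    w = rng.normal(0, 0.1, n)
    x = (rng.random(n) < 0.5).astype(float)
    h = float(w @ x)
    w = three_threshold_update(w, x, h, theta_low=-2.0, theta_mid=0.0, theta_high=2.0)
    ```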

  8. Dispersion and Input Control Capability in European Large Size Reverberant Acoustic Chambers

    NASA Astrophysics Data System (ADS)

    Yarza, A.; Lopez, J.; Ozores, E.

    2012-07-01

    The acoustic test in a reverberant chamber is one of the load cases to be proved during the environmental test campaign that demonstrates the capability of a space unit to survive the launch phase. For large structures the crucial requirement is often survival of the acoustic vibration test, which in many circumstances can be the design-driving load case. In addition, the commercial market demands lighter structures in order to reduce costs. For efficient optimisation of the product it is very important to have powerful structural analysis tools, both to understand the structural needs and to refine existing methods for predicting the structural loads experienced during acoustic testing. Along the same lines, it is important for the parties involved in the test to understand the characteristics of the reverberant chamber itself and the behaviour of the fluid. To this end, EADS CASA Espacio (ECE) has used measured fluid parameters extracted from tests of deployable reflectors validated over the past five years, with the final objective of improving and optimising its capability to face the acoustic test. In this paper, experimental data extracted from acoustic tests performed on space units are presented, using information from two European large-size acoustic chambers. The pressure field inside the acoustic chamber has been post-processed in order to study the behaviour of the fluid during the test. The diffuseness of the pressure field and the ability to control the acoustic profile are parameters to be considered in the design of the structures. The homogeneity of the microphone measurements is used to describe the dispersion of the pressure inside the reverberant chamber across the frequency domain. On top of that, the capability of the facilities to control the input profile is analysed from a statistical point of view. The final conclusions allow the minimum tolerances to be defined based on the limits imposed by the chamber.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Jesse D.; Chang, Grace; Magalen, Jason

    A modified version of an industry-standard wave modeling tool was evaluated, optimized, and utilized to investigate model sensitivity to input parameters and wave energy converter (WEC) array deployment scenarios. Wave propagation was investigated downstream of the WECs to evaluate overall near- and far-field effects of WEC arrays. The sensitivity study illustrated that wave direction and WEC device type were most sensitive to the variation in the model parameters examined in this study. Generally, changes in wave height were the primary alteration caused by the presence of a WEC array. Specifically, WEC device type, and subsequently device size, directly resulted in wave height variations; however, it is important to utilize ongoing laboratory studies and future field tests to determine the most appropriate power matrix values for a particular WEC device and configuration in order to improve modeling results.

  10. UV missile-plume signature model

    NASA Astrophysics Data System (ADS)

    Roblin, Antoine; Baudoux, Pierre E.; Chervet, Patrick

    2002-08-01

    A new 3D radiative code is used to solve the radiative transfer equation in the UV spectral domain for a nonequilibrium, axisymmetric medium such as a rocket plume composed of hot reactive gases and metallic oxide particles like alumina. Calculations take into account the dominant chemiluminescence radiation mechanism and multiple scattering effects produced by alumina particles. Plume radiative properties are studied using a simple cylindrical medium of finite length, deduced from aerothermochemical data for the afterburning zones of real rocket plumes. Assuming a log-normal size distribution of alumina particles, optical properties are calculated using Mie theory. Due to large uncertainties in the particle properties, systematic tests have been performed to evaluate the influence of the different input data (refractive index, particle mean geometric radius) on the radiance field. These computations will help define the set of parameters that need to be known accurately in order to compare computations with radiance measurements obtained during field experiments.
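
    As a small illustration of the particle-size input discussed above, the sketch below samples alumina particle radii from a log-normal distribution and converts them to the Mie size parameter x = 2πr/λ at a UV wavelength. The geometric mean radius, geometric standard deviation, and wavelength are placeholder values, not those used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    r_g, sigma_g = 0.5e-6, 1.6          # geometric mean radius (m) and geometric std dev (placeholders)
    radii = rng.lognormal(np.log(r_g), np.log(sigma_g), 10_000)

    wavelength = 0.30e-6                # 300 nm, mid-UV (placeholder)
    x = 2 * np.pi * radii / wavelength  # Mie size parameter per particle
    print(f"median size parameter: {np.median(x):.1f}")
    ```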

  11. Ultrasonic energy input influence οn the production of sub-micron o/w emulsions containing whey protein and common stabilizers.

    PubMed

    Kaltsa, O; Michon, C; Yanniotis, S; Mandala, I

    2013-05-01

    Ultrasonication may be a cost-effective emulsion formation technique, but its impact on the final emulsion structure and droplet size needs to be further investigated. Olive oil emulsions (20 wt%) were formulated (pH ∼7) using whey protein (3 wt%), three kinds of hydrocolloids (0.1-0.5 wt%), and two different emulsification energy inputs (single- and two-stage, methods A and B, respectively). The effects of formulation and energy input on emulsion performance are discussed. Emulsion stability was evaluated over a 10-day storage period at 5°C by recording the turbidity profiles of the emulsions. Optical micrographs, droplet size, and viscosity values were also obtained. A differential scanning calorimetry (DSC) multiple cool-heat cycling method (40 to -40°C) was performed to examine stability via crystallization phenomena of the dispersed phase. Doubling the ultrasonication energy input from 11 kJ to 25 kJ (method B) resulted in the production of stable emulsions (reduction of back-scattering values, dBS ∼1% after 10 days of storage) at a 0.5 wt% concentration of any of the stabilizers used. At lower gum amounts, samples became unstable due to depletion flocculation phenomena, regardless of the emulsification energy input used. The higher energy input during ultrasonic emulsification also resulted in sub-micron oil-droplet emulsions (D50 = 0.615 μm compared to D50 = 1.3 μm using method A) with a narrower particle size distribution, and in reduced viscosity. DSC experiments revealed no bulk oil formation, suggesting stability for 0.5 wt% XG emulsions prepared by either method. Reduced enthalpy values were found when method B was applied, suggesting structural modifications produced by extensive ultrasonication. Changing the ultrasonication conditions results in significant changes in the oil droplet size and stability of the produced emulsions. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. EPICS Input Output Controller (IOC) Record Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, J.B.; Kraimer, M.R.

    1994-12-01

    This manual describes all supported EPICS record types. The first chapter gives an introduction and describes the field summary table. The second chapter describes the fields in database common, i.e. the fields that are present in every record type. The third chapter describes the input and output fields that are common to many record types and have the same usage wherever they are used. Following the third chapter is a separate chapter for each record type containing a description of all the fields for that record type except those in database common.

  13. Inhibition to excitation ratio regulates visual system responses and behavior in vivo.

    PubMed

    Shen, Wanhua; McKeown, Caroline R; Demas, James A; Cline, Hollis T

    2011-11-01

    The balance of inhibitory to excitatory (I/E) synaptic inputs is thought to control information processing and behavioral output of the central nervous system. We sought to test the effects of the decreased or increased I/E ratio on visual circuit function and visually guided behavior in Xenopus tadpoles. We selectively decreased inhibitory synaptic transmission in optic tectal neurons by knocking down the γ2 subunit of the GABA(A) receptors (GABA(A)R) using antisense morpholino oligonucleotides or by expressing a peptide corresponding to an intracellular loop of the γ2 subunit, called ICL, which interferes with anchoring GABA(A)R at synapses. Recordings of miniature inhibitory postsynaptic currents (mIPSCs) and miniature excitatory PSCs (mEPSCs) showed that these treatments decreased the frequency of mIPSCs compared with control tectal neurons without affecting mEPSC frequency, resulting in an ∼50% decrease in the ratio of I/E synaptic input. ICL expression and γ2-subunit knockdown also decreased the ratio of optic nerve-evoked synaptic I/E responses. We recorded visually evoked responses from optic tectal neurons, in which the synaptic I/E ratio was decreased. Decreasing the synaptic I/E ratio in tectal neurons increased the variance of first spike latency in response to full-field visual stimulation, increased recurrent activity in the tectal circuit, enlarged spatial receptive fields, and lengthened the temporal integration window. We used the benzodiazepine, diazepam (DZ), to increase inhibitory synaptic activity. DZ increased optic nerve-evoked inhibitory transmission but did not affect evoked excitatory currents, resulting in an increase in the I/E ratio of ∼30%. Increasing the I/E ratio with DZ decreased the variance of first spike latency, decreased spatial receptive field size, and lengthened temporal receptive fields. Sequential recordings of spikes and excitatory and inhibitory synaptic inputs to the same visual stimuli demonstrated that decreasing or increasing the I/E ratio disrupted input/output relations. We assessed the effect of an altered I/E ratio on a visually guided behavior that requires the optic tectum. Increasing and decreasing I/E in tectal neurons blocked the tectally mediated visual avoidance behavior. Because ICL expression, γ2-subunit knockdown, and DZ did not directly affect excitatory synaptic transmission, we interpret the results of our study as evidence that partially decreasing or increasing the ratio of I/E disrupts several measures of visual system information processing and visually guided behavior in an intact vertebrate.

  14. Historical trace metal accumulation in the sediments of an urbanized region of the Lake Champlain watershed, Burlington, Vermont

    USGS Publications Warehouse

    Mecray, E.L.; King, J.W.; Appleby, P.G.; Hunt, A.S.

    2001-01-01

    This study documents the history of pollution inputs in the Burlington region of Lake Champlain, Vermont using measurements of anthropogenic metals (Cu, Zn, Cr, Pb, Cd, and Ag) in four age-dated sediment cores. Sediments record a history of contamination in a region and can be used to assess the changing threat to biota over time and to evaluate the effectiveness of discharge regulations on anthropogenic inputs. Grain size, magnetic susceptibility, radiometric dating and pollen stratigraphy were combined with trace metal data to provide an assessment of the history of contamination over the last 350 yr in the Burlington region of Lake Champlain. Magnetic susceptibility was initially used to identify land-use history for each site because it is a proxy indicator of soil erosion. Historical trends in metal inputs in the Burlington region from the seventeenth through the twentieth centuries are reflected in downcore variations in metal concentrations and accumulation rates. Metal concentrations increase above background values in the early to mid nineteenth century. The metal input rate to the sediments increases around 1920 and maximum concentrations and accumulation rates are observed in the late 1960s. Decreases in concentration and accumulation rate between 1970 and the present are observed for most metals. The observed trends are primarily a function of variations in anthropogenic inputs and not variations in sediment grain size. Grain size data were used to remove texture variations from the metal profiles and results show trends in the anthropogenic metal signals remain. Radiometric dating and pollen stratigraphy provide well-constrained dates for the sediments thereby allowing the metal profiles to be interpreted in terms of land-use history.

  15. Developing low-input, high-biomass, perennial cropping systems for advanced biofuels in the Intermountain West

    USDA-ARS?s Scientific Manuscript database

    Lignocellulosic biomass studies are being conducted to evaluate perennial herbaceous feedstocks and to determine their field performance and adaptation potential for biomass production in the Intermountain West. Field performance of four biomass entries and four inputs are being evaluated over a lo...

  16. Driven Boson Sampling.

    PubMed

    Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine

    2017-01-13

    Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.

  17. Water vapour correction of the daily 1 km AVHRR global land dataset: Part I validation and use of the Water Vapour input field

    USGS Publications Warehouse

    DeFelice, Thomas P.; Lloyd, D.; Meyer, D.J.; Baltzer, T. T.; Piraina, P.

    2003-01-01

    An atmospheric correction algorithm developed for the 1 km Advanced Very High Resolution Radiometer (AVHRR) global land dataset was modified to include a near real-time total column water vapour data input field to account for the natural variability of atmospheric water vapour. The real-time data input field used for this study is the Television and Infrared Observational Satellite (TIROS) Operational Vertical Sounder (TOVS) Pathfinder A global total column water vapour dataset. It was validated prior to its use in the AVHRR atmospheric correction process using two North American AVHRR scenes, namely 13 June and 28 November 1996. The validation results are consistent with those reported by others and entail a comparison between TOVS, radiosonde, experimental sounding, microwave radiometer, and data from a hand-held sunphotometer. The use of this data layer as input to the AVHRR atmospheric correction process is discussed.

  18. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    PubMed

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.

  19. Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input

    PubMed Central

    Hunt, Jonathan J.; Dayan, Peter; Goodhill, Geoffrey J.

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields. PMID:23675290
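
    The sparse coding models referred to in the two records above infer, for each input patch, a small set of active coefficients over a learned dictionary. The sketch below shows the generic inference step (an L1-penalized least-squares problem solved with ISTA) on random data; in practice the dictionary would additionally be learned from natural or rearing-condition image patches, and none of this is the authors' code.

    ```python
    import numpy as np

    def sparse_code(D, x, lam=0.1, n_iter=200):
        """Infer sparse coefficients a minimizing ||x - D a||^2 + lam * ||a||_1 via ISTA.

        D : dictionary of basis functions (columns), x : input patch (flattened).
        """
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            z = a - grad / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return a

    # Toy usage with a random normalized dictionary and a random "patch"
    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 128))
    D /= np.linalg.norm(D, axis=0)
    x = rng.normal(size=64)
    a = sparse_code(D, x)
    print("active coefficients:", int(np.sum(a != 0)))
    ```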

  20. Electrometer Amplifier With Overload Protection

    NASA Technical Reports Server (NTRS)

    Woeller, F. H.; Alexander, R.

    1986-01-01

    Circuit features low noise, input offset, and high linearity. Input preamplifier includes input-overload protection and nulling circuit to subtract dc offset from output. Prototype dc amplifier designed for use with ion detector has features desirable in general laboratory and field instrumentation.

  1. Snow stratigraphic heterogeneity within ground-based passive microwave radiometer footprints: Implications for emission modeling

    NASA Astrophysics Data System (ADS)

    Rutter, Nick; Sandells, Mel; Derksen, Chris; Toose, Peter; Royer, Alain; Montpetit, Benoit; Langlois, Alex; Lemmetyinen, Juha; Pulliainen, Jouni

    2014-03-01

    Two-dimensional measurements of snowpack properties (stratigraphic layering, density, grain size, and temperature) were used as inputs to the multilayer Helsinki University of Technology (HUT) microwave emission model at a centimeter-scale horizontal resolution, across a 4.5 m transect of ground-based passive microwave radiometer footprints near Churchill, Manitoba, Canada. Snowpack stratigraphy was complex (between six and eight layers) with only three layers extending continuously throughout the length of the transect. Distributions of one-dimensional simulations, accurately representing complex stratigraphic layering, were evaluated using measured brightness temperatures. Large biases (36 to 68 K) between simulated and measured brightness temperatures were minimized (-0.5 to 0.6 K), within measurement accuracy, through application of grain scaling factors (2.6 to 5.3) at different combinations of frequencies, polarizations, and model extinction coefficients. Grain scaling factors compensated for uncertainty relating optical specific surface area to HUT effective grain size inputs and quantified relative differences in scattering and absorption properties of various extinction coefficients. The HUT model required accurate representation of ice lenses, particularly at horizontal polarization, and large grain scaling factors highlighted the need to consider microstructure beyond the size of individual grains. As variability of extinction coefficients was strongly influenced by the proportion of large (hoar) grains in a vertical profile, it is important to consider simulations from distributions of one-dimensional profiles rather than single profiles, especially in sub-Arctic snowpacks where stratigraphic variability can be high. Model sensitivity experiments suggested that the level of error in field measurements and the new methodological framework used to apply them in a snow emission model were satisfactory. Layer amalgamation showed that a three-layer representation of snowpack stratigraphy reduced the bias of a one-layer representation by about 50%.
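
    The grain scaling factors quoted above multiply an optical-equivalent grain size derived from the measured specific surface area (SSA). A minimal sketch of that conversion is given below, using the standard relation d_opt = 6 / (ρ_ice · SSA); the SSA value and the chosen scaling factor are placeholders for illustration, not values from the transect.

    ```python
    # Convert measured specific surface area to an optical-equivalent grain diameter
    # and apply an empirical grain scaling factor before passing it to an emission model.
    RHO_ICE = 917.0          # kg m^-3, density of ice
    ssa = 15.0               # m^2 kg^-1, measured specific surface area (placeholder)
    phi = 3.0                # grain scaling factor, within the 2.6-5.3 range reported above

    d_opt = 6.0 / (RHO_ICE * ssa)   # optical-equivalent grain diameter (m)
    d_eff = phi * d_opt             # effective grain size passed to the emission model
    print(f"optical diameter: {d_opt * 1e3:.2f} mm, effective model input: {d_eff * 1e3:.2f} mm")
    ```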

  2. An Arbitrary Waveform Wearable Neuro-stimulator System for Neurophysiology Research on Freely Behaving Animals.

    PubMed

    Samani, Mohsen Mosayebi; Mahnam, Amin; Hosseini, Nasrin

    2014-04-01

    Portable wireless neuro-stimulators have been developed to facilitate long-term cognitive and behavioral studies of the central nervous system in freely moving animals. These stimulators can provide precisely controllable input(s) to the nervous system without distracting the animal's attention with cables connected to its body. In this study, a low-power backpack neuro-stimulator was developed for animal brain research; it can deliver arbitrary stimulus waveforms while remaining small and lightweight enough to be used with small animals, including rats. The system consists of a controller that uses an RF link to program and activate a small and light microprocessor-based stimulator. A Howland current source was implemented to produce precise, current-controlled, arbitrary-waveform stimulation. The system was optimized for ultra-low power consumption and small size. The stimulator was first tested for its electrical specifications. Its performance was then evaluated in a rat experiment in which electrical stimulation of the medial longitudinal fasciculus induced circling behavior. The stimulator is capable of delivering programmed stimulation up to ±2 mA with adjustment steps of 1 μA, an accuracy of 0.7%, and a compliance of 6 V. The stimulator is 15 mm × 20 mm × 40 mm in size, weighs 13.5 g without the battery, and consumes a total power of only 5.1 mW. In the experiment, the rat could easily carry the stimulator and demonstrated the circling behavior for 0.1 ms current pulses above 400 μA. The developed system has a competitive size and weight while providing a wide operating range and the flexibility to generate arbitrary stimulation patterns, making it well suited for long-term experiments in the fields of cognitive and neuroscience research.

  3. Fish and fire: Post-wildfire sediment dynamics and implications for the viability of trout populations

    NASA Astrophysics Data System (ADS)

    Murphy, B. P.; Czuba, J. A.; Belmont, P.; Budy, P.; Finch, C.

    2017-12-01

    Episodic events in steep landscapes, such as wildfire and mass wasting, contribute large pulses of sediment to rivers and can significantly alter the quality and connectivity of fish habitat. Understanding where these sediment inputs occur, how they are transported and processed through the watershed, and their geomorphic effect on the river network is critical to predicting the impact on ecological aquatic communities. The Tushar Mountains of southern Utah experienced a severe wildfire in 2010, resulting in numerous debris flows and the extirpation of trout populations. Following many years of habitat and ecological monitoring in the field, we have developed a modeling framework that links post-wildfire debris flows, fluvial sediment routing, and population ecology in order to evaluate the impact and response of trout to wildfire. First, using the Tushar topographic and wildfire parameters, as well as stochastic precipitation generation, we predict the post-wildfire debris flow probabilities and volumes of mainstem tributaries using the Cannon et al. [2010] model. This produces episodic hillslope sediment inputs, which are delivered to a fluvial sediment, river-network routing model (modified from Czuba et al. [2017]). In this updated model, sediment transport dynamics are driven by time-varying discharge associated with the stochastic precipitation generation, include multiple grain sizes (including gravel), use mixed-size transport equations (Wilcock & Crowe [2003]), and incorporate channel slope adjustments with aggradation and degradation. Finally, with the spatially explicit adjustments in channel bed elevation and grain size, we utilize a new population viability analysis (PVA) model to predict the impact and recovery of fish populations in response to these changes in habitat. Our model provides a generalizable framework for linking physical and ecological models and for evaluating the extirpation risk of isolated fish populations throughout the Intermountain West to the increasing threat of wildfire.

  4. The SLH framework for modeling quantum input-output networks

    DOE PAGES

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

    2017-09-04

    Many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g. atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.
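
    For context, the core composition rule reviewed in this work is commonly stated as follows (the standard series product from the SLH literature, given here as background rather than quoted from the record): feeding the output of a node $G_1 = (S_1, L_1, H_1)$ into the input of a node $G_2 = (S_2, L_2, H_2)$ yields the reduced model

    $$G_2 \triangleleft G_1 = \Big( S_2 S_1,\; L_2 + S_2 L_1,\; H_1 + H_2 + \tfrac{1}{2i}\big( L_2^\dagger S_2 L_1 - L_1^\dagger S_2^\dagger L_2 \big) \Big).$$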

  5. The SLH framework for modeling quantum input-output networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

    Many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g. atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.

  6. The Synergy of Class Size Reduction and Classroom Quality

    ERIC Educational Resources Information Center

    Graue, Elizabeth; Rauscher, Erica; Sherfinski, Melissa

    2009-01-01

    A contextual approach to understanding class size reduction includes attention to both educational inputs and processes. Based on our study of a class size reduction program in Wisconsin we explore the following question: How do class size reduction and classroom quality interact to produce learning opportunities in early elementary classrooms? To…

  7. Analyzing panel acoustic contributions toward the sound field inside the passenger compartment of a full-size automobile.

    PubMed

    Wu, Sean F; Moondra, Manmohan; Beniwal, Ravi

    2015-04-01

    The Helmholtz equation least squares (HELS)-based nearfield acoustical holography (NAH) is utilized to analyze panel acoustic contributions toward the acoustic field inside the interior region of an automobile. Specifically, the acoustic power flows from individual panels are reconstructed, and relative contributions to sound pressure level and spectrum at any point of interest are calculated. Results demonstrate that by correlating the acoustic power flows from individual panels to the field acoustic pressure, one can correctly locate the panel allowing the most acoustic energy transmission into the vehicle interior. The panel on which the surface acoustic pressure amplitude is highest should not be taken as indicative of the panel responsible for the sound field in the vehicle passenger compartment. Another significant advantage of this HELS-based NAH is that measurements of the input data only need to be taken once by using a conformal array of microphones in the near field, and ranking of panel acoustic contributions to any field point can be readily performed. The transfer functions from individual panels of any vibrating structure to the acoustic pressure anywhere in space are calculated, not measured, thus significantly reducing the time and effort involved in panel acoustic contribution analyses.

  8. Neural networks with local receptive fields and superlinear VC dimension.

    PubMed

    Schmitt, Michael

    2002-04-01

    Local receptive field neurons comprise such well-known and widely used unit types as radial basis function (RBF) neurons and neurons with center-surround receptive field. We study the Vapnik-Chervonenkis (VC) dimension of feedforward neural networks with one hidden layer of these units. For several variants of local receptive field neurons, we show that the VC dimension of these networks is superlinear. In particular, we establish the bound Omega(W log k) for any reasonably sized network with W parameters and k hidden nodes. This bound is shown to hold for discrete center-surround receptive field neurons, which are physiologically relevant models of cells in the mammalian visual system, for neurons computing a difference of gaussians, which are popular in computational vision, and for standard RBF neurons, a major alternative to sigmoidal neurons in artificial neural networks. The result for RBF neural networks is of particular interest since it answers a question that has been open for several years. The results also give rise to lower bounds for networks with fixed input dimension. Regarding constants, all bounds are larger than those known thus far for similar architectures with sigmoidal neurons. The superlinear lower bounds contrast with linear upper bounds for single local receptive field neurons also derived here.

  9. Construction of trypanosome artificial mini-chromosomes.

    PubMed Central

    Lee, M G; E, Y; Axelrod, N

    1995-01-01

    We report the preparation of two linear constructs which, when transformed into the procyclic form of Trypanosoma brucei, become stably inherited artificial mini-chromosomes. The two constructs, one of 10 kb and the other of 13 kb, both contain a T.brucei PARP promoter driving a chloramphenicol acetyltransferase (CAT) gene. In the 10 kb construct the CAT gene is followed by one hygromycin phosphotransferase (Hph) gene, and in the 13 kb construct the CAT gene is followed by three tandemly linked Hph genes. At each end of these linear molecules are telomere repeats and subtelomeric sequences. Electroporation of these linear DNA constructs into the procyclic form of T.brucei generated hygromycin-B resistant cell lines. In these cell lines, the input DNA remained linear and bounded by the telomere ends, but it increased in size. In the cell lines generated by the 10 kb construct, the input DNA increased in size to 20-50 kb. In the cell lines generated by the 13 kb constructs, two sizes of linear DNAs containing the input plasmid were detected: one of 40-50 kb and the other of 150 kb. The increase in size was not the result of in vivo tandem repetitions of the input plasmid, but represented the addition of new sequences. These Hph-containing linear DNA molecules were maintained stably in cell lines for at least 20 generations in the absence of drug selection and were subsequently referred to as trypanosome artificial mini-chromosomes, or TACs. PMID:8532534

  10. Vastly accelerated linear least-squares fitting with numerical optimization for dual-input delay-compensated quantitative liver perfusion mapping.

    PubMed

    Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal

    2018-04-01

    To propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
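
    As an illustration of why a linearized fit can be so much faster than iterative NLLS (a schematic sketch only; the published method additionally handles delay compensation and whole-field vectorization): integrating a dual-input single-compartment model C'(t) = ka·Ca(t) + kp·Cp(t) - k2·C(t) makes the unknown rate constants appear linearly, so each voxel can be solved with a single non-iterative least-squares step.

    ```python
    import numpy as np

    # Illustrative sketch (not the paper's exact algorithm): after integrating
    #   C'(t) = ka*Ca(t) + kp*Cp(t) - k2*C(t)
    # the unknowns (ka, kp, k2) enter linearly:
    #   C(t) = ka*Int(Ca) + kp*Int(Cp) - k2*Int(C)

    def cumint(y, dt):
        """Cumulative trapezoidal integral of y sampled at spacing dt."""
        out = np.zeros_like(y, dtype=float)
        out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)
        return out

    dt = 0.25                                   # four frames per second
    t = np.arange(0, 60, dt)
    Ca = np.exp(-0.5 * (t - 10) ** 2 / 4)       # toy arterial input function
    Cp = np.exp(-0.5 * (t - 14) ** 2 / 9)       # toy portal-venous input function

    # Simulate tissue curves for a few "voxels" with known rate constants.
    true = [(0.30, 0.80, 0.40), (0.10, 0.50, 0.20)]
    voxels = []
    for ka, kp, k2 in true:
        c = np.zeros_like(t)
        for j in range(1, len(t)):              # simple forward-Euler integration
            c[j] = c[j - 1] + dt * (ka * Ca[j - 1] + kp * Cp[j - 1] - k2 * c[j - 1])
        voxels.append(c)

    # One non-iterative linear least-squares solve per voxel.
    for c, truth in zip(voxels, true):
        A = np.column_stack([cumint(Ca, dt), cumint(Cp, dt), -cumint(c, dt)])
        ka, kp, k2 = np.linalg.lstsq(A, c, rcond=None)[0]
        print(np.round([ka, kp, k2], 3), "true:", truth)
    ```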

  11. Monte Carlo simulation of TrueBeam flattening-filter-free beams using Varian phase-space files: Comparison with experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belosi, Maria F.; Fogliata, Antonella, E-mail: antonella.fogliata-cozzi@eoc.ch, E-mail: afc@iosi.ch; Cozzi, Luca

    2014-05-15

    Purpose: Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening filter free (FFF) beams against experimental measurements from ten TrueBeam Linacs. Methods: The phase-space files have been used as input in PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF were computed in a virtual water phantom for field sizes 3 × 3, 6 × 6, and 10 × 10 cm^2 using 1 × 1 × 1 mm^3 voxels and for 20 × 20 and 40 × 40 cm^2 with 2 × 2 × 2 mm^3 voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a subsequent phase-space file was tallied. Particles were transported downstream from this second phase-space file to the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose-difference and distance-to-agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Results: Analysis of depth dose curves showed that dose differences increased with increasing field size and depth; this effect might be partly explained by an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm^2, while the discrepancy increased toward 2% in the 40 × 40 cm^2 cases. Gamma analysis resulted in an agreement of 100% when a 2%, 2 mm criterion was used, with the only exception of the 40 × 40 cm^2 field (∼95% agreement). With the more stringent criteria of 1%, 1 mm, the agreement was reduced to almost 95% for field sizes up to 10 × 10 cm^2, and was worse for larger fields. Unflatness and slope FFF-specific parameters are in line with the possible energy underestimation of the simulated results relative to experimental data. Conclusions: The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm^2, that is, the range of field sizes mostly used in combination with the FFF, high-dose-rate beams.

  12. Testing of information condensation in a model reverberating spiking neural network.

    PubMed

    Vidybida, Alexander

    2011-06-01

    Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of the individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, in particular the influence of the network's geometric size on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains. We run the dynamics until they become periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the net's geometric size.
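
    One common way to apply Shannon's formula to a binned spike train, shown here only as an illustration of the kind of calculation described (the binning and word-length choices are assumptions, not the authors' procedure): treat short "words" of consecutive bins as symbols and sum -p log2 p over the symbol probabilities.

    ```python
    import numpy as np
    from collections import Counter

    def spike_train_information(spikes, word_length=3):
        """Shannon entropy (bits) of words of consecutive bins in a binary
        spike train. Word length and binning are illustrative choices."""
        words = [tuple(spikes[i:i + word_length])
                 for i in range(len(spikes) - word_length + 1)]
        counts = Counter(words)
        n = sum(counts.values())
        p = np.array([c / n for c in counts.values()])
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    train = (rng.random(1000) < 0.2).astype(int)   # toy Poisson-like spike train
    print(round(spike_train_information(train), 3))
    ```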

  13. Callosal Influence on Visual Receptive Fields Has an Ocular, an Orientation- and Direction Bias.

    PubMed

    Conde-Ocazionez, Sergio A; Jungen, Christiane; Wunderle, Thomas; Eriksson, David; Neuenschwander, Sergio; Schmidt, Kerstin E

    2018-01-01

    One leading hypothesis on the nature of visual callosal connections (CC) is that they replicate features of intrahemispheric lateral connections. However, CC also act in the central part of the binocular visual field. In agreement, early experiments in cats indicated that they provide the ipsilateral eye part of binocular receptive fields (RFs) at the vertical midline (Berlucchi and Rizzolatti, 1968), and play a key role in stereoscopic function. But until now, callosal inputs to receptive fields activated by one or both eyes have never been compared simultaneously, because callosal function has often been studied by cutting or lesioning either the corpus callosum or the optic chiasm, which does not allow such a comparison. To investigate the functional contribution of CC in the intact cat visual system, we recorded both monocular and binocular neuronal spiking responses and receptive fields in the 17/18 transition zone during reversible deactivation of the contralateral hemisphere. Unexpectedly, given many of the previous reports, we observe no change in ocular dominance during CC deactivation. Throughout the transition zone, a majority of RFs shrink, but several also increase in size. RFs are significantly more affected for ipsi- as opposed to contralateral stimulation, but changes are also observed with binocular stimulation. Notably, RF shrinkages are tiny and not correlated with the profound decreases of monocular and binocular firing rates. They depend more on the orientation and direction preference than on the eccentricity or ocular dominance of the receiving neuron's RF. Our findings confirm that in binocularly viewing mammals, binocular RFs near the midline are constructed via the direct geniculo-cortical pathway. They also support the idea that inputs from the two eyes complement each other through CC: rather than linking parts of RFs separated by the vertical meridian, CC convey a modulatory influence, reflecting the feature selectivity of lateral circuits, with a strong cardinal bias.

  14. Integration of Well & Core Data of Carbonate Reservoirs with Surface Seismic in Garraf Oil Field, Southern Iraq

    NASA Astrophysics Data System (ADS)

    Mhuder, J. J.; Muhlhl, A. A.; Basra Geologists

    2013-05-01

    The Garraf field is situated in southern Iraq in the Nasiriya area, within the Mesopotamian basin. Carbonate facies dominate the main reservoirs of the Garraf field (the Mishrif and Yamama Formations), which are Cretaceous in age. The reservoir structure in this field is a low-relief, gentle anticline aligned in a NW-SE direction, and no faults were observed or interpreted in the 3D seismic sections. A 3D seismic survey was successfully conducted on the Garraf field in 2008-2009 by Iraqi Oil Exploration Company No. 2, using a SERCEL 408UL recording system and Nomad 65 vibrators (bin size: 25 × 25; fold: 36; SP interval: 50 m; line interval: 300 m). Three wells (Ga-1, Ga-2 and Ga-3) were drilled and used for seismic-to-well ties in Petrel. Data analysis was conducted for each reservoir; lithological and sedimentological studies were based on core and well data. The study showed that the Mishrif Formation was deposited on a broad carbonate platform as a shallowing-upward regressive succession, with depositional environments extending from outer marine to shallow middle-inner shelf settings with restricted lagoons, as supported by the presence of miliolid fossils. The fragmented rudist biostromes accumulated on the middle shelf; no rudist reef is present in the studied cores. The major sequences of the Yamama Formation are micritic limestones of lagoonal origin and oolitic/peloidal grainstone sandy shoals separated by mudstones. Sedimentation features are seen on seismic attributes, which helps in understanding the depositional environment and supports a suitable structural interpretation. There is a good relationship between acoustic impedance (AI) and porosity; AI reflects porosity or facies changes of the carbonates rather than fluid content. Data input used for 3D modeling included 3D seismic and AI data, petrophysical analysis, and core and thin-section descriptions. A 3D structural model was created based on the geophysical data interpretation and AI analysis, and analysis of the AI data was run as a secondary input for 3D property modeling.

  15. Perpendicular momentum input of lower hybrid waves and its influence on driving plasma rotation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Xiaoyin

    The mechanism of perpendicular momentum input of lower hybrid waves and its influence on plasma rotation are studied. A discussion of parallel momentum input of lower hybrid waves is presented for comparison. It is found that both toroidal and poloidal projections of the perpendicular momentum input of lower hybrid waves are stronger than those of the parallel momentum input. The perpendicular momentum input of lower hybrid waves therefore plays a dominant role in forcing the changes of rotation velocity observed during lower hybrid current drive. Lower hybrid waves convert perpendicular momentum carried by the waves into the momentum of the dc electromagnetic field by inducing a resonant-electron flow across flux surfaces, and therefore charge separation and a radial dc electric field. The dc field releases its momentum into the plasma through the Lorentz force acting on the radial return current driven by the radial electric field. The plasma is spun up by the Lorentz force. An improved quasilinear theory with a gyro-phase-dependent distribution function is developed to calculate the radial flux of resonant electrons. Rotations are determined by a set of fluid equations for bulk electrons and ions, which are solved numerically by applying a finite-difference method. Analytical expressions for toroidal and poloidal rotations are derived using the same hydrodynamic model.

  16. Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J.W.

    1998-08-07

    This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. The report contains the following calculations: Exhaust airflow sizing for Tank 241-C-106; Equipment sizing and selection of recirculation fan; Sizing of high-efficiency mist eliminator; Sizing of electric heating coil; Equipment sizing and selection of recirculation condenser; Chiller skid system sizing and selection; High-efficiency metal filter shielding input and flushing frequency; and Exhaust skid stack sizing and fan sizing.

  17. Experimental observation of self excited co-rotating multiple vortices in a dusty plasma with inhomogeneous plasma background

    NASA Astrophysics Data System (ADS)

    Choudhary, Mangilal; Mukherjee, S.; Bandyopadhyay, P.

    2017-03-01

    We report an experimental observation of multiple co-rotating vortices in an extended dust column in the background of an inhomogeneous diffused plasma. An inductively coupled rf discharge is initiated in the background of argon gas in the source region. This plasma was later found to diffuse into the main experimental chamber. A secondary DC glow discharge plasma is produced to introduce dust particles into the plasma volume. These micron-sized poly-disperse dust particles get charged in the background of the DC plasma and are transported by the ambipolar electric field of the diffused plasma. These transported particles are found to be confined in an electrostatic potential well, where the resultant electric field due to the diffused plasma (ambipolar E-field) and glass wall charging (sheath E-field) holds the micron-sized particles against gravity. Multiple co-rotating (anti-clockwise) dust vortices are observed in the dust cloud for a particular discharge condition. The transition from multiple vortices to a single dust vortex is observed when the input rf power is lowered. The occurrence of these vortices is explained on the basis of the charge gradient of dust particles, which is orthogonal to the ion drag force. The charge gradient is a consequence of the plasma inhomogeneity along the dust cloud length. The detailed nature and the reason for multiple vortices are still under investigation through further experiments; however, a preliminary qualitative understanding is discussed based on the characteristic scale length of the dust vortex. There is a characteristic size of the vortex in the dusty plasma; therefore, multiple vortices could possibly be formed in an extended dusty plasma with an inhomogeneous plasma background. The experimental results on the vortex motion of particles are compared with a theoretical model and are found to be in close agreement.

  18. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed Supplement 1 to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximate distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
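
    A minimal sketch of the general idea (illustrative only; the measurement equation, the bootstrap resampling of the input parameters, and all numbers are assumptions, not the authors' procedure): an outer loop draws plausible input-distribution parameters consistent with the small observed samples, and an inner loop propagates each candidate distribution through the measurement equation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def measurand(x1, x2):
        """Toy non-linear measurement equation (illustrative only)."""
        return x1 * np.exp(0.1 * x2)

    # Small observed samples of the two inputs (finite sample sizes).
    s1 = rng.normal(10.0, 0.5, size=8)
    s2 = rng.normal(5.0, 1.0, size=5)

    outer, inner = 2000, 2000
    means = np.empty(outer)
    for i in range(outer):
        # Stage 1: draw candidate input-distribution parameters, here via a
        # simple bootstrap of each sample mean (one of several possible choices).
        m1, sd1 = rng.choice(s1, len(s1)).mean(), s1.std(ddof=1)
        m2, sd2 = rng.choice(s2, len(s2)).mean(), s2.std(ddof=1)
        # Stage 2: propagate that candidate distribution through the model.
        y = measurand(rng.normal(m1, sd1, inner), rng.normal(m2, sd2, inner))
        means[i] = y.mean()

    lo, hi = np.percentile(means, [2.5, 97.5])
    print(f"95% interval for the measurand mean: ({lo:.3f}, {hi:.3f})")
    ```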

  19. Estimates of the location of L-type Ca2+ channels in motoneurons of different sizes: a computational study.

    PubMed

    Grande, Giovanbattista; Bui, Tuan V; Rose, P Ken

    2007-06-01

    In the presence of monoamines, L-type Ca(2+) channels on the dendrites of motoneurons contribute to persistent inward currents (PICs) that can amplify synaptic inputs two- to sixfold. However, the exact location of the L-type Ca(2+) channels is controversial, and the importance of the location as a means of regulating the input-output properties of motoneurons is unknown. In this study, we used a computational strategy developed previously to estimate the dendritic location of the L-type Ca(2+) channels and test the hypothesis that the location of L-type Ca(2+) channels varies as a function of motoneuron size. Compartmental models were constructed based on dendritic trees of five motoneurons that ranged in size from small to large. These models were constrained by known differences in PIC activation reported for low- and high-conductance motoneurons and the relationship between somatic PIC threshold and the presence or absence of tonic excitatory or inhibitory synaptic activity. Our simulations suggest that L-type Ca(2+) channels are concentrated in hotspots whose distance from the soma increases with the size of the dendritic tree. Moving the hotspots away from these sites (e.g., using the hotspot locations from large motoneurons on intermediate-sized motoneurons) fails to replicate the shifts in PIC threshold that occur experimentally during tonic excitatory or inhibitory synaptic activity. In models equipped with a size-dependent distribution of L-type Ca(2+) channels, the amplification of synaptic current by PICs depends on motoneuron size and the location of the synaptic input on the dendritic tree.

  20. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    NASA Astrophysics Data System (ADS)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms, we assume that it is possible to perfectly generate superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions over input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, worked with a one-sided error property. For Simon's algorithm, the success probability of the generalized algorithm with equiprobable superpositions is the same as that of the original for input sets of arbitrary cardinality, since, for the case where the function is 2-to-1, the property that the measured strings are all those having dot product zero with the string being searched for is not lost.

  1. Potential flow theory and operation guide for the panel code PMARC

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steve K.; Browne, Lindsey; Katz, Joseph

    1991-01-01

    The theoretical basis for PMARC, a low-order potential-flow panel code for modeling complex three-dimensional geometries, is outlined. Several of the advanced features currently included in the code, such as internal flow modeling, a simple jet model, and a time-stepping wake model, are discussed in some detail. The code is written using adjustable size arrays so that it can be easily redimensioned for the size problem being solved and the computer hardware being used. An overview of the program input is presented, with a detailed description of the input available in the appendices. Finally, PMARC results for a generic wing/body configuration are compared with experimental data to demonstrate the accuracy of the code. The input file for this test case is given in the appendices.

  2. High-performance reconfigurable coincidence counting unit based on a field programmable gate array.

    PubMed

    Park, Byung Kwon; Kim, Yong-Su; Kwon, Osung; Han, Sang-Wook; Moon, Sung

    2015-05-20

    We present a high-performance reconfigurable coincidence counting unit (CCU) using a low-end field programmable gate array (FPGA) and peripheral circuits. Because of the flexibility guaranteed by the FPGA program, we can easily change system parameters, such as internal input delays, coincidence configurations, and the coincidence time window. In spite of a low-cost implementation, the proposed CCU architecture outperforms previous ones in many aspects: it has 8 logic inputs and 4 coincidence outputs that can measure up to eight-fold coincidences. The minimum coincidence time window and the maximum input frequency are 0.47 ns and 163 MHz, respectively. The CCU will be useful in various experimental research areas, including the field of quantum optics and quantum information.

  3. Biorefinery of the macroalgae Ulva lactuca: extraction of proteins and carbohydrates by mild disintegration.

    PubMed

    Postma, P R; Cerezo-Chinarro, O; Akkerman, R J; Olivieri, G; Wijffels, R H; Brandenburg, W A; Eppink, M H M

    2018-01-01

    The effect of osmotic shock, enzymatic incubation, pulsed electric field, and high shear homogenization on the release of water-soluble proteins and carbohydrates from the green alga Ulva lactuca was investigated in this screening study. For osmotic shock, both temperature and incubation time had a significant influence on the release, with an optimum at 30 °C for 24 h of incubation. For enzymatic incubation, pectinase proved to be the most promising enzyme for both protein and carbohydrate release. Pulsed electric field treatment was optimal at an electric field strength of 7.5 kV cm^-1 with 0.05 ms pulses and a specific energy input relative to the released protein as low as 6.6 kWh kg_prot^-1. Compared with the literature, this study reported the highest protein (~39%) and carbohydrate (~51%) yields of the four technologies using high shear homogenization. Additionally, an energy reduction of up to 86% was achieved by applying a novel two-phase (macrostructure size reduction and cell disintegration) technique.

  4. Milled cereal straw accelerates earthworm (Lumbricus terrestris) growth more than selected organic amendments.

    PubMed

    Sizmur, Tom; Martin, Elodie; Wagner, Kevin; Parmentier, Emilie; Watts, Chris; Whitmore, Andrew P

    2017-05-01

    Earthworms benefit agriculture by providing several ecosystem services. Therefore, strategies to increase earthworm abundance and activity in agricultural soils should be identified, and encouraged. Lumbricus terrestris earthworms primarily feed on organic inputs to soils but it is not known which organic amendments are the most effective for increasing earthworm populations. We conducted earthworm surveys in the field and carried out experiments in single-earthworm microcosms to determine the optimum food source for increasing earthworm biomass using a selection of crop residues and organic wastes available to agriculture. We found that although farmyard manure increased earthworm populations more than cereal straw in the field, straw increased earthworm biomass more than manures when milled and applied to microcosms. Earthworm growth rates were positively correlated with the calorific value of the amendment and straw had a much higher calorific value than farmyard manure, greenwaste compost, or anaerobic digestate. Reducing the particle size of straw by milling to <3 mm made the energy in the straw more accessible to earthworms. The benefits and barriers to applying milled straw to arable soils in the field are discussed.

  5. Fatigue of extracted lead zirconate titanate multilayer actuators under unipolar high field electric cycling

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Lee, Sung-Min; Wang, James L.; Lin, Hua-Tay

    2014-12-01

    Testing of large prototype lead zirconate titanate (PZT) stacks presents substantial technical challenges to electronic testing systems, so an alternative approach that uses subunits extracted from prototypes has been pursued. Extracted 10-layer and 20-layer plate specimens were subjected to an electric cycle test under an electric field of 3.0/0.0 kV/mm, 100 Hz to 10^8 cycles. The effects of measurement field level and stack size (number of PZT layers) on the fatigue responses of piezoelectric and dielectric coefficients were observed. On-line monitoring permitted examination of the fatigue response of the PZT stacks. The fatigue rate (based on on-line monitoring) and the fatigue index (based on the conductance spectrum from impedance measurement or small signal measurement) were developed to quantify the fatigue status of the PZT stacks. The controlling fatigue mechanism was analyzed against the fatigue observations. The data presented can serve as input to design optimization of PZT stacks and to operation optimization in critical applications, such as piezoelectric fuel injectors in heavy-duty diesel engines.

  6. Fatigue of extracted lead zirconate titanate multilayer actuators under unipolar high field electric cycling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Lee, Sung Min; Wang, James L.

    Testing of large prototype lead zirconate titanate (PZT) stacks presents substantial technical challenges to electronic testing systems, so an alternative approach that uses subunits extracted from prototypes has been pursued. Extracted 10-layer and 20-layer plate specimens were subjected to an electric cycle test under an electric field of 3.0/0.0 kV/mm, 100 Hz to 10^8 cycles. The effects of measurement field level and stack size (number of PZT layers) on the fatigue responses of piezoelectric and dielectric coefficients were observed. On-line monitoring permitted examination of the fatigue response of the PZT stacks. The fatigue rate (based on on-line monitoring) and the fatigue index (based on the conductance spectrum from impedance measurement or small signal measurement) were developed to quantify the fatigue status of the PZT stacks. The controlling fatigue mechanism was analyzed against the fatigue observations. The data presented can serve as input to design optimization of PZT stacks and to operation optimization in critical applications such as piezoelectric fuel injectors in heavy-duty diesel engines.

  7. Fatigue of extracted lead zirconate titanate multilayer actuators under unipolar high field electric cycling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong, E-mail: wangh@ornl.gov; Lee, Sung-Min; Wang, James L.

    Testing of large prototype lead zirconate titanate (PZT) stacks presents substantial technical challenges to electronic testing systems, so an alternative approach that uses subunits extracted from prototypes has been pursued. Extracted 10-layer and 20-layer plate specimens were subjected to an electric cycle test under an electric field of 3.0/0.0 kV/mm, 100 Hz to 10^8 cycles. The effects of measurement field level and stack size (number of PZT layers) on the fatigue responses of piezoelectric and dielectric coefficients were observed. On-line monitoring permitted examination of the fatigue response of the PZT stacks. The fatigue rate (based on on-line monitoring) and the fatigue index (based on the conductance spectrum from impedance measurement or small signal measurement) were developed to quantify the fatigue status of the PZT stacks. The controlling fatigue mechanism was analyzed against the fatigue observations. The data presented can serve as input to design optimization of PZT stacks and to operation optimization in critical applications, such as piezoelectric fuel injectors in heavy-duty diesel engines.

  8. Fatigue of extracted lead zirconate titanate multilayer actuators under unipolar high field electric cycling

    DOE PAGES

    Wang, Hong; Lee, Sung Min; Wang, James L.; ...

    2014-12-19

    Testing of large prototype lead zirconate titanate (PZT) stacks presents substantial technical challenges to electronic testing systems, so an alternative approach that uses subunits extracted from prototypes has been pursued. Extracted 10-layer and 20-layer plate specimens were subjected to an electric cycle test under an electric field of 3.0/0.0 kV/mm, 100 Hz to 10^8 cycles. The effects of measurement field level and stack size (number of PZT layers) on the fatigue responses of piezoelectric and dielectric coefficients were observed. On-line monitoring permitted examination of the fatigue response of the PZT stacks. The fatigue rate (based on on-line monitoring) and the fatigue index (based on the conductance spectrum from impedance measurement or small signal measurement) were developed to quantify the fatigue status of the PZT stacks. The controlling fatigue mechanism was analyzed against the fatigue observations. The data presented can serve as input to design optimization of PZT stacks and to operation optimization in critical applications such as piezoelectric fuel injectors in heavy-duty diesel engines.

  9. Floc size distributions of suspended kaolinite in an advection transport dominated tank: measurements and modeling

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoteng; Maa, Jerome P.-Y.

    2017-11-01

    In estuaries and coastal waters, the floc size of cohesive sediments and its statistical distribution are of primary importance, due to their effects on the settling velocity and thus the deposition rates of cohesive aggregates. The development of a robust flocculation model that includes the prediction of floc size distributions (FSDs), however, is still at a research stage. In this study, a one-dimensional longitudinal (1-DL) flocculation model along a streamtube is developed. This model is based on solving the population balance equation to find the FSDs by using the quadrature method of moments. To validate this model, a laboratory experiment is carried out to produce an advection-transport-dominant environment in a cylindrical tank. The flow field is generated by a marine pump mounted at the bottom center, with its outlet facing upward. This setup generates an axially symmetric flow which is measured by an acoustic Doppler velocimeter (ADV). The measurement results provide the hydrodynamic input data required for this 1-DL model. The other measurement results, the FSDs, are acquired by using an automatic underwater camera system, and the resulting images are analyzed to validate the predicted FSDs. This study shows that the FSDs, as well as their representative sizes, can be efficiently and reasonably simulated by this 1-DL model.

  10. Halophilic viruses with varying biochemical and biophysical properties are amenable to purification with asymmetrical flow field-flow fractionation.

    PubMed

    Eskelin, Katri; Lampi, Mirka; Meier, Florian; Moldenhauer, Evelin; Bamford, Dennis H; Oksanen, Hanna M

    2017-11-01

    Viruses come in various shapes and sizes, and a number of viruses originate from extreme environments, e.g. high salinity or elevated temperature. One challenge for studying extreme viruses is to find efficient purification conditions under which the viruses maintain their infectivity. Asymmetrical flow field-flow fractionation (AF4) is a gentle, native, chromatography-like technique for size-based separation. It has no solid stationary phase, and the mobile phase composition is readily adjustable according to the sample needs. Because of its high separation power for specimens up to 50 µm, AF4 is suitable for virus purification. Here, we applied AF4 to extremophilic viruses representing four morphotypes: lemon-shaped, tailed and tailless icosahedral, as well as pleomorphic enveloped. AF4 was applied to input samples of different purity: crude supernatants of infected cultures, polyethylene glycol-precipitated viruses, and viruses purified by ultracentrifugation. All four virus morphotypes were successfully purified by AF4. AF4 purification of culture supernatants or polyethylene glycol-precipitated viruses yielded high recoveries, and the purities were comparable to those obtained by the multistep ultracentrifugation purification methods. In addition, we also demonstrate that AF4 is a rapid monitoring tool for virus production in slowly growing host cells living in extreme conditions.

  11. The interference of electronic implants in low frequency electromagnetic fields.

    PubMed

    Silny, J

    2003-04-01

    Electronic implants such as cardiac pacemakers or nerve stimulators can be impaired in different ways by amplitude-modulated and even continuous electric or magnetic fields of strong field intensity. For the implant bearer, the possible consequences of a temporary electromagnetic interference may range from a harmless impairment of well-being to a perilous predicament. Electromagnetic interference in all types of implants cannot be covered here, due to their various locations in the body and their different sensing systems. Therefore, this presentation focuses exemplarily on the most frequently used implant, the cardiac pacemaker. In the case of electromagnetic interference, the cardiac pacemaker reacts by switching to inhibition mode or to fast asynchronous pacing. At a higher disturbance voltage on the input of the pacemaker, regular asynchronous pacing is likely to arise. In particular, the first-named interference could be highly dangerous for the pacemaker patient. The interference threshold of cardiac pacemakers depends in a complex way on a number of different factors, such as: the electromagnetic immunity and adjustment of the pacemaker, the composition of the applied low-frequency fields (only electric or magnetic fields or combinations of both), their frequencies and modulations, the type of pacemaker system (bipolar, unipolar) and its location in the body, as well as the body size and orientation in the field, and, last but not least, certain physiological conditions of the patient (e.g. inhalation, exhalation). In extensive laboratory studies we have investigated the interference mechanisms in more than 100 cardiac pacemakers (older types as well as current models) and the resulting worst-case conditions for pacemaker patients in low-frequency electric and magnetic fields. The verification of these results in different practical everyday-life situations, e.g. in the fields of high-voltage overhead lines or those of electronic article surveillance systems, is currently in progress. In the case of vertically oriented 50 Hz electric fields, preliminary results show that per 1 kV/m of undisturbed electric field strength (rms), a worst-case interference voltage of about 400 microVpp could occur at the input of a unipolar, ventricularly controlled, left-pectorally implanted cardiac pacemaker. Thus, a field strength above ca. 5 kV/m could already cause an interference with an implanted pacemaker. Magnetic fields induce an electric disturbance voltage at the input of the pacemaker. The body and the pacemaker system compose several induction loops, whose induced voltages add or subtract. The effective area of one representative inductive loop ranges from 100 to 221 cm^2. For the unfavourable left-pectorally implanted and atrially controlled pacemaker with a low interference threshold, the interference threshold ranges between 552 and 16 microT (rms) for magnetic fields at frequencies between 10 and 250 Hz. On this basis, the occurrence of interference with implanted pacemakers is possible in everyday-life situations. But experiments demonstrate a low probability of interference of cardiac pacemakers in practical situations. This apparent contradiction can be explained by a very small band of inhibition in most pacemakers and, in comparison with the worst case, deviating conditions in practice.
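
    As a rough order-of-magnitude illustration of the induction mechanism described above (illustrative numbers, not values taken from the study): for a sinusoidal magnetic field of frequency $f$ and rms flux density $B$ threading an effective loop area $A$, the induced rms voltage is approximately $V_{\mathrm{ind}} \approx 2\pi f B A$; e.g. $f = 50\,\mathrm{Hz}$, $B = 100\,\mu\mathrm{T}$ and $A = 200\,\mathrm{cm}^2 = 0.02\,\mathrm{m}^2$ give $V_{\mathrm{ind}} \approx 2\pi \cdot 50 \cdot 10^{-4} \cdot 0.02 \approx 0.63\,\mathrm{mV}$.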

  12. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    PubMed

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.

  13. U.S. Geological Survey assessment concepts for conventional petroleum accumulations: Chapter 24 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    USGS Publications Warehouse

    Schmoker, James W.; Klett, T.R.

    2007-01-01

    Conventional petroleum accumulations are discrete fields or pools localized in structural or stratigraphic traps by the buoyancy of oil or gas in water; they float, bubble-like, in water. This report describes the fundamental concepts supporting the U.S. Geological Survey “Seventh Approximation” model for resource assessments of conventional accumulations. The Seventh Approximation provides a strategy for estimating volumes of undiscovered petroleum (oil, gas, and coproducts) having the potential to be added to reserves in a 30-year forecast span. The assessment of an area requires (1) choice of a minimum accumulation size, (2) assignment of geologic and access risk, and (3) estimation of the number and sizes of undiscovered accumulations in the assessment area. The combination of these variables yields probability distributions for potential additions to reserves. Assessment results are controlled by geology-based input parameters supplied by knowledgeable geologists, as opposed to projections of historical trends.

  14. Sampling, testing and modeling particle size distribution in urban catch basins.

    PubMed

    Garofalo, G; Carbone, M; Piro, P

    2014-01-01

    The study analyzed the particle size distribution of particulate matter (PM) retained in two catch basins located, respectively, near a parking lot and a traffic intersection with common high levels of traffic activity. Also, the treatment performance of a filter medium was evaluated by laboratory testing. The experimental treatment results and the field data were then used as inputs to a numerical model which described on a qualitative basis the hydrological response of the two catchments draining into each catch basin, respectively, and the quality of treatment provided by the filter during the measured rainfall. The results show that PM concentrations were on average around 300 mg/L (parking lot site) and 400 mg/L (road site) for the 10 rainfall-runoff events observed. PM with a particle diameter of <45 μm represented 40-50% of the total PM mass. The numerical model showed that a catch basin with a filter unit can remove 30 to 40% of the PM load depending on the storm characteristics.

  15. Cellulose polymorphy, crystallite size, and the Segal crystallinity index

    USDA-ARS?s Scientific Manuscript database

    The X-ray diffraction-based Segal Crystallinity Index (CI) was calculated for simulated different sizes of crystallites for cellulose I' and II. The Mercury software was used, and different crystallite sizes were based on different input peak widths at half of the maximum peak intensity (pwhm). The ...

  16. Dataset on the mean, standard deviation, broad-sense heritability and stability of wheat quality bred in three different ways and grown under organic and low-input conventional systems.

    PubMed

    Rakszegi, Marianna; Löschenberger, Franziska; Hiltbrunner, Jürg; Vida, Gyula; Mikó, Péter

    2016-06-01

    An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.
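
    For reference, broad-sense heritability in this kind of dataset is conventionally defined as the share of phenotypic variance attributable to total genetic variance (the standard definition, stated here as background rather than quoted from the data descriptor): $H^2 = \sigma_G^2 / \sigma_P^2$, where in the simplest case $\sigma_P^2 = \sigma_G^2 + \sigma_E^2$; multi-environment trials typically also partition out a genotype-by-environment variance component.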

  17. Influence of the mode of deformation on recrystallisation behaviour of titanium through experiments, mean field theory and phase field model

    NASA Astrophysics Data System (ADS)

    Athreya, C. N.; Mukilventhan, A.; Suwas, Satyam; Vedantam, Srikanth; Subramanya Sarma, V.

    2018-04-01

    The influence of the mode of deformation on the recrystallisation behaviour of Ti was studied by experiments and modelling. Ti samples were deformed through torsion and rolling to the same equivalent strain of 0.5. The deformed samples were annealed at different temperatures for different time durations, and the recrystallisation kinetics were compared. Recrystallisation is found to be faster in the rolled samples compared to the torsion-deformed samples. This is attributed to the differences in stored energy and number of nuclei per unit area in the two modes of deformation. Considering the decay in stored energy during recrystallisation, the grain boundary mobility was estimated through a mean field model. The activation energy for recrystallisation obtained from experiments matched the activation energy for grain boundary migration obtained from the mobility calculation. A multi-phase field model (with mobility estimated from the mean field model as a constitutive input) was used to simulate the kinetics, microstructure and texture evolution. The recrystallisation kinetics and grain size distributions obtained from experiments matched reasonably well with the phase field simulations. The recrystallisation texture predicted through phase field simulations compares well with experiments, although a few additional texture components are present in the simulations. This is attributed to the anisotropy in grain boundary mobility, which is not accounted for in the present study.

  18. Memory-Augmented Cellular Automata for Image Analysis.

    DTIC Science & Technology

    1978-11-01

    case in which each cell has memory size proportional to the logarithm of the input size, showing the increased capabilities of these machines for executing a variety of basic image analysis and recognition tasks. (Author)

  19. Extrapolation of rotating sound fields.

    PubMed

    Carley, Michael

    2018-03-01

    A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.

  20. How Much Input Do You Need to Learn the Most Frequent 9,000 Words?

    ERIC Educational Resources Information Center

    Nation, Paul

    2014-01-01

    This study looks at how much input is needed to gain enough repetition of the 1st 9,000 words of English for learning to occur. It uses corpora of various sizes and composition to see how many tokens of input would be needed to gain at least twelve repetitions and to meet most of the words at eight of the nine 1000 word family levels. Corpus sizes…

  1. Zero-dynamics principle for perfect quantum memory in linear networks

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoki; James, Matthew R.

    2014-07-01

    In this paper, we study a general linear networked system that contains a tunable memory subsystem; that is, it is decoupled from an optical field for state transportation during the storage process, while it couples to the field during the writing or reading process. The input is given by a single-photon state or a coherent state in a pulsed light field. We then completely and explicitly characterize the condition required on the pulse shape for achieving perfect state transfer from the light field to the memory subsystem. The key idea used to obtain this result is the zero-dynamics principle, which in our case means that, for perfect state transfer, the output field during the writing process must be a vacuum. A useful interpretation of the result in terms of the transfer function is also given. Moreover, a four-node network composed of atomic ensembles is studied as an example, demonstrating how the input field state is transferred to the memory subsystem and what the input pulse shape to be engineered for perfect memory looks like.

  2. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
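
    One common way to realize this kind of simultaneous reduction is to project both the input and output spaces onto a few principal components and emulate in the reduced coordinates. The sketch below uses scikit-learn with PCA and an RBF kernel; these particular choices, and all names and problem sizes, are illustrative assumptions rather than the specific method of the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    # Toy "simulator": maps a high-dimensional input field to an output field.
    n_runs, d_in, d_out = 60, 500, 400
    X = rng.normal(size=(n_runs, d_in))                       # input realizations
    W = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)
    Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(n_runs, d_out))  # output fields

    # Reduce both spaces, emulate in the reduced coordinates, then lift back.
    pca_in, pca_out = PCA(n_components=10), PCA(n_components=10)
    Z = pca_in.fit_transform(X)                               # reduced inputs
    T = pca_out.fit_transform(Y)                              # reduced outputs

    gps = [GaussianProcessRegressor(kernel=RBF(length_scale=3.0),
                                    normalize_y=True).fit(Z, T[:, j])
           for j in range(T.shape[1])]                        # one GP per output PC

    def emulate(x_new):
        """Predict full output fields for new high-dimensional inputs."""
        z = pca_in.transform(x_new)
        t = np.column_stack([gp.predict(z) for gp in gps])
        return pca_out.inverse_transform(t)

    print(emulate(rng.normal(size=(2, d_in))).shape)          # (2, 400)
    ```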

  3. Experimental investigation of conical bubble structure and acoustic flow structure in ultrasonic field.

    PubMed

    Ma, Xiaojian; Huang, Biao; Wang, Guoyu; Zhang, Mindi

    2017-01-01

    The objective of this paper is to investigate the transient conical bubble structure (CBS) and the acoustic flow structure in an ultrasonic field. In the experiment, high-speed video and particle image velocimetry (PIV) techniques are used to measure the acoustic cavitation patterns, as well as the flow velocity and vorticity fields. Results are presented for high-power ultrasound with a frequency of 18 kHz and input powers ranging from 50 W to 250 W. The results show that the input power significantly affects the structure of the CBS: as the input power increases, the cavity region of the CBS and the velocity of the bubbles increase markedly. For the transient motion of bubbles on the radiating surface, two types of behaviour could be distinguished, namely the formation, aggregation and coalescence of cavitation bubbles, and the aggregation, shrinkage, expansion and collapse of bubble clusters. Furthermore, the turbulent boundary layer near the sonotrode is found to be much thicker, and the turbulent intensities much higher, at relatively higher input power. The vorticity distribution is prominently affected by spatial position and input power. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. The series product for gaussian quantum input processes

    NASA Astrophysics Data System (ADS)

    Gough, John E.; James, Matthew R.

    2017-02-01

    We present a theory for connecting quantum Markov components into a network with quantum input processes in a Gaussian state (including thermal and squeezed). One would expect on physical grounds that the connection rules should be independent of the state of the input to the network. To compute statistical properties, we use a version of Wick's theorem involving fictitious vacuum fields (a Fock-space-based representation of the fields); while this aids computation and gives a rigorous formulation, the various representations need not be unitarily equivalent. In particular, a naive application of the connection rules would lead to the wrong answer. We establish the correct interconnection rules and show that, while the quantum stochastic differential equations of motion explicitly display the covariances (thermal and squeezing parameters) of the Gaussian input fields, the Wick-Stratonovich form that we introduce leads to a way of writing these equations that does not depend on these covariances and so corresponds to the universal equations written in terms of formal quantum input processes. We show that a wholly consistent theory of quantum open systems in series can be developed in this way and, as required physically, is universal and in particular representation-free.

  5. Autonomous Environment-Monitoring Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2004-01-01

    Autonomous environment-monitoring networks (AEMNs) are artificial neural networks that are specialized for recognizing familiarity and, conversely, novelty. Like a biological neural network, an AEMN receives a constant stream of inputs. For purposes of computational implementation, the inputs are vector representations of the information of interest. As long as the most recent input vector is similar to the previous input vectors, no action is taken. Action is taken only when a novel vector is encountered. Whether a given input vector is regarded as novel depends on the previous vectors; hence, the same input vector could be regarded as familiar or novel, depending on the context of previous input vectors. AEMNs have been proposed as means to enable exploratory robots on remote planets to recognize novel features that could merit closer scientific attention. AEMNs could also be useful for processing data from medical instrumentation for automated monitoring or diagnosis. The primary substructure of an AEMN is called a spindle. In its simplest form, a spindle consists of a central vector (C), a scalar (r), and algorithms for changing C and r. The vector C is constructed from all the vectors in a given continuous stream of inputs, such that it is minimally distant from those vectors. The scalar r is the distance between C and the most remote vector in the same set. The construction of a spindle involves four vital parameters: setup size, spindle-population size, and the radii of two novelty boundaries. The setup size is the number of vectors that are taken into account before computing C. The spindle-population size is the total number of input vectors used in constructing the spindle counting both those that arrive before and those that arrive after the computation of C. The novelty-boundary radii are distances from C that partition the neighborhood around C into three concentric regions (see Figure 1). During construction of the spindle, the changing spindle radius is denoted by h. It is the final value of h, reached before beginning construction on the next spindle, that is denoted by r. During construction of a spindle, if a new vector falls between C and the inner boundary, the vector is regarded as completely familiar and no action is taken. If the new vector falls into the region between the inner and outer boundaries, it is considered unusual enough to warrant the adjustment of C and r by use of the aforementioned algorithms, but not unusual enough to be considered novel. If a vector falls outside the outer boundary, it is considered novel, in which case one of several appropriate responses could be initiation of construction of a new spindle.
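
    To make the spindle mechanism above concrete, here is a hedged sketch of a single spindle with two novelty boundaries; the distance metric, the running-mean update of C, and the boundary fractions are illustrative assumptions rather than the reported algorithm.

      import numpy as np

      class Spindle:
          """Illustrative spindle: central vector C, radius r, inner/outer novelty boundaries."""
          def __init__(self, setup_vectors, inner_frac=1.0, outer_frac=1.5):
              self.C = np.mean(setup_vectors, axis=0)          # central vector from the setup set
              self.n = len(setup_vectors)
              self.r = max(np.linalg.norm(v - self.C) for v in setup_vectors)
              self.inner_frac, self.outer_frac = inner_frac, outer_frac

          def observe(self, v):
              """Return 'familiar', 'unusual' (C and r adjusted), or 'novel'."""
              d = np.linalg.norm(v - self.C)
              if d <= self.inner_frac * self.r:
                  return "familiar"
              if d <= self.outer_frac * self.r:
                  # unusual but not novel: pull C toward v and grow the radius
                  self.n += 1
                  self.C += (v - self.C) / self.n
                  self.r = max(self.r, np.linalg.norm(v - self.C))
                  return "unusual"
              return "novel"   # caller may start constructing a new spindle here

      rng = np.random.default_rng(1)
      s = Spindle(rng.normal(size=(20, 8)))
      print(s.observe(rng.normal(size=8)), s.observe(10 + rng.normal(size=8)))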

  6. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  7. Investigation of Flow Structures Downstream of SAPIEN 3, CoreValve, and PERIMOUNT Magna Using Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Barakat, Mohammed; Lengsfeld, Corinne; Dvir, Danny; Azadani, Ali

    2017-11-01

    Transcatheter aortic valves provide superior systolic hemodynamic performance in terms of valvular pressure gradient and effective orifice area compared with equivalent-size surgical bioprostheses. However, an in-depth investigation of the flow field structures is of interest to examine the flow field characteristics and provide the experimental evidence necessary for validation of computational models. The goal of this study was to compare the flow field characteristics of the three most commonly used transcatheter and surgical valves using phase-locked particle image velocimetry (PIV). A 26 mm SAPIEN 3, a 26 mm CoreValve, and a 25 mm PERIMOUNT Magna were examined in a pulse duplicator with input parameters matching ISO-5840. A 2D PIV system was used to obtain the velocity fields. Flow velocity and shear stress were obtained during the entire cardiac cycle. In vitro testing showed that the mean gradient was lowest for SAPIEN 3, followed by CoreValve and PERIMOUNT Magna. In all the valves, the peak jet velocity and maximum viscous shear stress were 2 m/s and 2 MPa, respectively. In conclusion, PIV was used to investigate the flow field downstream of the three bioprostheses. Viscous shear stress was low, and consequently shear-induced thrombotic trauma or shear-induced damage to red blood cells is unlikely.

  8. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  9. Beyond Poverty: Engaging with Input in Generative SLA

    ERIC Educational Resources Information Center

    Rankin, Tom; Unsworth, Sharon

    2016-01-01

    A generative approach to language acquisition is no different from any other in assuming that target language input is crucial for language acquisition. This discussion note addresses the place of input in generative second language acquisition (SLA) research and the perception in the wider field of SLA research that generative SLA…

  10. Field verification of KDOT's Superpave mixture properties to be used as inputs in the NCHRP mechanistic-empirical pavement design guide.

    DOT National Transportation Integrated Search

    2009-01-01

    In the Mechanistic-Empirical Pavement Design Guide (M-EPDG), prediction of flexible pavement response and performance needs an input of dynamic modulus of hot-mix asphalt (HMA) at all three levels of hierarchical inputs. This study was intended to ...

  11. Experimental Optoelectronic Associative Memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    An optoelectronic associative memory responds to an input image by displaying one of M remembered images. Which image to display is determined by optoelectronic analog computation of the resemblance between the input image and each remembered image. The memory does not rely on precomputation and storage of an outer-product synapse matrix, so the size of the memory needed to store and process images is reduced.
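
    A software analogue of the resemblance computation described above (the record itself describes an optoelectronic implementation) could look like the following hedged sketch, where recall simply returns the stored image most correlated with the input; the normalisation and the toy data are assumptions for the example.

      import numpy as np

      def recall(input_image, memories):
          """Return the remembered image whose normalized correlation with the input is highest."""
          x = input_image.ravel().astype(float)
          x /= np.linalg.norm(x) or 1.0
          scores = []
          for m in memories:                       # resemblance of the input to each of the M memories
              v = m.ravel().astype(float)
              scores.append(np.dot(x, v / (np.linalg.norm(v) or 1.0)))
          return memories[int(np.argmax(scores))]

      memories = [np.random.rand(16, 16) for _ in range(5)]
      noisy = memories[2] + 0.1 * np.random.rand(16, 16)
      assert recall(noisy, memories) is memories[2]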

  12. Study of Nonclassical Fields in Phase-Sensitive Reservoirs

    NASA Technical Reports Server (NTRS)

    Kim, Myung Shik; Imoto, Nobuyuki

    1996-01-01

    We show that the reservoir influence can be modeled by an infinite array of beam splitters. The superposition of the input fields in the beam splitter is discussed with the convolution laws for their quasiprobabilities. We derive the Fokker-Planck equation for the cavity field coupled with a phase-sensitive reservoir using the convolution law. We also analyze the amplification in the phase-sensitive reservoir with use of the modified beam splitter model. We show the similarities and differences between the dissipation and amplification models. We show that a super-Poissonian input field cannot become sub-Poissonian by the phase-sensitive amplification.

  13. Mode-selective mapping and control of vectorial nonlinear-optical processes in multimode photonic-crystal fibers.

    PubMed

    Hu, Ming-Lie; Wang, Ching-Yue; Song, You-Jian; Li, Yan-Feng; Chai, Lu; Serebryannikov, Evgenii; Zheltikov, Aleksei

    2006-02-06

    We demonstrate an experimental technique that allows a mapping of vectorial nonlinear-optical processes in multimode photonic-crystal fibers (PCFs). Spatial and polarization modes of PCFs are selectively excited in this technique by varying the tilt angle of the input beam and rotating the polarization of the input field. Intensity spectra of the PCF output plotted as a function of the input field power and polarization then yield mode-resolved maps of nonlinear-optical interactions in multimode PCFs, facilitating the analysis and control of nonlinear-optical transformations of ultrashort laser pulses in such fibers.

  14. Detection of briefly flashed sine-gratings in dark-adapted vision.

    PubMed

    Hofmann, M I; Barnes, C S; Hallett, P E

    1990-01-01

    Scotopic contrast sensitivity was measured near 20 deg retinal eccentricity for briefly flashed (10 or 20 msec) sine-wave gratings presented in darkness to dark-adapted subjects. For very low spatial frequencies (0.2-0.5 c/deg), curves of contrast sensitivity vs luminous energy show evidence of a low rod plateau and a high scotopic region, with an intervening transition at around -2 to -2.5 log scot td sec. Similar measurements made using long flashed or flickering gratings do not show a plateau. The results suggest that vision in the low rod region is impaired for brief flashes. For the briefly flashed stimuli, curves of contrast sensitivity versus spatial frequency in the low region were best fit by simple Gaussian functions with a variable centre size (σc = 0.5 to 0.25 deg), size decreasing with increasing flash energy. Difference-of-Gaussian functions with constant centre size (σc = 0.25 deg) provided the best fit in the high region. Overt input from the cones and grating-area artefacts are excluded by appropriate tests. Calculation of photon flux into the receptive field centres suggests that signal compression in Pα ganglion cells contributes to the low rod plateau.
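
    For reference, one common way to write the Gaussian and difference-of-Gaussians fits mentioned above (the exact parameterisation used by the authors may differ and is assumed here) is, in LaTeX:

      \[ \mathrm{CS}(f) \;=\; A_c\, e^{-(\pi \sigma_c f)^2} \qquad \text{(single Gaussian, low region)} \]
      \[ \mathrm{CS}(f) \;=\; A_c\, e^{-(\pi \sigma_c f)^2} \;-\; A_s\, e^{-(\pi \sigma_s f)^2} \qquad \text{(difference of Gaussians, high region)} \]

    where f is spatial frequency, σc and σs are the centre and surround space constants, and Ac, As are amplitude parameters fitted to the data.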

  15. Test Input Generation for Red-Black Trees using Abstraction

    NASA Technical Reports Server (NTRS)

    Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek

    2005-01-01

    We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
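
    The following hedged sketch (in Python rather than the Java PathFinder setting described above) illustrates the core idea of exhaustive test-sequence generation with state matching: sequences of interface calls are explored breadth-first, and a sequence is pruned when the (possibly abstracted) state it produces has been seen before. The toy container and the size-only abstraction are assumptions made for the example.

      from collections import deque
      from itertools import product

      # Toy data structure under test: a sorted collection with an insert/delete interface.
      def apply_call(state, call):
          name, arg = call
          s = list(state)
          if name == "insert" and arg not in s:
              s.append(arg)
              s.sort()
          elif name == "delete" and arg in s:
              s.remove(arg)
          return tuple(s)

      def abstraction(state):
          """Lossy abstraction mapping: keep only the size (contents discarded)."""
          return len(state)

      def generate_tests(max_len=3, values=(1, 2, 3), abstract=False):
          calls = list(product(("insert", "delete"), values))
          seen, tests = set(), []
          queue = deque([((), ())])                 # (sequence of calls, concrete state)
          while queue:
              seq, state = queue.popleft()
              key = abstraction(state) if abstract else state
              if key in seen:
                  continue                          # state matching: prune redundant sequences
              seen.add(key)
              tests.append(seq)
              if len(seq) < max_len:
                  for c in calls:
                      queue.append((seq + (c,), apply_call(state, c)))
          return tests

      # Exhaustive (concrete-state) matching vs. lossy abstraction-based matching.
      print(len(generate_tests(abstract=False)), len(generate_tests(abstract=True)))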

  16. Minimalist Approach to Complexity: Templating the Assembly of DNA Tile Structures with Sequentially Grown Input Strands.

    PubMed

    Lau, Kai Lin; Sleiman, Hanadi F

    2016-07-26

    Given its highly predictable self-assembly properties, DNA has proven to be an excellent template toward the design of functional materials. Prominent examples include the remarkable complexity provided by DNA origami and single-stranded tile (SST) assemblies, which require hundreds of unique component strands. However, in many cases, the majority of the DNA assembly is purely structural, and only a small "working area" needs to be aperiodic. On the other hand, extended lattices formed by DNA tile motifs require only a few strands; but they suffer from lack of size control and limited periodic patterning. To overcome these limitations, we adopt a templation strategy, where an input strand of DNA dictates the size and patterning of resultant DNA tile structures. To prepare these templating input strands, a sequential growth technique developed in our lab is used, whereby extended DNA strands of defined sequence and length may be generated simply by controlling their order of addition. With these, we demonstrate the periodic patterning of size-controlled double-crossover (DX) and triple-crossover (TX) tile structures, as well as intentionally designed aperiodicity of a DX tile structure. As such, we are able to prepare size-controlled DNA structures featuring aperiodicity only where necessary with exceptional economy and efficiency.

  17. Robotics control using isolated word recognition of voice input

    NASA Technical Reports Server (NTRS)

    Weiner, J. M.

    1977-01-01

    A speech input/output system is presented that can be used to communicate with a task-oriented system. Human speech commands and synthesized voice output extend conventional information-exchange capabilities between man and machine by utilizing audio input and output channels. The speech input facility comprises a hardware feature extractor and a microprocessor-implemented isolated word or phrase recognition system. The recognizer offers a medium-sized (100-command), syntactically constrained vocabulary and exhibits close to real-time performance. The major portion of the recognition processing required is accomplished through software, minimizing the complexity of the hardware feature extractor.

  18. How the Size of Our Social Network Influences Our Semantic Skills

    ERIC Educational Resources Information Center

    Lev-Ari, Shiri

    2016-01-01

    People differ in the size of their social network, and thus in the properties of the linguistic input they receive. This article examines whether differences in social network size influence individuals' linguistic skills in their native language, focusing on global comprehension of evaluative language. Study 1 exploits the natural variation in…

  19. Kinect the dots: 3D control of optical tweezers

    NASA Astrophysics Data System (ADS)

    Shaw, Lucy; Preece, Daryl; Rubinsztein-Dunlop, Halina

    2013-07-01

    Holographically generated optical traps confine micron- and sub-micron sized particles close to the center of focused light beams. They also provide a way of trapping multiple particles and moving them in three dimensions. However, in many systems the user interface is not always advantageous or intuitive especially for collaborative work and when depth information is required. We discuss and evaluate a set of multi-beam optical tweezers that utilize off the shelf gaming technology to facilitate user interaction. We use the Microsoft Kinect sensor bar as a way of getting the user input required to generate arbitrary optical force fields and control optically trapped particles. We demonstrate that the system can also be used for dynamic light control.

  20. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    NASA Astrophysics Data System (ADS)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham Circle Algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image size of 320 × 240 pixels in real time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of the modern iris unwrapping technique used today.
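
    As a hedged illustration of the circle traversal named above (a software sketch, not the paper's FPGA design), the midpoint/Bresenham circle algorithm below generates the integer pixel coordinates of a circle of radius r about the pupil centre; sampling such circles at increasing radii is one way to unwrap the iris into a rectangular strip. The function and variable names are assumptions.

      def bresenham_circle(cx, cy, r):
          """Integer pixel coordinates on a circle of radius r centred at (cx, cy)."""
          pts = set()
          x, y, d = 0, r, 3 - 2 * r
          while x <= y:
              # reflect the first octant into all eight octants
              for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                             (x, -y), (y, -x), (-x, -y), (-y, -x)):
                  pts.add((cx + dx, cy + dy))
              if d < 0:
                  d += 4 * x + 6
              else:
                  d += 4 * (x - y) + 10
                  y -= 1
              x += 1
          return sorted(pts)

      def unwrap_iris(image, cx, cy, r_pupil, r_iris):
          """Sample concentric Bresenham circles to build a (radius x angle) strip (illustrative only)."""
          rows = []
          for r in range(r_pupil, r_iris + 1):
              ring = bresenham_circle(cx, cy, r)
              rows.append([image[y][x] for (x, y) in ring
                           if 0 <= y < len(image) and 0 <= x < len(image[0])])
          return rows

      img = [[(x + y) % 256 for x in range(320)] for y in range(240)]   # synthetic 320 x 240 image
      strip = unwrap_iris(img, 160, 120, 30, 60)
      print(len(strip), len(strip[0]))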

  1. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences in image quality processed with the 5 and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality processed with the 5 and 7-pixel masks at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the Jong-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
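
    The local-statistics filtering referred to above can be sketched as a Lee-type filter in which each pixel is pulled toward the local mean according to the ratio of local signal variance to noise variance inside the mask. The 7×7 mask mirrors the optimum reported in the record, while the noise-variance estimate, the synthetic test image, and the use of SciPy are assumptions for the example.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def lee_filter(image, mask_size=7, noise_var=None):
          """Lee-type local-statistics filter: out = mean + k * (image - mean)."""
          img = image.astype(float)
          local_mean = uniform_filter(img, size=mask_size)
          local_sq_mean = uniform_filter(img ** 2, size=mask_size)
          local_var = np.clip(local_sq_mean - local_mean ** 2, 0, None)
          if noise_var is None:
              noise_var = np.mean(local_var)        # crude global noise estimate (assumption)
          k = local_var / (local_var + noise_var + 1e-12)
          return local_mean + k * (img - local_mean)

      # Poisson-like counts standing in for a low-count scintigraphy image (synthetic example).
      rng = np.random.default_rng(0)
      noisy = rng.poisson(5.0, size=(128, 128)).astype(float)
      smoothed = lee_filter(noisy, mask_size=7)
      print(noisy.var(), smoothed.var())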

  2. [Effects of reduced nitrogen application and soybean intercropping on nitrogen balance of sugarcane field].

    PubMed

    Liu, Yu; Zhang, Ying; Yang, Wen-ting; Li, Zhi-xian; Guan, Ao-mei

    2015-03-01

    A four-year (2010-2013) field experiment was carried out to explore the effects of three planting patterns (sugarcane monoculture, soybean monoculture, and sugarcane-soybean 1:2 intercropping) with two nitrogen input levels (300 and 525 kg·hm-2) on soybean nitrogen fixation, sugarcane and soybean nitrogen accumulation, and ammonia volatilization and nitrogen leaching in the sugarcane field. The results showed that the soybean nitrogen fixation efficiency (NFE) under sugarcane-soybean intercropping was lower than that under soybean monoculture. There was no significant difference in NFE between the two nitrogen application rates. The nitrogen application rate and intercropping did not remarkably affect the nitrogen accumulation of sugarcane and soybean. The ammonia volatilization of the reduced nitrogen input treatment was significantly lower than that of the conventional nitrogen input treatment. Furthermore, there was no significant difference in nitrogen leaching at different nitrogen input levels or among different planting patterns. The sugarcane field nitrogen balance analysis indicated that the nitrogen application rate dominated the nitrogen budget of the sugarcane field. During the four-year experiment, all treatments left a nitrogen surplus (from 73.10 to 400.03 kg·hm-2), except for a nitrogen deficit of 66.22 kg·hm-2 in 2011 in the treatment of sugarcane monoculture with the reduced nitrogen application. An excessive nitrogen surplus might increase the risk of nitrogen pollution in the field. In conclusion, sugarcane-soybean intercropping with reduced nitrogen application is feasible in practice in consideration of enriching soil fertility, reducing nitrogen pollution and saving production cost in sugarcane fields.

  3. Monolithically integrated bacteriorhodopsin-GaAs/GaAlAs phototransceiver.

    PubMed

    Shin, Jonghyun; Bhattacharya, Pallab; Xu, Jian; Váró, György

    2004-10-01

    A monolithically integrated bacteriorhodopsin-semiconductor phototransceiver is demonstrated for the first time to the authors' knowledge. In this novel biophotonic optical interconnect, the input photoexcitation is detected by bacteriorhodopsin (bR) that has been selectively deposited onto the gate of a GaAs-based field-effect transistor. The photovoltage developed across the bR is converted by the transistor into an amplified photocurrent, which drives an integrated light-emitting diode with a Ga0.37Al0.63As active region. Advantage is taken of the high input impedance of the field-effect transistor, which matches the high internal resistance of bR. The input and output wavelengths are 594 and 655 nm, respectively. The transient response of the optoelectronic circuit to modulated input light has also been studied.

  4. Numerical Model of Channel and Aquatic Habitat Response to Sediment Pulses in Mountain Rivers of Central Idaho

    NASA Astrophysics Data System (ADS)

    Lewicki, M.; Buffington, J. M.; Thurow, R. F.; Isaak, D. J.

    2006-12-01

    Mountain rivers in central Idaho receive pulsed sediment inputs from a variety of mass wasting processes (side-slope landslides, rockfalls, and tributary debris flows). Tributary debris flows and hyperconcentrated flows are particularly common due to winter "rain-on-snow" events and summer thunderstorms, the effects of which are amplified by frequent wildfire and resultant changes in vegetation, soil characteristics, and basin hydrology. Tributary confluences in the study area are commonly characterized by debris fans built by these repeated sediment pulses, providing long-term controls on channel slope, hydraulics and sediment transport capacity in the mainstem channel network. These long-term impacts are magnified during debris-flow events, which deliver additional sediment and wood debris to the fan and may block the mainstem river. These changes in physical conditions also influence local and downstream habitat for aquatic species, and can impact local human infrastructure (roads, bridges). Here, we conduct numerical simulations using a modified version of Cui's [2005] network routing model to examine bedload transport and debris-fan evolution in medium- sized watersheds (65-570 km2) of south-central Idaho. We test and calibrate the model using data from a series of postfire debris-flow events that occurred from 2003-4. We investigate model sensitivity to different controlling factors (location of the pulse within the stream network, volume of the pulse, and size distribution of the input material). We predict that on decadal time scales, sediment pulses cause a local coarsening of the channel bed in the vicinity of the sediment input, and a wave of downstream fining over several kilometers of the river (as long as the pulse material is not coarser than the stream bed itself). The grain-size distribution of the pulse influences its rate of erosion, the rate and magnitude of downstream fining, and the time required for system recovery. The effects of textural fining on spawning habitat depend on the size of sediment in the wave relative to that of the downstream channel; fining can improve spawning habitat availability in channels that are otherwise too coarse, or degrade habitat availability in finer-grained channels. Despite the perceived negative effects of sediment pulses, they can be important sources of gravel and wood debris, creating downstream spawning sites and productive wood-forced habitats. Field observations illustrate that opportunistic salmonids will spawn along the margins of recently deposited debris fans, emphasizing the biological value of such disturbances and the plasticity of salmonids to natural disturbances.

  5. Deforestation and stream warming affect body size of Amazonian fishes.

    PubMed

    Ilha, Paulo; Schiesari, Luis; Yanagawa, Fernando I; Jankowski, KathiJo; Navas, Carlos A

    2018-01-01

    Declining body size has been suggested to be a universal response of organisms to rising temperatures, manifesting at all levels of organization and in a broad range of taxa. However, no study to date evaluated whether deforestation-driven warming could trigger a similar response. We studied changes in fish body size, from individuals to assemblages, in streams in Southeastern Amazonia. We first conducted sampling surveys to validate the assumption that deforestation promoted stream warming, and to test the hypothesis that warmer deforested streams had reduced fish body sizes relative to cooler forest streams. As predicted, deforested streams were up to 6 °C warmer and had fish 36% smaller than forest streams on average. This body size reduction could be largely explained by the responses of the four most common species, which were 43-55% smaller in deforested streams. We then conducted a laboratory experiment to test the hypothesis that stream warming as measured in the field was sufficient to cause a growth reduction in the dominant fish species in the region. Fish reared at forest stream temperatures gained mass, whereas those reared at deforested stream temperatures lost mass. Our results suggest that deforestation-driven stream warming is likely to be a relevant factor promoting observed body size reductions, although other changes in stream conditions, like reductions in organic matter inputs, can also be important. A broad scale reduction in fish body size due to warming may be occurring in streams throughout the Amazonian Arc of Deforestation, with potential implications for the conservation of Amazonian fish biodiversity and food supply for people around the Basin.

  6. Deforestation and stream warming affect body size of Amazonian fishes

    PubMed Central

    Yanagawa, Fernando I.; Jankowski, KathiJo; Navas, Carlos A.

    2018-01-01

    Declining body size has been suggested to be a universal response of organisms to rising temperatures, manifesting at all levels of organization and in a broad range of taxa. However, no study to date evaluated whether deforestation-driven warming could trigger a similar response. We studied changes in fish body size, from individuals to assemblages, in streams in Southeastern Amazonia. We first conducted sampling surveys to validate the assumption that deforestation promoted stream warming, and to test the hypothesis that warmer deforested streams had reduced fish body sizes relative to cooler forest streams. As predicted, deforested streams were up to 6 °C warmer and had fish 36% smaller than forest streams on average. This body size reduction could be largely explained by the responses of the four most common species, which were 43–55% smaller in deforested streams. We then conducted a laboratory experiment to test the hypothesis that stream warming as measured in the field was sufficient to cause a growth reduction in the dominant fish species in the region. Fish reared at forest stream temperatures gained mass, whereas those reared at deforested stream temperatures lost mass. Our results suggest that deforestation-driven stream warming is likely to be a relevant factor promoting observed body size reductions, although other changes in stream conditions, like reductions in organic matter inputs, can also be important. A broad scale reduction in fish body size due to warming may be occurring in streams throughout the Amazonian Arc of Deforestation, with potential implications for the conservation of Amazonian fish biodiversity and food supply for people around the Basin. PMID:29718960

  7. VASCOMP 2. The V/STOL aircraft sizing and performance computer program. Volume 6: User's manual, revision 3

    NASA Technical Reports Server (NTRS)

    Schoen, A. H.; Rosenstein, H.; Stanzione, K.; Wisniewski, J. S.

    1980-01-01

    This report describes the use of the V/STOL Aircraft Sizing and Performance Computer Program (VASCOMP II). The program is useful in performing aircraft parametric studies in a quick and cost efficient manner. Problem formulation and data development were performed by the Boeing Vertol Company and reflects the present preliminary design technology. The computer program, written in FORTRAN IV, has a broad range of input parameters, to enable investigation of a wide variety of aircraft. User oriented features of the program include minimized input requirements, diagnostic capabilities, and various options for program flexibility.

  8. Reorganization in processing of spectral and temporal input in the rat posterior auditory field induced by environmental enrichment

    PubMed Central

    Jakkamsetti, Vikram; Chang, Kevin Q.

    2012-01-01

    Environmental enrichment induces powerful changes in the adult cerebral cortex. Studies in primary sensory cortex have observed that environmental enrichment modulates neuronal response strength, selectivity, speed of response, and synchronization to rapid sensory input. Other reports suggest that nonprimary sensory fields are more plastic than primary sensory cortex. The consequences of environmental enrichment on information processing in nonprimary sensory cortex have yet to be studied. Here we examine physiological effects of enrichment in the posterior auditory field (PAF), a field distinguished from primary auditory cortex (A1) by wider receptive fields, slower response times, and a greater preference for slowly modulated sounds. Environmental enrichment induced a significant increase in spectral and temporal selectivity in PAF. PAF neurons exhibited narrower receptive fields and responded significantly faster and for a briefer period to sounds after enrichment. Enrichment increased time-locking to rapidly successive sensory input in PAF neurons. Compared with previous enrichment studies in A1, we observe a greater magnitude of reorganization in PAF after environmental enrichment. Along with other reports observing greater reorganization in nonprimary sensory cortex, our results in PAF suggest that nonprimary fields might have a greater capacity for reorganization compared with primary fields. PMID:22131375

  9. Effect of high-pressure homogenization preparation on mean globule size and large-diameter tail of oil-in-water injectable emulsions.

    PubMed

    Peng, Jie; Dong, Wu-Jun; Li, Ling; Xu, Jia-Ming; Jin, Du-Jia; Xia, Xue-Jun; Liu, Yu-Ling

    2015-12-01

    The effects of different high-pressure homogenization energy input parameters on the mean droplet size (MDS) and on droplets larger than 5 μm in lipid injectable emulsions were evaluated. All emulsions were prepared at different water bath temperatures or at different rotation speeds and rotor-stator system times, and using different homogenization pressures and numbers of high-pressure system recirculations. The MDS and polydispersity index (PI) value of the emulsions were determined using the dynamic light scattering (DLS) method, and large-diameter tail assessments were performed using the light-obscuration/single particle optical sensing (LO/SPOS) method. Using 1000 bar homogenization pressure and seven recirculations, the energy input parameters related to the rotor-stator system did not affect the final particle size results. When the rotor-stator system energy input parameters are fixed, homogenization pressure and recirculation affect the mean particle size and the large-diameter droplets. Particle size decreases with increasing homogenization pressure from 400 bar to 1300 bar when the number of homogenization recirculations is fixed; when the homogenization pressure is fixed at 1000 bar, both the MDS and the percentage of fat droplets exceeding 5 μm (PFAT5) decrease with increasing homogenization recirculations. The MDS dropped to 173 nm after five cycles and remained at this level, while the volume-weighted PFAT5 dropped to 0.038% after three cycles, so the "plateau" of MDS appears later than that of PFAT5, and the optimal particle size is produced when both remain at their plateaus. Excess homogenization recirculation, such as nine cycles at 1000 bar, may lead to a PFAT5 increase to 0.060% rather than a decrease; therefore, the high-pressure homogenization procedure is the key factor affecting the particle size distribution of emulsions. Varying storage conditions (4-25°C) also influenced particle size, especially PFAT5. Copyright © 2015. Published by Elsevier B.V.

  10. Ultrathin thermoacoustic nanobridge loudspeakers from ALD on polyimide

    NASA Astrophysics Data System (ADS)

    Brown, J. J.; Moore, N. C.; Supekar, O. D.; Gertsch, J. C.; Bright, V. M.

    2016-11-01

    The recent development of low-temperature (<200 °C) atomic layer deposition (ALD) for fabrication of freestanding nanostructures has enabled consideration of active device design based on engineered ultrathin films. This paper explores audible sound production from thermoacoustic loudspeakers fabricated from suspended tungsten nanobridges formed by ALD. Additionally, this paper develops an approach to lumped-element modeling for design of thermoacoustic nanodevices and relates the near-field plane wave model of individual transducer beams to the far-field spherical wave sound pressure that can be measured with standard experimental techniques. Arrays of suspended nanobridges with 25.8 nm thickness and sizes as small as 17 μm × 2 μm have been fabricated and demonstrated to produce audible sound using the thermoacoustic effect. The nanobridges were fabricated by ALD of 6.5 nm Al2O3 and 19.3 nm tungsten on sacrificial polyimide, with ALD performed at 130 °C and patterned by standard photolithography. The maximum observed loudspeaker sound pressure level (SPL) is 104 dB, measured at 20 kHz, 9.71 W input power, and 1 cm measurement distance, providing a loudspeaker sensitivity value of ∼64.6 dB SPL/1 mW. Sound production efficiency was measured to vary in proportion to the cube of the frequency (f^3) and was directly proportional to input power. The devices in this paper demonstrate industrially feasible nanofabrication of thermoacoustic transducers and a sound production mechanism pertinent to submicron-scale device engineering.

  11. Detrital Controls on Dissolved Organic Matter in Soils: A Field Experiment

    NASA Astrophysics Data System (ADS)

    Lajtha, K.; Crow, S.; Yano, Y.; Kaushal, S.; Sulzman, E.; Sollins, P.

    2004-12-01

    We established a long-term field study in an old growth coniferous forest at the H.J. Andrews Experimental Forest, OR, to address how detrital quality and quantity control soil organic matter accumulation and stabilization. The Detritus Input and Removal Treatments (DIRT) plots consist of treatments that double leaf litter, double woody debris inputs, exclude litter inputs, or remove root inputs via trenching. We measured changes in soil solution chemistry with depth, and conducted long-term incubations of bulk soils and soil density fractions from different treatments in order to elucidate effects of detrital inputs on the relative amounts and lability of different soil C pools. In the field, the effect of adding woody debris was to increase dissolved organic carbon (DOC) concentrations in O-horizon leachate and at 30 cm, but not at 100 cm, compared to control plots, suggesting increased rates of DOC retention with added woody debris. DOC concentrations decreased through the soil profile in all plots to a greater degree than did dissolved organic nitrogen (DON), most likely due to preferential sorption of high C:N hydrophobic dissolved organic matter (DOM) in upper horizons; %hydrophobic DOM decreased significantly with depth, and hydrophilic DOM had a much lower and narrower C:N ratio. Although laboratory extracts of different litter types showed differences in DOM chemistry, percent hydrophobic DOM did not differ among detrital treatments in the field, suggesting microbial equalization of DOM leachate in the field. In long-term laboratory incubations, light fraction material did not have higher rates of respiration than heavy fraction or bulk soils, suggesting that physical protection or N availability controls different turnover times of heavy fraction material, rather than differences in chemical lability. Soils from plots that had both above- and below-ground litter inputs excluded had significantly lower DOC loss rates, and a non-significant trend for lower respiration rates . Soils from plots with added wood had similar respiration and DOC loss rates as control soils, suggesting that the additional DOC sorption observed in the field in these soils was stabilized in the soil and not readily lost upon incubation.

  12. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    NASA Astrophysics Data System (ADS)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. The sensitivity of the synthetic fields on the input spectra is assessed by using truncated spectra or model spectra as the input. Analyses show that most of the SGS statistics agree well with those from MTLM fields with DNS spectra as the input. For the mean SGS energy dissipation, some significant deviation is observed. However, it is shown that the deviation can be parametrized by the input energy spectrum, which demonstrates the robustness of the MTLM procedure.
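
    The subgrid-scale quantities computed from the synthetic fields above follow from standard filtering definitions. As a hedged reminder (the MTLM analysis itself uses full 3-D fields and its own filters), the SGS stress of a velocity field under a simple box filter can be computed as in the sketch below, where the filter width and the 2-D toy field are assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def sgs_stress(u, v, width=8):
          """tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j) for a 2-D field and a box filter."""
          bar = lambda f: uniform_filter(f, size=width, mode="wrap")
          tau_xx = bar(u * u) - bar(u) * bar(u)
          tau_xy = bar(u * v) - bar(u) * bar(v)
          tau_yy = bar(v * v) - bar(v) * bar(v)
          return tau_xx, tau_xy, tau_yy

      rng = np.random.default_rng(2)
      u, v = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))
      txx, txy, tyy = sgs_stress(u, v)
      print(txx.mean(), txy.mean(), tyy.mean())   # residual stresses of the toy field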

  13. FASOR - A second generation shell of revolution code

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1978-01-01

    An integrated computer program entitled Field Analysis of Shells of Revolution (FASOR) currently under development for NASA is described. When completed, this code will treat prebuckling, buckling, initial postbuckling and vibrations under axisymmetric static loads as well as linear response and bifurcation under asymmetric static loads. Although these modes of response are treated by existing programs, FASOR extends the class of problems treated to include general anisotropy and transverse shear deformations of stiffened laminated shells. At the same time, a primary goal is to develop a program which is free of the usual problems of modeling, numerical convergence and ill-conditioning, laborious problem setup, limitations on problem size and interpretation of output. The field method is briefly described, the shell differential equations are cast in a suitable form for solution by this method and essential aspects of the input format are presented. Numerical results are given for both unstiffened and stiffened anisotropic cylindrical shells and compared with previously published analytical solutions.

  14. Line-by-line transport calculations for Jupiter entry probes. [of radiative transfer

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.; Cooper, D. M.; Park, C.; Prakash, S. G.

    1979-01-01

    Line-by-line calculations of the radiative transport for a condition near peak heating for entry of the Galileo probe into the Jovian atmosphere are described. The discussion includes a thorough specification of the atomic and molecular input data used in the calculations that could be useful to others working in the field. The results show that the use of spectrally averaged cross sections for diatomic absorbers such as CO and C2 in the boundary layer can lead to an underestimation (by as much as 29%) of the spectral flux at the stagnation point. On the other hand, for the turbulent region near the cone frustum on the probe, the flow tends to be optically thin, and the spectrally averaged results commonly used in coupled radiative transport-flow field calculations are in good agreement with the present line-by-line results. It is recommended that these results be taken into account in sizing the final thickness of the Galileo's heat shield.

  15. Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects

    PubMed Central

    Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.

    2015-01-01

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm/deg at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341

  16. Aerodynamics of high frequency flapping wings

    NASA Astrophysics Data System (ADS)

    Hu, Zheng; Roll, Jesse; Cheng, Bo; Deng, Xinyan

    2010-11-01

    We investigated the aerodynamic performance of high-frequency flapping wings using a 2.5 gram robotic insect mechanism developed in our lab. The mechanism flaps at up to 65 Hz with a pair of man-made wings spanning 10 cm from wingtip to wingtip. The mean aerodynamic lift force was measured by a lever platform, and the flow velocity and vorticity were measured using a stereo DPIV system in the frontal, parasagittal, and horizontal planes. Both the near-field flow (leading edge vortex) and the far-field flow (induced flow) were measured, with instantaneous and phase-averaged results. Systematic experiments were performed on the man-made wings and on cicada and hawk moth wings because of their similar size, frequency, and Reynolds number. For the insect wings, we used both dry and freshly cut wings. The aerodynamic force increases with flapping frequency, and the man-made wing generates more than 4 grams of lift at 35 Hz with a 3 V input. Here we present the experimental results and the major differences in their aerodynamic performances.

  17. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L eliminated completely auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  18. A practical and theoretical definition of very small field size for radiotherapy output factor measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, P. H., E-mail: p.charles@qut.edu.au; Crowe, S. B.; Langton, C. M.

    Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes ≤15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effects. This was found to occur at field sizes ≤12 mm. Source occlusion also caused a large change in OPF for field sizes ≤8 mm. Based on the results of this study, field sizes ≤12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting and also very precise detector alignment is required at field sizes at least ≤12 mm and more conservatively ≤15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
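
    One hedged way to write the kind of factorisation described in this record (the exact factors and notation used by the authors may differ and are assumed here) is, in LaTeX:

      \[ \mathrm{OPF}(c) \;=\; F_{\mathrm{LED}}(c)\; F_{\mathrm{pscat}}(c)\; F_{\mathrm{occ}}(c)\; F_{\mathrm{other}}(c), \]

    where c is the nominal field size and the factors isolate, respectively, lateral electronic disequilibrium, photon scatter in the phantom, source occlusion, and any remaining effects; the field size below which the lateral-disequilibrium factor changes faster than the other factors then marks the theoretical very-small-field threshold.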

  19. A practical and theoretical definition of very small field size for radiotherapy output factor measurements.

    PubMed

    Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Crowe, S B; Kairn, T; Knight, R T; Kenny, J; Langton, C M; Trapp, J V

    2014-04-01

    This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. According to the practical definition established in this project, field sizes ≤ 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤ 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided to the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effects. This was found to occur at field sizes ≤ 12 mm. Source occlusion also caused a large change in OPF for field sizes ≤ 8 mm. Based on the results of this study, field sizes ≤ 12 mm were considered to be theoretically very small for 6 MV beams. Extremely careful experimental methodology including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting and also very precise detector alignment is required at field sizes at least ≤ 12 mm and more conservatively ≤ 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection. © 2014 American Association of Physicists in Medicine.

  20. Directional hearing by linear summation of binaural inputs at the medial superior olive

    PubMed Central

    van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.

    2013-01-01

    SUMMARY Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292

  1. Understanding the Thermodynamics of Biological Order

    ERIC Educational Resources Information Center

    Peterson, Jacob

    2012-01-01

    By growth in size and complexity (i.e., changing from more probable to less probable states), plants and animals appear to defy the second law of thermodynamics. The usual explanation describes the input of nutrient and sunlight energy into open thermodynamic systems. However, energy input alone does not address the ability to organize and create…

  2. The Role of Learner and Input Variables in Learning Inflectional Morphology

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel

    2006-01-01

    To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…

  3. Mercury budget of an upland-peatland watershed

    Treesearch

    D. F. Grigal; Randy K. Kolka; J. A. Fleck; E. A. Nater

    2000-01-01

    Inputs, outputs, and pool sizes of total mercury (Hg) were measured in a forested 10 ha watershed consisting of a 7 ha hardwood-dominated upland surrounding a 3 ha conifer-dominated peatland. Hydrologic inputs via throughfall and stemflow, 13 ± 0.4 µg m-2 yr-1 over the entire watershed, were about double precipitation...

  4. Propagation of economic shocks in input-output networks: A cross-country analysis

    NASA Astrophysics Data System (ADS)

    Contreras, Martha G. Alatriste; Fagiolo, Giorgio

    2014-12-01

    This paper investigates how economic shocks propagate and amplify through the input-output network connecting industrial sectors in developed economies. We study alternative models of diffusion on networks and we calibrate them using input-output data on real-world inter-sectoral dependencies for several European countries before the Great Depression. We show that the impact of economic shocks strongly depends on the nature of the shock and country size. Shocks that impact on final demand without changing production and the technological relationships between sectors have on average a large but very homogeneous impact on the economy. Conversely, when shocks change also the magnitudes of input-output across-sector interdependencies (and possibly sector production), the economy is subject to predominantly large but more heterogeneous avalanche sizes. In this case, we also find that (i) the more a sector is globally central in the country network, the larger its impact; (ii) the largest European countries, such as those constituting the core of the European Union's economy, typically experience the largest avalanches, signaling their intrinsic higher vulnerability to economic shocks.
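
    A minimal sketch of how a final-demand shock propagates through an input-output network, using the standard Leontief inverse rather than the specific diffusion models calibrated in the paper; the three-sector coefficient matrix is invented.

      import numpy as np

      # Hypothetical technical-coefficient matrix A: A[i, j] = input from sector i
      # needed per unit of output of sector j (3 sectors, illustrative values only).
      A = np.array([[0.10, 0.20, 0.05],
                    [0.15, 0.05, 0.25],
                    [0.05, 0.10, 0.10]])

      leontief_inverse = np.linalg.inv(np.eye(3) - A)

      # A final-demand shock hitting sector 1 only.
      demand_shock = np.array([0.0, 1.0, 0.0])

      # Total output change in every sector once the shock has propagated
      # through all direct and indirect input-output linkages.
      output_change = leontief_inverse @ demand_shock
      print("output change per sector:", np.round(output_change, 3))
      print("avalanche size (total impact):", round(output_change.sum(), 3))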

  5. Identification of differences in health impact modelling of salt reduction

    PubMed Central

    Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.

    2017-01-01

    We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how it influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and blood pressure and disease. Modifying the effect sizes in the salt to health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models is crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636

  6. Development of a fast and feasible spectrum modeling technique for flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Woong; Bush, Karl; Mok, Ed

    Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined for the ranges of field sizes to consider the variations of the contributions of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of photon fluence became the optimizing free parameters. A line search method was used for the optimization, and first order derivatives with respect to the optimizing parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method could significantly improve the agreement between the calculated and measured PDDs. Root mean square differences in the optimized PDDs were observed to be 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first order derivatives from the functional form were found to speed up the computation by up to a factor of 20 compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
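
    The second optimization step (adjusting the weights of the photon fluence bins so that calculated depth doses match measurement) can be sketched as a constrained least-squares problem. The per-energy depth-dose kernels below are synthetic exponentials standing in for the collapsed cone calculations, so this is only a structural illustration, not the published method.

      import numpy as np
      from scipy.optimize import nnls

      depth = np.linspace(0, 30, 61)                     # depth in water (cm)
      energies = np.array([0.5, 1.0, 2.0, 4.0, 6.0])     # spectrum bins (MeV)

      # Stand-in mono-energetic depth-dose kernels (one column per energy bin);
      # in the real method these would come from the convolution/superposition engine.
      mu = 0.09 / np.sqrt(energies)                      # crude attenuation trend
      kernels = np.exp(-np.outer(depth, mu))

      true_w = np.array([0.35, 0.30, 0.20, 0.10, 0.05])  # "unknown" spectrum weights
      measured_pdd = kernels @ true_w
      measured_pdd += np.random.default_rng(0).normal(0, 0.002, depth.size)

      # Non-negative least squares recovers the bin weights from the measured PDD.
      w_fit, residual = nnls(kernels, measured_pdd)
      w_fit /= w_fit.sum()
      print("fitted spectrum weights:", np.round(w_fit, 3))
      print("rms PDD difference:", round(residual / np.sqrt(depth.size), 4))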

  7. Long term continuous field survey to assess nutrient emission impact from irrigated paddy field into river catchment

    NASA Astrophysics Data System (ADS)

    Kogure, Kanami; Aichi, Masaatsu; Zessner, Matthias

    2017-04-01

    In order to achieve a good river environment, it is important to understand and control the behavior of nutrients such as nitrogen and phosphorus. Since impacts from urban and industrial activities can be reduced by wastewater treatment, pollution from point sources is relatively well controlled; nutrient emission from agricultural activity is therefore a dominant pollution source for river systems. In many countries in Asia and Africa, rice is widely cultivated and paddy fields cover large areas; in Japan 54% of the arable land is occupied by irrigated paddy fields. While paddy fields can deteriorate river water quality due to fertilization, it has also been suggested that they can purify water. We carried out a field survey in the middle reach of the Tone River Basin with a focus on a paddy field, IM. The objectives of the research are 1) understanding the water and nutrient balance in the paddy field, and 2) data collection for assessing nutrient emission. The field survey was conducted from June 2015 to October 2016, covering two summer flooding seasons. All inputs and outputs of water, N, and P were measured to quantify the water and nutrient balance in the paddy field. By measuring the water quality and flow rate of inflow, outflow, infiltrating water, groundwater, and flooding water, we quantified the water, N, and P cycles in the paddy field, including seasonal trends and changes accompanying rain events and agricultural activities such as fertilization. Concerning the water balance, the infiltration rate was estimated by the following equation: Infiltration = Irrigation water + Precipitation - Evapotranspiration - Outflow. We estimated the mean daily water balance during the flooding season; infiltration was 11.9 mm/day in summer 2015. The daily water reduction depth (WRD) is the sum of evapotranspiration and infiltration; WRD was 21.5 mm/day in IM, which agrees with average values reported in previous research. Regarding the nutrient balance, we estimated an annual N and P balance, where the N and P surpluses are calculated as the difference between inputs and outputs in the paddy field. For 2015 the surplus is negative when only fertilizer input and rice product output are considered; however, when input via irrigation water is also taken into account as a nutrient source, N and P inputs and outputs balance to within 9% and 14%, respectively. The results of this long-term continuous survey suggest that irrigation water is one of the nutrient sources in rice cultivation.
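
    The water-balance bookkeeping quoted above is simple arithmetic; in the sketch below only the resulting infiltration (11.9 mm/day) and WRD (21.5 mm/day) are taken from the abstract, while the individual daily terms are assumed for illustration.

      def infiltration(irrigation_mm, precipitation_mm, evapotranspiration_mm, outflow_mm):
          """Daily infiltration from the paddy water balance (all terms in mm/day)."""
          return irrigation_mm + precipitation_mm - evapotranspiration_mm - outflow_mm

      # Illustrative daily values; only the resulting infiltration and WRD are
      # reported in the abstract, the individual input terms here are assumed.
      irrigation, precipitation, evapotranspiration, outflow = 25.0, 4.0, 9.6, 7.5

      infil = infiltration(irrigation, precipitation, evapotranspiration, outflow)
      wrd = evapotranspiration + infil          # water reduction depth
      print(f"infiltration = {infil:.1f} mm/day, WRD = {wrd:.1f} mm/day")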

  8. Soft and wet actuator developed with responsible high-strength gels

    NASA Astrophysics Data System (ADS)

    Harada, S.; Hidema, R.; Furukawa, H.

    2012-04-01

    Novel high-strength gels, called double network gels (DN gels), show a smart response to an external electric field: a plate-shaped DN gel bends toward the positive electrode when a static (DC) electric field is applied. Based on this previous result, we attempted to develop a novel soft and wet actuator to be used as an automatically bulging button for cellular phones or similar small devices. First, a bending experiment on a hanging plate-shaped DN gel was performed and its electric field response was confirmed. Second, the response of a lying plate-shaped DN gel was examined in order to check the bulging behavior; the edges of three plate-shaped gels arranged radially on a plane surface were lifted 2 mm by applying 8 V DC. This system is a first step toward a gel button. A critical problem, however, is that electrolysis occurs under the electric field: water is expelled from the gels and the gels shrink, which causes separation between the gel and the aluminum foil used as the electrode. A flexible electrode that remains completely attached to the gel is therefore required. As a third step, a push button was made from a shape memory gel (SMG), whose Young's modulus changes dramatically with temperature; this change in modulus is used to switch the button between the input-acceptable and input-not-acceptable states. A prototype push button is proposed, and its user-friendliness was checked by changing the size of the button. The button deforms when pushed and returns to its original shape owing to the shape memory property. We believe the mechanism of this button can be applied to develop new devices, especially for visually impaired persons.

  9. MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1994-01-01

    The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been implemented on a CDC 170 series computer operating under NOS with a central memory requirement of approximately 223K of 60 bit words. This program was developed in 1983.

  10. Predicting cloud-to-ground lightning with neural networks

    NASA Technical Reports Server (NTRS)

    Barnes, Arnold A., Jr.; Frankel, Donald; Draper, James Stark

    1991-01-01

    A neural network is being trained to predict lightning at Cape Canaveral for periods up to two hours in advance. Inputs consist of ground based field mill data, meteorological tower data, lightning location data, and radiosonde data. High values of the field mill data and rapid changes in the field mill data, offset in time, provide the forecasts or desired output values used to train the neural network through backpropagation. Examples of input data are shown and an example of data compression using a hidden layer in the neural network is discussed.

  11. RF rectifiers for EM power harvesting in a Deep Brain Stimulating device.

    PubMed

    Hosain, Md Kamal; Kouzani, Abbas Z; Tye, Susannah; Kaynak, Akif; Berk, Michael

    2015-03-01

    A passive deep brain stimulation (DBS) device can be equipped with a rectenna, consisting of an antenna and a rectifier, to harvest energy from electromagnetic fields for its operation. This paper presents optimization of radio frequency rectifier circuits for wireless energy harvesting in a passive head-mountable DBS device. The aim is to achieve a compact size, high conversion efficiency, and high output voltage rectifier. Four different rectifiers based on the Delon doubler, Greinacher voltage tripler, Delon voltage quadrupler, and 2-stage charge pumped architectures are designed, simulated, fabricated, and evaluated. The design and simulation are conducted using Agilent Genesys at operating frequency of 915 MHz. A dielectric substrate of FR-4 with thickness of 1.6 mm, and surface mount devices (SMD) components are used to fabricate the designed rectifiers. The performance of the fabricated rectifiers is evaluated using a 915 MHz radio frequency (RF) energy source. The maximum measured conversion efficiency of the Delon doubler, Greinacher tripler, Delon quadrupler, and 2-stage charge pumped rectifiers are 78, 75, 73, and 76 % at -5 dBm input power and for load resistances of 5-15 kΩ. The conversion efficiency of the rectifiers decreases significantly with the increase in the input power level. The Delon doubler rectifier provides the highest efficiency at both -5 and 5 dBm input power levels, whereas the Delon quadrupler rectifier gives the lowest efficiency for the same inputs. By considering both efficiency and DC output voltage, the charge pump rectifier outperforms the other three rectifiers. Accordingly, the optimised 2-stage charge pumped rectifier is used together with an antenna to harvest energy in our DBS device.
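
    The reported conversion efficiency is the ratio of DC output power to RF input power; a short sketch of that calculation follows, where the 1.57 V DC output across a 10 kOhm load is an assumed example consistent with the -5 dBm figure, not a measurement from the paper.

      import math

      def dbm_to_watts(p_dbm):
          """Convert an RF power level in dBm to watts."""
          return 1e-3 * 10 ** (p_dbm / 10.0)

      def conversion_efficiency(v_dc, load_ohm, p_in_dbm):
          """DC output power divided by RF input power, as a percentage."""
          p_out = v_dc ** 2 / load_ohm
          return 100.0 * p_out / dbm_to_watts(p_in_dbm)

      # Example: an assumed 1.57 V DC across a 10 kOhm load at -5 dBm input.
      eff = conversion_efficiency(v_dc=1.57, load_ohm=10e3, p_in_dbm=-5.0)
      print(f"conversion efficiency ~= {eff:.0f} %")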

  12. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
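
    A scale-selective filter driven by a single size parameter can be approximated with a difference-of-Gaussians band-pass, a common stand-in for a single wavelet scale; the sketch below is that approximation, not the patented filter itself.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def scale_filter(image, feature_size_px):
          """Keep only structure near a characteristic linear size (in pixels)."""
          sigma = feature_size_px / 2.0
          band = gaussian_filter(image, sigma) - gaussian_filter(image, 2.0 * sigma)
          band[band < 0] = 0.0            # retain only positively correlated regions
          return band

      # Synthetic 2D test image: one blob near the target size plus noise.
      rng = np.random.default_rng(1)
      img = rng.normal(0.0, 0.1, (128, 128))
      yy, xx = np.mgrid[:128, :128]
      img += np.exp(-(((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 6.0 ** 2)))

      out = scale_filter(img, feature_size_px=12)
      print("response at blob center vs. background:",
            round(float(out[64, 64]), 3), round(float(out[5, 5]), 3))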

  13. Distinct findings from the steady-state analysis of a microbial model with time-invariant or seasonal driving forces

    NASA Astrophysics Data System (ADS)

    Wang, G.; Mayes, M. A.

    2017-12-01

    Microbially-explicit soil organic matter (SOM) decomposition models are thought to be more biologically realistic than conventional models. Current testing or evaluation of microbial models majorly uses steady-state analysis with time-invariant forces (i.e., soil temperature, moisture and litter input). The findings from such simplified analyses are assumed to be capable of representing the model responses in field soil conditions with seasonal driving forces. Here we show that the steady-state modeling results with seasonal forces may result in distinct findings from the simulations with time-invariant forcing data. We evaluate the response of soil organic C (SOC) to litter addition (L+) in a subtropical pine forest using the calibrated Microbial-ENzyme Decomposition (MEND) model. We implemented two sets of modeling analyses, with each set including two scenarios, i.e., control (CR) vs. litter-addition (L+). The first set (Set1) uses fixed soil temperature and moisture, and constant litter input under Scenario CR vs. increased constant litter input under Scenario L+. The second set (Set2) employs hourly soil temperature and moisture and monthly litter input under Scenario CR. Under Scenario L+ of Set2, A logistic function with an upper plateau represents the increasing trend of litter input to SOM. We conduct long-term simulations to ensure that the models reach steady-states for Set1 or dynamic equilibrium for Set2. Litter addition of Set2 causes an increase of SOC by 29%. However, the steady-state SOC pool sizes of Set1 would not respond to L+ as long as the chemical composition of litter remained the same. Our results indicate the necessity to implement dynamic model simulations with seasonal forcing data, which could lead to modeling results qualitatively different from the steady-state analysis with time-invariant forcing data.

  14. A molecular-sized optical logic circuit for digital modulation of a fluorescence signal

    NASA Astrophysics Data System (ADS)

    Nishimura, Takahiro; Tsuchida, Karin; Ogura, Yusuke; Tanida, Jun

    2018-03-01

    Fluorescence measurement allows simultaneous detection of multiple molecular species by using spectrally distinct fluorescence probes. However, due to the broad spectra of fluorescence emission, the multiplicity of fluorescence measurement is generally limited. To overcome this limitation, we propose a method to digitally modulate fluorescence output signals with a molecular-sized optical logic circuit by using optical control of fluorescence resonance energy transfer (FRET). The circuit receives a set of optical inputs represented with different light wavelengths, and then it switches high and low fluorescence intensity from a reporting molecule according to the result of the logic operation. By using combinational optical inputs in readout of fluorescence signals, the number of biomolecular species that can be identified is increased. To implement the FRET-based circuits, we designed two types of basic elements, YES and NOT switches. An YES switch produces a high-level output intensity when receiving a designated light wavelength input and a low-level intensity without the light irradiation. A NOT switch operates inversely to the YES switch. In experiments, we investigated the operation of the YES and NOT switches that receive a 532-nm light input and modulate the fluorescence intensity of Alexa Fluor 488. The experimental result demonstrates that the switches can modulate fluorescence signals according to the optical input.
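
    The YES and NOT elements reduce to simple truth tables over the optical inputs; the sketch below only encodes that logic, with the "high"/"low" labels standing in for the measured fluorescence levels.

      def yes_switch(light_532nm_on: bool) -> str:
          """YES element: high fluorescence only when its designated wavelength is present."""
          return "high" if light_532nm_on else "low"

      def not_switch(light_532nm_on: bool) -> str:
          """NOT element: the inverse response of the YES element."""
          return "low" if light_532nm_on else "high"

      for light in (False, True):
          print(f"532 nm input {'on ' if light else 'off'}: "
                f"YES -> {yes_switch(light)}, NOT -> {not_switch(light)}")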

  15. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.

  16. Neural Classifiers for Learning Higher-Order Correlations

    NASA Astrophysics Data System (ADS)

    Güler, Marifi

    1999-01-01

    Studies by various authors suggest that higher-order networks can be more powerful and are biologically more plausible with respect to the more traditional multilayer networks. These architectures make explicit use of nonlinear interactions between input variables in the form of higher-order units or product units. If it is known a priori that the problem to be implemented possesses a given set of invariances like in the translation, rotation, and scale invariant pattern recognition problems, those invariances can be encoded, thus eliminating all higher-order terms which are incompatible with the invariances. In general, however, it is a serious set-back that the complexity of learning increases exponentially with the size of inputs. This paper reviews higher-order networks and introduces an implicit representation in which learning complexity is mainly decided by the number of higher-order terms to be learned and increases only linearly with the input size.
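
    The combinatorial growth of explicit higher-order terms with input size, which the implicit representation is meant to avoid, is easy to make concrete; the sketch below builds the explicit third-order products for a toy input and counts how many such terms larger inputs would require.

      from itertools import combinations
      from math import comb
      import numpy as np

      def third_order_terms(x):
          """All products x_i * x_j * x_k (i < j < k) used by an explicit third-order unit."""
          return np.array([x[i] * x[j] * x[k] for i, j, k in combinations(range(len(x)), 3)])

      x = np.array([1.0, -1.0, 0.5, 2.0, 0.2])
      print("explicit 3rd-order terms for n=5 inputs:", len(third_order_terms(x)))

      # Rapid growth of the number of explicit terms with the input size n.
      for n in (8, 16, 64, 256):
          print(f"n = {n:4d}: {comb(n, 3):>12,d} third-order terms")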

  17. Neurophysiological model of the normal and abnormal human pupil

    NASA Technical Reports Server (NTRS)

    Krenz, W.; Robin, M.; Barez, S.; Stark, L.

    1985-01-01

    Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effect of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select pupil condition (normal or an abnormality), specific site along the neurological pathway (retina, hypothalamus, etc.) drug class input (barbiturate, narcotic, etc.), stimulus/response mode, display mode, stimulus type and input waveform, stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.

  18. Influence of stimulus size on revealing non-cardinal color mechanisms.

    PubMed

    Gunther, Karen L; Downey, Colin O

    2016-10-01

    Multiple studies have shown that performance of subjects on a number of visual tasks is worse for non-cardinal than cardinal colors, especially in the red-green/luminance (RG/LUM) and tritan/luminance (TRIT/LUM) color planes. Inspired by neurophysiological evidence that suppressive surround input to receptive fields is particularly sensitive to luminance, we hypothesized that non-cardinal mechanisms in the RG/LUM and TRIT/LUM planes would be more sensitive to stimulus size than are isoluminant non-cardinal mechanisms. In Experiment 1 we tested 9-10 color-normal subjects in each of the three color planes (RG/TRIT, RG/LUM, and TRIT/LUM) on visual search at four bull's-eye dot sizes (0.5°/1°, 1°/2°, 2°/4°, and 3°/6° center/annulus dot diameter). This study yielded a significant main effect of dot size in each of the three color planes. In Experiment 2 we tested the same hypothesis using noise masking, at three stimulus sizes (3°, 6° and 9° diameter Gabors), again in all three color planes (5 subjects per color plane). This experiment yielded, in the RG/TRIT plane, a significant main effect of stimulus size; in the RG/LUM plane, significant evidence for non-cardinal mechanisms only for the 9° stimulus; but in the TRIT/LUM plane no evidence for non-cardinal mechanisms at any stimulus size. These results suggest that non-cardinal mechanisms, particularly in the RG/LUM color plane, are more sensitive to stimulus size than are non-cardinals in the RG/TRIT plane, supporting our hypothesis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A Method Based on Wavelet Transforms for Source Detection in Photon-counting Detector Images. II. Application to ROSAT PSPC Images

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1997-07-01

    We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector "ribs," strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.

  20. Bedtime Stories in English: Field-Testing Comprehensible Input Materials for Natural Second-Language Acquisition in Japanese Pre-School Children

    ERIC Educational Resources Information Center

    Hamilton, Robert

    2014-01-01

    In this study, the prototype of a new type of bilingual picture book was field-tested with two sets of mother-son subject pairs. This picture book was designed as a possible tool for providing children with comprehensible input during their critical period for second language acquisition. Context is provided by visual cues and both Japanese and…

  1. Parallel momentum input by tangential neutral beam injections in stellarator and heliotron plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, S., E-mail: nishimura.shin@lhd.nifs.ac.jp; Nakamura, Y.; Nishioka, K.

    The configuration dependence of parallel momentum inputs to target plasma particle species by tangentially injected neutral beams is investigated in non-axisymmetric stellarator/heliotron model magnetic fields by assuming the existence of magnetic flux-surfaces. In parallel friction integrals of the full Rosenbluth-MacDonald-Judd collision operator in thermal particles' kinetic equations, numerically obtained eigenfunctions are used for excluding trapped fast ions that cannot contribute to the friction integrals. It is found that the momentum inputs to thermal ions strongly depend on magnetic field strength modulations on the flux-surfaces, while the input to electrons is insensitive to the modulation. In future plasma flow studies requiring flow calculations of all particle species in more general non-symmetric toroidal configurations, the eigenfunction method investigated here will be useful.

  2. Modelling the Cast Component Weight in Hot Chamber Die Casting using Combined Taguchi and Buckingham's π Approach

    NASA Astrophysics Data System (ADS)

    Singh, Rupinder

    2018-02-01

    Hot chamber (HC) die casting is one of the most widely used commercial processes for casting low temperature metals and alloys. The process gives near-net-shape products with high dimensional accuracy. In the actual field environment, however, the best settings of the input parameters are often conflicting, as the shape and size of the casting change, and one has to trade off among various output parameters such as hardness, dimensional accuracy, casting defects, and microstructure. For online inspection of cast component properties (without affecting the production line), weight measurement has been established as a cost-effective method in the field environment, since the difference in weight between sound and unsound castings reflects possible casting defects. In the present work, at the first stage the effect of three input process parameters (namely: pressure at the 2nd phase in HC die casting, metal pouring temperature, and die opening time) was studied to optimize the cast component weight W as the output parameter, in the form of a macro model based on a Taguchi L9 orthogonal array. After this, Buckingham's π approach was applied to the Taguchi-based macro model to develop a micro model. This study highlights the combined Taguchi-Buckingham approach as a case study (for the conversion of a macro model into a micro model) by identifying the optimum levels of the input parameters (based on the Taguchi approach) and developing a mathematical model (based on Buckingham's π approach). The developed mathematical model can be used for predicting W in the HC die casting process with more flexibility. The results of the study yield a second-degree polynomial equation for predicting cast component weight in HC die casting and suggest that pressure at the 2nd stage is one of the most significant factors controlling the casting defects/weight of the casting.
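
    The first stage (ranking factor levels from an L9 experiment) amounts to averaging a signal-to-noise statistic over the runs at each factor level. The sketch below applies a nominal-the-best S/N analysis to invented weight data on the standard L9 array; the factor effects and noise level are assumptions, not the study's measurements.

      import numpy as np

      # Standard Taguchi L9 orthogonal array: 9 runs x 3 factors, levels coded 0/1/2.
      L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
                     [1,0,1],[1,1,2],[1,2,0],
                     [2,0,2],[2,1,0],[2,2,1]])

      # Invented cast-component weights (g) for the 9 runs, 3 repetitions each.
      rng = np.random.default_rng(2)
      base = 120.0 + L9 @ np.array([1.5, 0.6, -0.4])          # assumed run means (g)
      weights = base[:, None] + rng.normal(0, 0.3, (9, 3))    # 3 repetitions per run

      # Nominal-the-best signal-to-noise ratio per run: 10*log10(mean^2 / variance).
      sn = 10 * np.log10(weights.mean(axis=1) ** 2 / weights.var(axis=1, ddof=1))

      factors = ["pressure (2nd phase)", "pouring temperature", "die opening time"]
      for f, name in enumerate(factors):
          means = [sn[L9[:, f] == lvl].mean() for lvl in (0, 1, 2)]
          print(f"{name:22s} mean S/N by level: {np.round(means, 2)}")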

  3. Identification and modification of dominant noise sources in diesel engines

    NASA Astrophysics Data System (ADS)

    Hayward, Michael D.

    Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
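
    The virtual-source decomposition described here is, per frequency band, a singular value decomposition of the cross-spectral matrix of the near-field inputs. The broadband sketch below uses a covariance matrix of synthetic two-source data in place of a frequency-resolved cross-spectral matrix.

      import numpy as np

      rng = np.random.default_rng(3)
      n_samples, n_mics = 4096, 4

      # Two independent "physical" sources mixed onto four near-field microphones.
      sources = rng.normal(size=(n_samples, 2)) * np.array([3.0, 1.0])
      mixing = rng.normal(size=(2, n_mics))
      inputs = sources @ mixing + 0.05 * rng.normal(size=(n_samples, n_mics))

      # Broadband cross-spectral (here simply covariance) matrix of the input channels.
      csm = inputs.T @ inputs / n_samples

      # Singular values = powers of the independent virtual sources.
      u, s, _ = np.linalg.svd(csm)
      print("virtual source powers:", np.round(s, 3))

      # Percentage contribution of each virtual source to each input channel's power.
      contrib = (u ** 2) * s
      contrib = 100 * contrib / contrib.sum(axis=1, keepdims=True)
      print("percent contribution (rows = mics, cols = virtual sources):")
      print(np.round(contrib, 1))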

  4. Toward a better integration of roughness in rockfall simulations - a sensitivity study with the RockyFor3D model

    NASA Astrophysics Data System (ADS)

    Monnet, Jean-Matthieu; Bourrier, Franck; Milenkovic, Milutin

    2017-04-01

    Advances in numerical simulation and analysis of real-size field experiments have supported the development of process-based rockfall simulation models. Availability of high resolution remote sensing data and high-performance computing now make it possible to implement them for operational applications, e.g. risk zoning and protection structure design. One key parameter regarding rock propagation is the surface roughness, sometimes defined as the variation in height perpendicular to the slope (Pfeiffer and Bowen, 1989). Roughness-related input parameters for rockfall models are usually determined by experts on the field. In the RockyFor3D model (Dorren, 2015), three values related to the distribution of obstacles (deposited rocks, stumps, fallen trees,... as seen from the incoming rock) relatively to the average slope are estimated. The use of high resolution digital terrain models (DTMs) questions both the scale usually adopted by experts for roughness assessment and the relevance of modeling hypotheses regarding the rock / ground interaction. Indeed, experts interpret the surrounding terrain as obstacles or ground depending on the overall visibility and on the nature of objects. Digital models represent the terrain with a certain amount of smoothing, depending on the sensor capacities. Besides, the rock rebound on the ground is modeled by changes in the velocities of the gravity center of the block due to impact. Thus, the use of a DTM with resolution smaller than the block size might have little relevance while increasing computational burden. The objective of this work is to investigate the issue of scale relevance with simulations based on RockyFor3D in order to derive guidelines for roughness estimation by field experts. First a sensitivity analysis is performed to identify the combinations of parameters (slope, soil roughness parameter, rock size) where the roughness values have a critical effect on rock propagation on a regular hillside. Second, a more complex hillside is simulated by combining three components: a) a global trend (planar surface), b) local systematic components (sine waves), c) random roughness (Gaussian, zero-mean noise). The parameters for simulating these components are estimated for three typical scenarios of rockfall terrains: soft soil, fine scree and coarse scree, based on expert knowledge and available airborne and terrestrial laser scanning data. For each scenario, the reference terrain is created and used to compute input data for RockyFor3D simulations at different scales, i.e. DTMs with resolutions from 0.5 m to 20 m and associated roughness parameters. Subsequent analysis mainly focuses on the sensitivity of simulations both in terms of run-out envelope and kinetic energy distribution. Guidelines drawn from the results are expected to help experts handle the scale issue while integrating remote sensing data and field measurements of roughness in rockfall simulations.
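
    The three-component synthetic hillside used in the second experiment (planar trend, sinusoidal systematic component, Gaussian random roughness) can be generated in a few lines; the amplitudes, wavelength, and slope below are placeholders, not the calibrated values for the soft-soil or scree scenarios.

      import numpy as np

      def synthetic_hillside(size_m=100.0, res_m=0.5, slope_deg=35.0,
                             wave_amp_m=0.3, wave_len_m=8.0, noise_std_m=0.05, seed=0):
          """DTM = planar trend + sine waves + Gaussian roughness (all elevations in m)."""
          n = int(size_m / res_m)
          x, y = np.meshgrid(np.arange(n) * res_m, np.arange(n) * res_m)
          trend = -np.tan(np.radians(slope_deg)) * x                 # a) global planar slope
          waves = wave_amp_m * np.sin(2 * np.pi * x / wave_len_m)    # b) systematic component
          noise = np.random.default_rng(seed).normal(0, noise_std_m, (n, n))  # c) roughness
          return trend + waves + noise

      dtm = synthetic_hillside()
      print("DTM shape:", dtm.shape, " elevation range (m):",
            round(float(dtm.min()), 1), "to", round(float(dtm.max()), 1))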

  5. Preliminary Phase Field Computational Model Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yulan; Hu, Shenyang Y.; Xu, Ke

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in experiments, special experimental methods were devised to create similar boundary conditions in the iron films. Preliminary MFM studies conducted on single and polycrystalline iron films with small sub-areas created with focused ion beam have correlated quite well qualitatively with phase-field simulations. However, phase-field model dimensions are still small relative to experiments thus far. We are in the process of increasing the size of the models and decreasing specimen size so both have identical dimensions. Ongoing research is focused on validation of the phase-field model. Validation is being accomplished through comparison with experimentally obtained MFM images (in progress), and planned measurements of major hysteresis loops and first order reversal curves. Extrapolation of simulation sizes to represent a more stochastic bulk-like system will require sampling of various simulations (i.e., with single non-magnetic defect, single magnetic defect, single grain boundary, single dislocation, etc.) with distributions of input parameters. These outputs can then be compared to laboratory magnetic measurements and ultimately to simulate magnetic Barkhausen noise signals.

  6. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has yet received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  7. Full wave modulator-demodulator amplifier apparatus. [for generating rectified output signal

    NASA Technical Reports Server (NTRS)

    Black, J. M. (Inventor)

    1974-01-01

    A full-wave modulator-demodulator apparatus is described, including an operational amplifier having a first input terminal coupled to a circuit input terminal and a second input terminal alternately coupled to the circuit input terminal and to circuit ground by a switching circuit responsive to a phase reference signal, so that the operational amplifier is alternately switched between a non-inverting mode and an inverting mode. The switching circuit includes three field-effect transistors operatively associated to provide the desired switching function in response to an alternating reference signal of the same frequency as the AC input signal applied to the circuit input terminal.

  8. Wastewater Treatment Effluent Reduces the Abundance and Diversity of Benthic Bacterial Communities in Urban and Suburban Rivers

    PubMed Central

    Drury, Bradley; Rosi-Marshall, Emma

    2013-01-01

    In highly urbanized areas, wastewater treatment plant (WWTP) effluent can represent a significant component of freshwater ecosystems. As it is impossible for the composition of WWTP effluent to match the composition of the receiving system, the potential exists for effluent to significantly impact the chemical and biological characteristics of the receiving ecosystem. We assessed the impacts of WWTP effluent on the size, activity, and composition of benthic microbial communities by comparing two distinct field sites in the Chicago metropolitan region: a highly urbanized river receiving effluent from a large WWTP and a suburban river receiving effluent from a much smaller WWTP. At sites upstream of effluent input, the urban and suburban rivers differed significantly in chemical characteristics and in the composition of their sediment bacterial communities. Although effluent resulted in significant increases in inorganic nutrients in both rivers, surprisingly, it also resulted in significant decreases in the population size and diversity of sediment bacterial communities. Tag pyrosequencing of bacterial 16S rRNA genes revealed significant effects of effluent on sediment bacterial community composition in both rivers, including decreases in abundances of Deltaproteobacteria, Desulfococcus, Dechloromonas, and Chloroflexi sequences and increases in abundances of Nitrospirae and Sphingobacteriales sequences. The overall effect of the WWTP inputs was that the two rivers, which were distinct in chemical and biological properties upstream of the WWTPs, were almost indistinguishable downstream. These results suggest that WWTP effluent has the potential to reduce the natural variability that exists among river ecosystems and indicate that WWTP effluent may contribute to biotic homogenization. PMID:23315724

  9. Geologic input to enhanced oil recovery project planning in south Oman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, N.L.; Ellis, D.; Heward, A.P.

    1986-05-01

    South Oman clastic reservoirs contain a combined stock-tank oil in place of more than 1.9 billion m³ of predominantly heavy oil distributed in almost 40 fields of varying size. Successful early application of such enhanced oil recovery (EOR) methods as steam flood, polymer drive, and steam soak could realize undiscounted incremental recoveries of 244 million m³ of oil. Target oil is contained in three reservoir intervals with distinct characteristics relevant to EOR. (1) The Cambrian-Ordovician Haima Group is a thick monotonous sequence of continental and coastal sands; major problems are steam-rock reactions, recovery factors, effective kv/kh (ratio of vertical to horizontal permeability), and aquifer strength. (2) The Permian-Carboniferous Al Khlata Formation is a glacial package showing severe heterogeneity, strong permeability anisotropy, and poor predictability. (3) The Permian Gharif Formation is a coastal to fluvial sequence with isolated and multilayer channel sands, smectitic clays, and anomalous primary production performance. Several EOR pilot projects are either ongoing or in preparation as part of a longer term EOR strategy. Geologic input is important at four essential stages of pilot planning: initial project ranking, optimization of pilot location, definition of pilot size, and predictive/history match simulations. Each stage is illustrated using a specific project example from south Oman to show the diverse geologic and logistic problems of the area. Although geologic aspects are highlighted, EOR project planning in south Oman is multidisciplinary, with integration being aided by a dedicated EOR coordination department.

  10. Pulsatile Flow and Gas Transport of Blood over an Array of Cylinders

    NASA Astrophysics Data System (ADS)

    Chan, Kit Yan

    2005-11-01

    In the artificial lung, blood passes through an array of micro-fibers and the gas transfer is strongly dependent on the flow field. The blood flow is unsteady and pulsatile. We have numerically simulated pulsatile flow and gas transfer of blood (modeled as a Casson fluid) over arrays of cylindrical micro-fibers. Oxygen and carbon dioxide are assumed to be in local equilibrium with hemoglobin in blood; and the carbon dioxide facilitated oxygen transport is incorporated into the model by allowing the coupling of carbon dioxide partial pressure and oxygen saturation. The pulsatile flow inputs considered are the sinusoidal and the cardiac waveforms. The squared and staggered arrays of arrangement of the cylinders are considered in this study. Gas transport can be enhanced by: increasing the oscillation frequency; increasing the Reynolds number; increasing the oscillation amplitude; decreasing the void fraction; the use of the cardiac pulsatile input. The overall gas transport is greatly enhanced by the presence of hemoglobin in blood even though the non-Newtonian effect of blood tends to decrease the size and strength of vortices. The pressure drop is also presented as it is an important design parameter confronting the heart.

  11. Study of Solid Particle Behavior in High Temperature Gas Flows

    NASA Astrophysics Data System (ADS)

    Majid, A.; Bauder, U.; Stindl, T.; Fertig, M.; Herdrich, G.; Röser, H.-P.

    2009-01-01

    The Euler-Lagrangian approach is used for the simulation of solid particles in hypersonic entry flows. For flow field simulation, the program SINA (Sequential Iterative Non-equilibrium Algorithm) developed at the Institut für Raumfahrtsysteme is used. The model for the effect of the carrier gas on a particle includes drag force and particle heating only. Other parameters like lift Magnus force or damping torque are not taken into account so far. The reverse effect of the particle phase on the gaseous phase is currently neglected. Parametric analysis is done regarding the impact of variation in the physical input conditions like position, velocity, size and material of the particle. Convective heat fluxes onto the surface of the particle and its radiative cooling are discussed. The variation of particle temperature under different conditions is presented. The influence of various input conditions on the trajectory is explained. A semi empirical model for the particle wall interaction is also discussed and the influence of the wall on the particle trajectory with different particle conditions is presented. The heat fluxes onto the wall due to impingement of particles are also computed and compared with the heat fluxes from the gas.
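
    The particle model described (drag force plus convective heating and radiative cooling of a solid sphere) can be sketched as an explicit time integration. The gas state, particle properties, and the constant drag and heat-transfer coefficients below are assumptions for illustration, not values from the SINA simulations.

      import numpy as np

      # Assumed constant carrier-gas state and particle properties (illustrative only).
      rho_gas, u_gas, T_gas = 1e-3, 3000.0, 6000.0   # kg/m^3, m/s, K
      d_p, rho_p, c_p = 50e-6, 2200.0, 800.0         # particle diameter, density, heat capacity
      cd, h, eps, sigma = 1.0, 2.0e3, 0.8, 5.67e-8   # drag coeff., W/m^2/K, emissivity, SB const.

      area = np.pi * d_p ** 2 / 4.0                  # frontal area
      surface = np.pi * d_p ** 2                     # surface area
      mass = rho_p * np.pi * d_p ** 3 / 6.0

      v, T, dt = 0.0, 300.0, 1e-6                    # initial particle velocity, temperature
      for step in range(20000):                      # 20 ms of flight
          rel = u_gas - v
          drag = 0.5 * rho_gas * cd * area * rel * abs(rel)
          q_conv = h * surface * (T_gas - T)         # convective heat input
          q_rad = eps * sigma * surface * T ** 4     # radiative cooling
          v += dt * drag / mass
          T += dt * (q_conv - q_rad) / (mass * c_p)

      print(f"after 20 ms: particle velocity ~ {v:.0f} m/s, temperature ~ {T:.0f} K")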

  12. On the application of hybrid meshes in hydraulic machinery CFD simulations

    NASA Astrophysics Data System (ADS)

    Schlipf, M.; Tismer, A.; Riedelbauch, S.

    2016-11-01

    The application of two different hybrid mesh types for the simulation of a Francis runner for automated optimization processes without user input is investigated. Those mesh types are applied to simplified test cases such as flow around NACA airfoils to identify the special mesh resolution effects with reduced complexity, like rotating cascade flows, as they occur in a turbomachine runner channel. The analysis includes the application of those different meshes on the geometries by keeping defined quality criteria and exploring the influences on the simulation results. All results are compared with reference values gained by simulations with blockstructured hexahedron meshes and the same numerical scheme. This avoids additional inaccuracies caused by further numerical and experimental measurement methods. The results show that a simulation with hybrid meshes built up by a blockstructured domain with hexahedrons around the blade in combination with a tetrahedral far field in the channel is sufficient to get results which are almost as accurate as the results gained by the reference simulation. Furthermore this method is robust enough for automated processes without user input and enables comparable meshes in size, distribution and quality for different similar geometries as occurring in optimization processes.

  13. Assessing the strength of soil aggregates produced by two types of organic matter amendments using the ultrasonic energy

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaolong; minasny, Budiman; Field, Damien; Angers, Denis

    2017-04-01

    The presence of organic matter (OM) is known to stimulate the formation of soil aggregates, but aggregation strength may vary with the amount and type/quality of OM. Conventionally, the wet sieving method is used to assess aggregate strength. In this study, we sought insight into the effects of different types of C inputs on aggregate dynamics using quantifiable energy via ultrasonic agitation. A clay soil with an inherently low soil organic carbon (SOC) content was amended with two different sources of organic matter (alfalfa, C:N = 16.7, and barley straw, C:N = 95.6) at different input levels (0, 10, 20, and 30 g C kg-1 soil). The soil's inherent macro-aggregates were first destroyed via puddling. The soils were incubated in pots at a moisture content of 70% of field capacity for a period of 3 months; the pots were housed in 1.2 L sealed opaque plastic containers. The CO2 generated during the incubation was captured by a vial of NaOH placed in each sealed container and sampled weekly. At 14, 28, 56, and 84 days, soil samples were collected and the change in aggregation was assessed using a combination of wet sieving and ultrasonic agitation. The relative strength of aggregates exposed to ultrasonic agitation was modelled using the aggregate disruption characteristic curve (ADCC) and the soil dispersion characteristic curve (SDCC). Both the quality and the quantity of the organic matter input influenced the amount of aggregates formed and their relative strength. The MWD of soils amended with alfalfa residues was greater than that of barley straw at lower input rates and early in the incubation. In the longer term, the use of ultrasonic energy revealed that barley straw resulted in stronger aggregates, especially at higher input rates, despite showing a similar MWD to alfalfa. The use of ultrasonic agitation, in which we quantify the energy required to liberate and disperse aggregates, allowed us to differentiate the effects of C inputs on the size of stable aggregates and their relative strength.

  14. How to Assess the Signature of the Data: Catchments and Aquifers as Input Processing Systems

    NASA Astrophysics Data System (ADS)

    Lischeid, G.

    2010-12-01

    It has been argued recently that hydrological models should not only mimic observed data, but should also reproduce the signatures of the data appropriately. However, there is no consensus on how these signatures could be assessed. In general, hydrological models aim at predicting groundwater head dynamics or hydrograph response to input signals (e.g., groundwater recharge, effective rain), based on information about structural properties of the system, like, e.g., transmissivity fields, soil hydraulic conductivity, or size of the catchment water storage. That approach usually faces substantial spatial heterogeneities and nonlinear feedbacks. Here, an alternative approach is suggested for characterizing catchments or aquifers as input signal processing systems. The concept was developed for remote areas where direct anthropogenic effects (groundwater withdrawal, injection wells, etc.), plant water uptake and evaporation from groundwater and streams are negligible. Then, any increase of groundwater head or discharge is related to a corresponding input signal, i.e., groundwater recharge or effective rainfall. That signal propagates through the system and is increasingly attenuated and decelerated with increasing flowpath length. This attenuation differs from simple low-pass filtering; e.g., different input signals propagate at different velocities, depending on rainfall intensity, antecedent soil moisture, etc. The new approach is based on a principal component analysis of time series of groundwater or lake water level, soil water content, or discharge at different sites. This information is used for assessing the functional properties of the system rather than its structural heterogeneity at different measurement sites, and for assessing first order controls on its spatial patterns. Thus, hydrologic measurements provide a means to measure the functional properties of the system, and it is suggested to use these as the signatures of the data. In a next step, the model structure can be optimized, focusing on representing these signatures. Furthermore, even the unknown input signal can be assessed, making the catchment or aquifer a giant effective rain sampler. Examples will be presented including heterogeneous and sparse data sets, and an extension to a more complex system with various production wells of a large water supply works.
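
    The core of the suggested approach, a principal component analysis of many water-level or discharge records to expose how strongly each site damps and delays the common input signal, can be sketched as follows; the synthetic input and the per-site damping time constants are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(2000)

      # Common input signal (e.g. effective rain pulses), then site-specific damping:
      # sites further along the flow path see an increasingly smoothed, delayed signal.
      rain = rng.gamma(0.3, 2.0, t.size)

      def damped(series, tau):
          out, state = np.empty_like(series), 0.0
          for i, x in enumerate(series):
              state += (x - state) / tau
              out[i] = state
          return out

      heads = np.column_stack([damped(rain, tau) for tau in (2, 5, 15, 40, 120)])
      heads += 0.02 * rng.normal(size=heads.shape)

      # Principal component analysis of the standardized multi-site records.
      z = (heads - heads.mean(axis=0)) / heads.std(axis=0)
      eigval, eigvec = np.linalg.eigh(np.cov(z.T))
      order = np.argsort(eigval)[::-1]
      explained = eigval[order] / eigval.sum()
      print("variance explained by components:", np.round(explained, 3))
      print("loadings of sites on the first component:", np.round(eigvec[:, order[0]], 2))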

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schalk, W.W. III

    Early actions of emergency responders during hazardous material releases are intended to assess contamination and potential public exposure. As measurements are collected, integrating model calculations with measurements can help to better understand the situation. This study applied a high-resolution version of the operational 3-D numerical models used by Lawrence Livermore National Laboratory to a limited meteorological and tracer data set to assist in the interpretation of the dispersion pattern on a 140 km scale. The data set was collected by the United States Air Force from a tracer release during the morning surface inversion and transition period in the complex terrain of the Snake River Plain near Idaho Falls, Idaho, in November 1993. Sensitivity studies were conducted to determine the model input parameters that best represented the study environment. These studies showed that mixing and boundary layer heights, atmospheric stability, and rawinsonde data are the most important model input parameters affecting wind field generation and tracer dispersion. Numerical models and limited measurement data were used to interpret dispersion patterns through data analysis, model input determination, and sensitivity studies. Comparison of the best-estimate calculation to measurement data showed that model results compared well with the aircraft data but had moderate success with the few surface measurements taken. The moderate success of the surface measurement comparison may be due to limited downward mixing of the tracer as a result of the model resolution, which was determined by the domain size selected to study the overall plume dispersion. 8 refs., 40 figs., 7 tabs.

  16. Oceans Apart: Using Stable Isotopes to Assess the Role of Fog in Two Semi-Arid Island Ecosystems

    NASA Astrophysics Data System (ADS)

    Schmitt, S.; Riveros-Iregui, D.; Hu, J.

    2017-12-01

    Fog is a significant hydrologic input in many tropical island systems, and is a water source particularly susceptible to the effects of global climate change. To better understand the role of fog as a hydrological input on two oceanic islands, we address two principal questions: 1) Do seasonal or extreme precipitation events lead to distinguishable differences in stable isotopic signatures of water inputs within and between sites and islands? 2) Does microclimatic zonation lead to distinguishable differences in isotopic signatures of meteoric inputs between different sites on a given island? To perform this analysis, meteoric water samples (fog, rain, and throughfall) were collected over three sites (one windward and two leeward) and three field seasons in San Cristobal, Galapagos, to ascertain the isotopic signature of each water balance input during different times of year. An additional field season of data from Ascension Island, UK, was also used to perform a comparative analysis between islands. A stable isotope mixing model was used to determine the relative proportion of surface water and groundwater that is composed of fog, and to demonstrate spatiotemporal patterns of recharge dynamics in each island system. Local meteoric water lines were generated for each site and over each field season to determine the source of hydrologic inputs (trade wind-generated orographic precipitation versus storm precipitation) and the role of locally recycled water in the overall water balance of each site. Our results approximate potential changes in water inputs to San Cristobal and Ascension that could result from an increase in cloud base height or a change in weather patterns brought about by climate change.

  17. What Counts as Effective Input for Word Learning?

    ERIC Educational Resources Information Center

    Shneidman, Laura A.; Arroyo, Michelle E.; Levine, Susan C.; Goldin-Meadow, Susan

    2013-01-01

    The talk children hear from their primary caregivers predicts the size of their vocabularies. But children who spend time with multiple individuals also hear talk that others direct to them, as well as talk not directed to them at all. We investigated the effect of linguistic input on vocabulary acquisition in children who routinely spent time…

  18. The Effect of Meaning-Focused Listening Input on Iranian Intermediate EFL Learners' Productive Vocabulary Size

    ERIC Educational Resources Information Center

    Noughabi, Mostafa Azari

    2017-01-01

    Vocabulary as a significant component of language learning has been widely researched. As well, it is well documented that vocabulary could be learned through listening and reading. In addition, measuring productive vocabulary has been a chief concern among scholars. However, few studies have focused on meaning-focused listening input and its…

  19. Input output scaling relations in Italian manufacturing firms

    NASA Astrophysics Data System (ADS)

    Bottazzi, Giulio; Grazzi, Marco; Secchi, Angelo

    2005-09-01

    Recent analyses of different databases have revealed some regularities in the size and growth rate distributions of firms. In this work we explore some basic properties of the dynamics of productivity in Italian manufacturing firms. We investigate the relations between different inputs and output, examining the impact of productivity in shaping the pattern of corporate evolution.

  20. A mathematical model for predicting fire spread in wildland fuels

    Treesearch

    Richard C. Rothermel

    1972-01-01

    A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environments is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting the input parameters by surface area. The input parameters do not require prior knowledge of the burning characteristics of the fuel.
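
    The surface-area weighting mentioned above combines fuel particles of several size classes into single characteristic input values by weighting each class by its surface area per unit ground area. A schematic sketch of that idea (the surface-area-to-volume ratios, loadings, and particle density are illustrative values, not Rothermel's published fuel-model inputs) is:

      import numpy as np

      # Hypothetical fuel size classes: surface-area-to-volume ratio (1/ft),
      # oven-dry loading (lb/ft^2), and particle density (lb/ft^3).
      sigma = np.array([3500.0, 300.0, 100.0])
      loading = np.array([0.05, 0.03, 0.10])
      rho_p = 32.0

      # Surface area contributed by each class per unit ground area, and its weight
      area = sigma * loading / rho_p
      f = area / area.sum()

      # Characteristic (surface-area-weighted) surface-area-to-volume ratio
      sigma_char = np.sum(f * sigma)
      print(f"characteristic sigma = {sigma_char:.0f} 1/ft")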

  1. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    PubMed

    Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram

    2017-04-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.

  2. Constraining the phantom braneworld model from cosmic structure sizes

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Kousvos, Stefanos R.

    2017-11-01

    We consider the phantom braneworld model in the context of the maximum turnaround radius, RTA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity is balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving the maximum upper bound on the size of a stable structure. In this work we derive an analytical expression of RTA,max for this model using cosmological scalar perturbation theory. Using this we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of the Λ cold dark matter model, as the input in our analysis. We show, in particular, that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; and (b) if it is positive, the predicted maximum size can go considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
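
    For orientation, in the standard ΛCDM limit the maximum turnaround radius of a structure of mass M follows from balancing the Newtonian attraction of the central mass against the repulsion of the cosmological constant; this benchmark expression (not the braneworld result derived in the paper) reads, in LaTeX form:

      \[
        \frac{GM}{R^{2}} = \frac{\Lambda c^{2}}{3}\,R
        \quad\Longrightarrow\quad
        R_{\mathrm{TA,max}} = \left(\frac{3\,GM}{\Lambda c^{2}}\right)^{1/3} .
      \]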

  3. Evaluation of Bio-optical Models for Discriminating Phytoplankton Functional Types and Size Classes in Eastern U.S. Coastal Waters with Approaches to Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Neeley, A. R.; Goes, J. I.; Jenkins, C. A.; Harris, L.

    2016-02-01

    Phytoplankton species can be separated into phytoplankton functional types (PFTs) or size classes (PSCs; Micro-, Nano-, and Picoplankton). Bio-optical models have been developed to use satellite-derived products to discriminate PSCs and PFTs, a recommended field measurement for the future NASA PACE mission. The proposed 5 nm spectral resolution of the PACE ocean color sensor will improve detection of PSCs and PFTs by discriminating finer optical features not detected at the spectral resolution of current satellite-borne instruments. In preparation for PACE, new and advanced models are under development that require accurate data for validation. Phytoplankton pigment data have long been collected from aquatic environments and are widely used to model PSC and PFT abundances using two well-known methods: Diagnostic Pigment Analysis (DPA) and Chemical Taxonomy (ChemTax), respectively. Here we present the results of an effort to evaluate five bio-optical PFT models using data from a field campaign off the coast of the Eastern U.S. in November 2014: two based on biomass (Chlorophyll a), two based on light absorption properties of phytoplankton, and one based on the inversion of remote sensing reflectances. PFT model performance is evaluated using phytoplankton taxonomic data from a FlowCam sensor and DPA and ChemTax analyses using pigment data collected during the field campaign in a variety of water types and optical complexities (e.g., coastal, blue water, eddies and fronts). Relative strengths of the model approaches will be presented as a model validation exercise using both in situ and satellite-derived input products.

  4. Size invariance does not hold for connectionist models: dangers of using a toy model.

    PubMed

    Yamaguchi, Makoto

    2004-03-01

    Connectionist models with backpropagation learning rule are known to have a serious problem called catastrophic interference or forgetting, although there have been several reports showing that the interference can be relatively mild with orthogonal inputs. The present study investigated the extent of interference using orthogonal inputs with varying network sizes. One would naturally assume that results obtained from small networks could be extrapolated for larger networks. Unexpectedly, the use of small networks was shown to worsen performance. This result has important implications for interpreting some data in the literature and cautions against the use of a toy model. Copyright 2004 Lippincott Williams & Wilkins
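
    As a toy illustration of the experimental design (sequential backpropagation training on two sets of mutually orthogonal one-hot inputs while the network size varies), one could use a sketch like the following; it is a hypothetical setup in the spirit of the study and makes no attempt to reproduce its quantitative result:

      import numpy as np

      def interference(n_units, lr=0.5, epochs=200, seed=0):
          """Train a two-layer sigmoid net on set A, then on set B, and report the
          error on set A before and after learning B (the interference)."""
          rng = np.random.default_rng(seed)
          X = np.eye(n_units)                                       # orthogonal inputs
          T = rng.integers(0, 2, size=(n_units, 1)).astype(float)   # random binary targets
          half = n_units // 2
          W1 = rng.normal(scale=0.1, size=(n_units, n_units))
          W2 = rng.normal(scale=0.1, size=(n_units, 1))
          sig = lambda a: 1.0 / (1.0 + np.exp(-a))

          def fit(idx):
              nonlocal W1, W2
              for _ in range(epochs):
                  h = sig(X[idx] @ W1)
                  y = sig(h @ W2)
                  delta = (y - T[idx]) * y * (1 - y)
                  gW2 = h.T @ delta
                  gW1 = X[idx].T @ ((delta @ W2.T) * h * (1 - h))
                  W2 -= lr * gW2
                  W1 -= lr * gW1

          def err_on_A():
              return np.mean((sig(sig(X[:half] @ W1) @ W2) - T[:half]) ** 2)

          fit(np.arange(half))            # learn set A
          before = err_on_A()
          fit(np.arange(half, n_units))   # then learn set B
          return before, err_on_A()

      for n in (8, 32, 128):
          print(n, interference(n))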

  5. Reconstructing solar magnetic fields from historical observations. II. Testing the surface flux transport model

    NASA Astrophysics Data System (ADS)

    Virtanen, I. O. I.; Virtanen, I. I.; Pevtsov, A. A.; Yeates, A.; Mursula, K.

    2017-07-01

    Aims: We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. Methods: We tested the model by running simulations with different values of meridional circulation and supergranular diffusion parameters, and studied how the flux distribution inside active regions and the initial magnetic field affected the simulation. We compared the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion, and input data. We also compared the simulated magnetic field with observations. Results: We find that there is generally good agreement between simulations and observations. Although the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model the effects of the uncertainties are somewhat minor or temporary, lasting typically one solar cycle.

  6. Factors controlling the size of graphene oxide sheets produced via the graphite oxide route.

    PubMed

    Pan, Shuyang; Aksay, Ilhan A

    2011-05-24

    We have studied the effect of the oxidation path and the mechanical energy input on the size of graphene oxide sheets derived from graphite oxide. The cross-planar oxidation of graphite from the (0002) plane results in periodic cracking of the uppermost graphene oxide layer, limiting its lateral dimension to less than 30 μm. We use an energy balance between the elastic strain energy associated with the undulation of graphene oxide sheets at the hydroxyl and epoxy sites, the crack formation energy, and the interaction energy between graphene layers to determine the cell size of the cracks. As the effective crack propagation rate in the cross-planar direction is an order of magnitude smaller than the edge-to-center oxidation rate, graphene oxide single sheets larger than those defined by the periodic cracking cell size are produced depending on the aspect ratio of the graphite particles. We also demonstrate that external energy input from hydrodynamic drag created by fluid motion or sonication further reduces the size of the graphene oxide sheets through tensile stress buildup in the sheets.

  7. Metamodeling as a tool to size vegetative filter strips for surface runoff pollution control in European watersheds.

    NASA Astrophysics Data System (ADS)

    Lauvernet, Claire; Muñoz-Carpena, Rafael; Carluer, Nadia

    2015-04-01

    In Europe, a significant presence of contaminants is found in surface water, partly due to pesticide applications. Vegetative filter strips or buffer zones (VFS), often located along rivers, are a common best management practice (BMP) to reduce non-point-source pollution of water by reducing surface runoff. However, they need to be adapted to the agro-ecological and climatic conditions, both in terms of position and size, in order to be efficient. The TOPPS-PROWADIS project involves European experts and stakeholders to develop and recommend BMPs to reduce pesticide transfer by drift or runoff in several European countries. In this context, IRSTEA developed a guide accompanying the use of different tools, which supports the design of site-specific VFS by simulating their efficiency in limiting transfers with the mechanistic model VFSMOD. This method, although very complete, assumes that the user provides detailed field knowledge and data, which are not always easily available. The aim of this study is to assist buffer sizing with a single tool that uses a reduced set of parameters adapted to the information available to end-users. To fill the lack of real data in many practical applications, a set of virtual scenarios was selected to encompass a large range of agro-pedo-climatic conditions in Europe, considering both the upslope agricultural field and the VFS characteristics. As a first step, in this work we present scenarios based on the climate of north-west France, consisting of different rainfall intensities and durations, hillslope lengths and slopes, humidity conditions, a large set of field rainfall/runoff characteristics for the contributing area, and several shallow water table depths and soil types for the VFS. The sizing method based on the mechanistic model VFSMOD was applied to all these scenarios, and a global sensitivity analysis (GSA) of the optimal VFS length was performed over all input parameters in order to understand their influence and interactions and to set priorities for data collection and management. Based on the GSA results, we compared several mathematical methods for computing the metamodel and then validated it on an agricultural watershed with real data in north-west France. The analysis procedure yields a robust and validated metamodel, before extending it to other climatic conditions so that the approach can be applied to a large range of European watersheds. The tool will allow comparison of field scenarios and the validation/improvement of existing VFS placements and sizing.
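
    The final product is a metamodel that maps scenario descriptors directly to an optimal VFS length without rerunning VFSMOD. A rough sketch of how such a surrogate could be fitted (the scenario variables and the synthetic response below are stand-ins for the VFSMOD runs, not results from the study) is:

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      n = 500

      # Hypothetical scenario inputs: rainfall depth (mm), slope (%),
      # hillslope length (m), water table depth (m).
      X = np.column_stack([
          rng.uniform(10, 80, n),
          rng.uniform(1, 15, n),
          rng.uniform(50, 400, n),
          rng.uniform(0.3, 3.0, n),
      ])
      # Synthetic "optimal filter length" standing in for the VFSMOD-based sizing output
      y = 0.15 * X[:, 0] + 0.8 * X[:, 1] + 0.02 * X[:, 2] - 2.0 * X[:, 3] \
          + rng.normal(scale=1.0, size=n)

      # Second-order polynomial metamodel of the optimal VFS length
      meta = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
      meta.fit(X, y)
      print("R^2 on the training scenarios:", round(meta.score(X, y), 3))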

  8. Application of the graphics processor unit to simulate a near field diffraction

    NASA Astrophysics Data System (ADS)

    Zinchik, Alexander A.; Topalov, Oleg K.; Muzychenko, Yana B.

    2017-06-01

    For many years, computer modeling programs have been used for lecture demonstrations. Most of the existing commercial software, such as Virtual Lab from the LightTrans GmbH company, is quite expensive and has surplus capabilities for educational tasks. The difficulty of diffraction demonstrations in the near zone is due to the large amount of computation required to obtain the two-dimensional distribution of amplitude and phase. To date, there are no demonstrations that show the resulting amplitude and phase distributions without substantial delay. Even when Fast Fourier Transform (FFT) algorithms are used, the near-zone diffraction calculation for input complex amplitude distributions larger than 2000 × 2000 pixels takes tens of seconds. Our program selects the appropriate propagation operator from a prescribed set of operators, including spectrum of plane waves propagation and Rayleigh-Sommerfeld propagation (using convolution). After implementation, we compared the near-field diffraction calculation times on the GPU and the CPU, showing that using the GPU to calculate the diffraction pattern in the near zone does increase the overall speed of the algorithm for images of 2048 × 2048 sampling points and more. The modules are implemented as separate dynamic-link libraries and can be used for lecture demonstrations, workshops, self-study, and by students in solving various problems such as the phase retrieval task.
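
    One of the propagation operators named above, the spectrum-of-plane-waves (angular spectrum) method, reduces near-field propagation to two FFTs and a multiplication by a transfer function in the spatial-frequency domain. A minimal NumPy sketch of that operator (grid size, wavelength, and distance are illustrative, and no GPU acceleration is shown) is:

      import numpy as np

      def angular_spectrum_propagate(u0, wavelength, z, dx):
          """Propagate the sampled complex field u0 over a distance z using the
          spectrum-of-plane-waves (angular spectrum) method."""
          n, m = u0.shape
          fx = np.fft.fftfreq(m, d=dx)
          fy = np.fft.fftfreq(n, d=dx)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
          # Transfer function; evanescent components (arg < 0) are suppressed
          H = np.exp(2j * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
          return np.fft.ifft2(np.fft.fft2(u0) * H)

      # Illustrative use: a circular aperture propagated 5 cm at 633 nm
      N, dx = 1024, 5e-6
      x = (np.arange(N) - N / 2) * dx
      X, Y = np.meshgrid(x, x)
      aperture = (X ** 2 + Y ** 2 < (0.4e-3) ** 2).astype(complex)
      intensity = np.abs(angular_spectrum_propagate(aperture, 633e-9, 0.05, dx)) ** 2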

  9. Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR

    PubMed Central

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Given the problems in intelligent gearbox diagnosis methods, it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault; thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis. PMID:22399894
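
    The first two steps described above — wavelet packet decomposition of the vibration signal and an SVM fed with the band energy coefficients — can be sketched roughly as follows; the synthetic signals, wavelet choice, and decomposition level are assumptions for illustration, not values from the paper:

      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def band_energy_features(signal, wavelet="db4", level=3):
          """Normalized energy of each frequency band of a wavelet packet decomposition."""
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
          energies = np.array([np.sum(node.data ** 2)
                               for node in wp.get_level(level, order="freq")])
          return energies / energies.sum()

      rng = np.random.default_rng(0)
      t = np.arange(2048) / 2048.0

      def make_signal(faulty):
          base = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.normal(size=t.size)
          if faulty:  # crude stand-in for fault-induced high-frequency impulses
              base += 0.8 * np.sin(2 * np.pi * 450 * t) * (rng.random(t.size) > 0.97)
          return base

      X = np.array([band_energy_features(make_signal(i % 2 == 1)) for i in range(60)])
      y = np.array([i % 2 for i in range(60)])

      clf = SVC(kernel="rbf").fit(X[:40], y[:40])
      print("held-out accuracy:", clf.score(X[40:], y[40:]))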

  10. KENIS: a high-performance thermal imager developed using the OSPREY IR detector

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Baker, Ian M.

    2000-07-01

    'KENIS', a complete, high-performance, compact, and lightweight thermal imager, is built around the 'OSPREY' infrared detector from BAE Systems Infrared Ltd. The 'OSPREY' detector uses a 384 x 288 element CMT array with a 20 micrometer pixel size, cooled to 120 K. The relatively small pixel size results in very compact cryogenics and optics, and the relatively high operating temperature provides fast start-up time, low power consumption, and long operating life. Requiring a single input supply voltage and consuming less than 30 watts of power, the thermal imager generates both analogue and digital format outputs. The 'KENIS' lens assembly features a near diffraction-limited dual field-of-view optical system that has been designed to be athermalized and switches between fields in less than one second. The 'OSPREY' detector produces near background-limited performance with few defects and has special pixel-level circuitry to eliminate crosstalk and blooming effects. This, together with signal processing based on an effective two-point fixed-pattern-noise correction algorithm, results in high-quality imagery and a thermal imager that is suitable for most traditional thermal imaging applications. This paper describes the rationale used in the development of the 'KENIS' thermal imager and highlights the potential performance benefits to the user's system, primarily gained by selecting the 'OSPREY' infrared detector within the core of the thermal imager.

  12. Airline return-on-investment model for technology evaluation. [computer program to measure economic value of advanced technology applied to passenger aircraft

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This report presents the derivation, description, and operating instructions for a computer program (TEKVAL) which measures the economic value of advanced technology features applied to long-range commercial passenger aircraft. The program consists of three modules: an airplane sizing routine, a direct operating cost routine, and an airline return-on-investment routine. These modules are linked such that they may be operated sequentially or individually, with one routine generating the input for the next or with the option of externally specifying the input for either of the economic routines. A very simple airplane sizing technique was previously developed, based on the Breguet range equation. For this program, that sizing technique has been greatly expanded and combined with the formerly separate DOC and ROI programs to produce TEKVAL.
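
    The sizing routine rests on the Breguet range equation, which ties range to cruise speed, specific fuel consumption, lift-to-drag ratio, and the initial-to-final weight ratio. A small sketch of using it to back out the cruise fuel fraction for a given mission (all numbers are illustrative, not taken from TEKVAL) is:

      import math

      def cruise_fuel_fraction(range_km, speed_kmh, tsfc_per_h, lift_to_drag):
          """Breguet range equation for jet cruise,
             R = (V / c) * (L/D) * ln(W_initial / W_final),
          solved for the fuel fraction 1 - W_final / W_initial."""
          weight_ratio = math.exp(range_km * tsfc_per_h / (speed_kmh * lift_to_drag))
          return 1.0 - 1.0 / weight_ratio

      # Illustrative long-range transport: 6500 km at 850 km/h, TSFC 0.6 1/h, L/D 17
      print(f"cruise fuel fraction = {cruise_fuel_fraction(6500.0, 850.0, 0.6, 17.0):.1%}")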

  13. Photonic Waveguide Choke Joint with Non-Absorptive Loading

    NASA Technical Reports Server (NTRS)

    Wollack, Edward J. (Inventor); U-Yen, Kongpop (Inventor); Chuss, David T. (Inventor)

    2016-01-01

    A waveguide choke joint includes a first array of pillars positioned on a substrate, each pillar in the first array of pillars having a first size and configured to receive an input plane wave at a first end of the choke joint. The choke joint has a second end configured to transmit the input plane wave away from the choke joint. The choke joint further includes a second array of pillars positioned on the substrate between the first array of pillars and the second end of the choke joint. Each pillar in the second array of pillars has a second size. The choke joint also has a third array of pillars positioned on the substrate between the second array and the second end of the choke joint. Each pillar in the third array of pillars has a third size.

  14. Quantal and Nonquantal Transmission in Calyx-Bearing Fibers of the Turtle Posterior Crista

    PubMed Central

    Holt, Joseph C.; Chatlani, Shilpa; Lysakowski, Anna; Goldberg, Jay M.

    2010-01-01

    Intracellular recordings were made from nerve fibers in the posterior ampullary nerve near the neuroepithelium. Calyx-bearing afferents were identified by their distinctive efferent-mediated responses. Such fibers receive inputs from both type I and type II hair cells. Type II inputs are made by synapses on the outer face of the calyx ending and on the boutons of dimorphic fibers. Quantal activity, consisting of brief mEPSPs, is reduced by lowering the external concentration of Ca2+ and blocked by the AMPA-receptor antagonist CNQX. Poisson statistics govern the timing of mEPSPs, which occur at high rates (250–2,500/s) in the absence of mechanical stimulation. Excitation produced by canal-duct indentation can increase mEPSP rates to nearly 5,000/s. As the rate increases, mEPSPs can change from a monophasic depolarization to a biphasic depolarizing– hyperpolarizing sequence, both of whose components are blocked by CNQX. Blockers of voltage-gated currents affect mEPSP size, which is decreased by TTX and is increased by linopirdine. mEPSP size decreases several fold after impalement. The size decrease, although it may be triggered by the depolarization occurring during impalement, persists even at hyperpolarized membrane potentials. Nonquantal transmission is indicated by shot-noise calculations and by the presence of voltage modulations after quantal activity is abolished pharmacologically. An ultrastructural study shows that inner-face inputs from type I hair cells outnumber outer-face inputs from type II hair cells by an almost 6:1 ratio. PMID:17596419

  15. Representation of Non-Spatial and Spatial Information in the Lateral Entorhinal Cortex

    PubMed Central

    Deshmukh, Sachin S.; Knierim, James J.

    2011-01-01

    Some theories of memory propose that the hippocampus integrates the individual items and events of experience within a contextual or spatial framework. The hippocampus receives cortical input from two major pathways: the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). During exploration in an open field, the firing fields of MEC grid cells form a periodically repeating, triangular array. In contrast, LEC neurons show little spatial selectivity, and it has been proposed that the LEC may provide non-spatial input to the hippocampus. Here, we recorded MEC and LEC neurons while rats explored an open field that contained discrete objects. LEC cells fired selectively at locations relative to the objects, whereas MEC cells were weakly influenced by the objects. These results provide the first direct demonstration of a double dissociation between LEC and MEC inputs to the hippocampus under conditions of exploration typically used to study hippocampal place cells. PMID:22065409

  16. Optimized mode-field adapter for low-loss fused fiber bundle signal and pump combiners

    NASA Astrophysics Data System (ADS)

    Koška, Pavel; Baravets, Yauhen; Peterka, Pavel; Písařík, Michael; Bohata, Jan

    2015-03-01

    In this contribution we report a novel mode field adapter incorporated inside a bundled tapered pump and signal combiner. Pump and signal combiners are crucial components of contemporary double-clad high-power fiber lasers. The proposed combiner allows simultaneous matching to a single-mode core at the input and the output. We used advanced optimization techniques to match the combiner to a single-mode core simultaneously at the input and the output and to minimize the losses of the combiner signal branch. We designed two arrangements of the combiner's mode field adapters. Our numerical simulations estimate the losses in the signal branches of the optimized combiners at 0.23 dB for the first design and 0.16 dB for the second design, for an SMF-28 input fiber and an SMF-28-matched output double-clad fiber at a wavelength of 2000 nm. The splice losses of the actual combiner are expected to be even lower thanks to dopant diffusion during the splicing process.
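
    For context, the loss at a joint between two fibers with mismatched fundamental-mode field diameters is commonly estimated from the Gaussian mode-overlap formula; a quick sketch of that textbook estimate (the mode field diameters are illustrative, not the values designed in this work) is:

      import math

      def mode_mismatch_loss_db(mfd1_um, mfd2_um):
          """Loss (dB) from Gaussian mode-field diameter mismatch:
             loss = -10*log10( (2*w1*w2 / (w1^2 + w2^2))^2 )."""
          coupling = (2.0 * mfd1_um * mfd2_um / (mfd1_um ** 2 + mfd2_um ** 2)) ** 2
          return -10.0 * math.log10(coupling)

      # Illustrative: an SMF-28-like fiber (~10.4 um MFD) joined to an 8 um MFD fiber
      print(f"{mode_mismatch_loss_db(10.4, 8.0):.2f} dB")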

  17. A Secure and Reliable High-Performance Field Programmable Gate Array for Information Processing

    DTIC Science & Technology

    2012-03-01

    Excerpt (search snippets): "... receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select an input port. The ... dual of a merge. ..." References cited include IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 26, No. 2, February 2007, and Cadence Design Systems, "Clock Domain ..."

  18. Discrimination between induced, triggered, and natural earthquakes close to hydrocarbon reservoirs: A probabilistic approach based on the modeling of depletion-induced stress changes and seismological source parameters

    NASA Astrophysics Data System (ADS)

    Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank

    2015-04-01

    Earthquakes occurring close to hydrocarbon fields under production are often under critical view of being induced or triggered. However, clear and testable rules to discriminate the different events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered in terms of the integration over probability density functions. The probability that events have been human triggered/induced is derived from the modeling of Coulomb stress changes and a rate and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei of strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced or only triggered events is theoretically possible if probability functions are convolved with a rupture fault filter. We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly "human induced," "not even human triggered," and a third case in between both extremes.

  19. Determination and representation of electric charge distributions associated with adverse weather conditions

    NASA Technical Reports Server (NTRS)

    Rompala, John T.

    1992-01-01

    Algorithms are presented for determining the size and location of electric charges which model storm systems and lightning strikes. The analysis utilizes readings from a grid of ground-level field mills and geometric constraints on parameters to arrive at a representative set of charges. This set is used to generate three-dimensional graphical depictions of the charges as well as contour maps of the ground-level electrical environment over the grid. The composite analytic and graphic package is demonstrated and evaluated using controlled input data and archived data from a storm system. The results demonstrate the package's utility as: an operational tool in appraising adverse weather conditions; a research tool in studies of topics such as storm structure, storm dynamics, and lightning; and a tool in designing and evaluating grid systems.
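
    The inversion underlying such algorithms fits charge parameters so that the modeled ground-level vertical field matches the field mill readings. A much-simplified single-charge sketch (a point charge over a perfectly conducting ground, with a hypothetical mill grid and synthetic readings; the actual package handles multiple charges and geometric constraints) could be:

      import numpy as np
      from scipy.optimize import least_squares

      K = 8.99e9  # Coulomb constant, N*m^2/C^2

      def ground_field(params, mills_xy):
          """Vertical E-field at ground-level mill positions from a point charge at
          (x, y, h); the factor 2 accounts for the image charge in the ground."""
          x, y, h, q = params
          r2 = (mills_xy[:, 0] - x) ** 2 + (mills_xy[:, 1] - y) ** 2 + h ** 2
          return 2.0 * K * q * h / r2 ** 1.5

      # Hypothetical 5 x 5 grid of field mills at 1 km spacing and a synthetic charge
      mills = np.array([[i * 1000.0, j * 1000.0] for i in range(5) for j in range(5)])
      true_params = np.array([1800.0, 2600.0, 6000.0, -20.0])   # x, y, height (m), charge (C)
      noise = 1 + 0.02 * np.random.default_rng(0).normal(size=len(mills))
      observed = ground_field(true_params, mills) * noise

      fit = least_squares(lambda p: ground_field(p, mills) - observed,
                          x0=[2000.0, 2000.0, 5000.0, -10.0])
      print("recovered (x, y, h, Q):", np.round(fit.x, 1))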

  20. U.S. Geological Survey ArcMap Sediment Classification tool

    USGS Publications Warehouse

    O'Malley, John

    2007-01-01

    The U.S. Geological Survey (USGS) ArcMap Sediment Classification tool is a custom toolbar that extends the Environmental Systems Research Institute, Inc. (ESRI) ArcGIS 9.2 Desktop application to aid in the analysis of seabed sediment classification. The tool uses as input either a point data layer with field attributes containing the percentages of gravel, sand, silt, and clay, or four raster data layers each representing a percentage of sediment (0-100%) for the sediment grain-size analyses: sand, gravel, silt, and clay. The tool is designed to analyze the percentage of each sediment type at a given location and classify the sediments according to either the Folk (1954, 1974) classification scheme or the Shepard (1954) scheme as modified by Schlee (1973). The sediment analysis tool is based upon the USGS SEDCLASS program (Poppe et al., 2004).
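
    As a rough illustration of this kind of percentage-based classification (a simplified, hypothetical rule set, not the actual Folk/Shepard/SEDCLASS logic), a point-attribute classifier might look like:

      def classify_sediment(gravel, sand, silt, clay):
          """Very simplified classification from grain-size percentages. Thresholds are
          illustrative only; the USGS tool follows Folk or Shepard (as modified by
          Schlee) exactly."""
          total = gravel + sand + silt + clay
          if abs(total - 100.0) > 1.0:
              raise ValueError("percentages should sum to about 100")
          if gravel >= 30.0:
              return "gravelly sediment"
          mud = silt + clay
          if sand >= 75.0:
              return "sand"
          if mud >= 75.0:
              return "silty clay" if clay > silt else "clayey silt"
          return "sand-silt-clay mixture"

      print(classify_sediment(2.0, 55.0, 30.0, 13.0))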

  1. Effect of wake structure on blade-vortex interaction phenomena: Acoustic prediction and validation

    NASA Technical Reports Server (NTRS)

    Gallman, Judith M.; Tung, Chee; Schultz, Klaus J.; Splettstoesser, Wolf; Buchholz, Heino

    1995-01-01

    During the Higher Harmonic Control Aeroacoustic Rotor Test, extensive measurements of the rotor aerodynamics, the far-field acoustics, the wake geometry, and the blade motion for powered, descent, flight conditions were made. These measurements have been used to validate and improve the prediction of blade-vortex interaction (BVI) noise. The improvements made to the BVI modeling after the evaluation of the test data are discussed. The effects of these improvements on the acoustic-pressure predictions are shown. These improvements include restructuring the wake, modifying the core size, incorporating the measured blade motion into the calculations, and attempting to improve the dynamic blade response. A comparison of four different implementations of the Ffowcs Williams and Hawkings equation is presented. A common set of aerodynamic input has been used for this comparison.

  2. Reflections on the Future of Pharmaceutical Public-Private Partnerships: From Input to Impact.

    PubMed

    de Vrueh, Remco L A; Crommelin, Daan J A

    2017-10-01

    Public Private Partnerships (PPPs) are multiple stakeholder partnerships designed to improve research efficacy. We focus on PPPs in the biomedical/pharmaceutical field, which emerged as a logical result of the open innovation model. Originally, a typical PPP was based on an academic and an industrial pillar, with governmental or other third party funding as an incentive. Over time, other players joined in, often health foundations, patient organizations, and regulatory scientists. This review discusses reasons for initiating a PPP, focusing on precompetitive research. It looks at typical expectations and challenges when starting such an endeavor, the characteristics of PPPs, and approaches to assessing the success of the concept. Finally, four case studies are presented, of PPPs differing in size, geographical spread, and research focus.

  3. Complexity and non-commutativity of learning operations on graphs.

    PubMed

    Atmanspacher, Harald; Filk, Thomas

    2006-07-01

    We present results from numerical studies of supervised learning operations in small recurrent networks considered as graphs, leading from a given set of input conditions to predetermined outputs. Graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors, which form a representation space for an associative multiplicative structure of input operations. As the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs, this structure is generally non-commutative. Moreover, the size of the set of attractors, indicating the complexity of learning, is found to behave non-monotonically as learning proceeds. A tentative relation between this complexity and the notion of pragmatic information is indicated.

  4. Development of a size reduction equation for woody biomass: The influence of branch wood properties on Rittinger's constant

    DOE PAGES

    Naimi, Ladan J.; Sokhansanj, Shahabaddine; Bi, Xiaotao; ...

    2015-11-25

    Size reduction is an essential but energy-intensive process for preparing biomass for conversion processes. Three well-known scaling equations (Bond, Kick, and Rittinger) are used to estimate the energy input for grinding minerals and food particles. Previous studies have shown that the Rittinger equation has the best fit for predicting the energy input for grinding cellulosic biomass. In the Rittinger equation, Rittinger's constant (kR) is independent of the size of the ground particles, yet we noted large variations in kR among similar particle size ranges. In this research, the dependence of kR on the physical structure and chemical composition of a number of woody materials was explored. Branches from two softwood species (Douglas fir and pine) and two hardwood species (aspen and poplar) were ground in a laboratory knife mill. The recorded data included power input, mass flow rate, and particle size before and after grinding. Nine material properties were determined: particle density, solid density (pycnometer and x-ray diffraction methods), microfibril angle, fiber coarseness, fiber length, and composition (lignin and cellulose glucan contents). The correlation matrix among the nine properties revealed high degrees of interdependence between properties. The kR value had the largest positive correlation (+0.60) with particle porosity across the species tested. As a result, particle density was strongly correlated with lignin content (0.85), microfibril angle (0.71), fiber length (0.87), and fiber coarseness (0.78). An empirical model relating kR to particle density was developed.
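
    Rittinger's law states that the specific grinding energy is proportional to the new surface created, i.e. to the difference of the reciprocal characteristic particle sizes after and before grinding. A small sketch of applying it (the feed and product sizes and the energy value are illustrative, not measurements from this study) is:

      def rittinger_energy(k_r, x_feed_mm, x_product_mm):
          """Specific energy input predicted by Rittinger's law:
             E = k_R * (1/x_product - 1/x_feed)."""
          return k_r * (1.0 / x_product_mm - 1.0 / x_feed_mm)

      def rittinger_constant(energy, x_feed_mm, x_product_mm):
          """Back out k_R from a measured specific energy and the observed size reduction."""
          return energy / (1.0 / x_product_mm - 1.0 / x_feed_mm)

      # Illustrative: 12 mm chips ground to 1.5 mm particles at 25 kWh/t
      k_r = rittinger_constant(25.0, 12.0, 1.5)
      print(f"k_R = {k_r:.1f} kWh*mm/t; predicted E for a 0.8 mm product: "
            f"{rittinger_energy(k_r, 12.0, 0.8):.1f} kWh/t")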

  6. A High Input Impedance Low Noise Integrated Front-End Amplifier for Neural Monitoring.

    PubMed

    Zhou, Zhijun; Warr, Paul A

    2016-12-01

    Within neural monitoring systems, the front-end amplifier forms the critical element for signal detection and pre-processing, which determines not only the fidelity of the biosignal, but also impacts power consumption and detector size. In this paper, a novel combined feedback loop-controlled approach is proposed to compensate for input leakage currents generated by low noise amplifiers when in integrated circuit form alongside signal leakage into the input bias network. This loop topology ensures the Front-End Amplifier (FEA) maintains a high input impedance across all manufacturing and operational variations. Measured results from a prototype manufactured on the AMS 0.35 [Formula: see text] CMOS technology is provided. This FEA consumes 3.1 [Formula: see text] in 0.042 [Formula: see text], achieves input impedance of 42 [Formula: see text], and 18.2 [Formula: see text] input-referred noise.

  7. Saliency Detection on Light Field.

    PubMed

    Li, Nianyi; Ye, Jinwei; Ji, Yu; Ling, Haibin; Yu, Jingyi

    2017-08-01

    Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions. We explore the problem of using light fields as input for saliency detection. Our technique is enabled by the availability of commercial plenoptic cameras that capture the light field of a scene in a single shot. We show that the unique refocusing capability of light fields provides useful focusness, depths, and objectness cues. We further develop a new saliency detection algorithm tailored for light fields. To validate our approach, we acquire a light field database of a range of indoor and outdoor scenes and generate the ground truth saliency map. Experiments show that our saliency detection scheme can robustly handle challenging scenarios such as similar foreground and background, cluttered background, complex occlusions, etc., and achieve high accuracy and robustness.

  8. Behavior of farmers in regard to erosion by water as reflected by their farming practices.

    PubMed

    Auerswald, Karl; Fischer, Franziska K; Kistler, Michael; Treisch, Melanie; Maier, Harald; Brandhuber, Robert

    2018-02-01

    The interplay between natural site conditions and farming raises erosion by water above geological background levels. We examined the hypothesis that farmers take erosion into account in their farming decisions and switch to farming practices with lower erosion risk the higher the site-specific hazard becomes. Erosion since the last tillage was observed from aerial orthorectified photographs for 8100 fields belonging to 1879 farmers distributed across Bavaria (South Germany), and it was modeled by the Universal Soil Loss Equation using highly detailed input data (e.g., a digital terrain model with 5 × 5 m² resolution, rain data with 1 × 1 km² and 5 min resolution, and crop and cropping method from annual field-specific data from incentive schemes). Observed and predicted soil loss correlated closely, demonstrating the accuracy of this method. The close correlation also indicated that the farmers could easily observe the degree of recent erosion on their fields, even without modelling. Farmers clearly did not consider erosion in their decisions. When natural risk increased, e.g. due to steeper slopes, they neither grew crops with lower erosion potential, nor reduced field size, nor used contouring. In addition, they did not compensate for the cultivation of crops with higher erosion potential by using conservation techniques like mulch tillage or contouring, or by reducing field size. Only subsidized measures, like mulch tillage or organic farming, were applied, but only at the absolute minimum that was necessary to obtain subsidies. However, this did not achieve the reduction in erosion that would be possible if these measures had been fully applied. We conclude that subsidies may be an appropriate method of reducing erosion, but the present weak supervision, which assumes that farmers themselves will take erosion into account and that subsidies are only needed to compensate for any disadvantages caused by erosion-reducing measures, is clearly not justified. Copyright © 2017 Elsevier B.V. All rights reserved.
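
    The modelling step uses the Universal Soil Loss Equation, which predicts long-term average soil loss as the product of factors for rainfall erosivity, soil erodibility, slope length and steepness, cover management, and support practice. A minimal sketch (the factor values are illustrative, not the Bavarian field data used in the study) is:

      def usle_soil_loss(R, K, LS, C, P):
          """Universal Soil Loss Equation: A = R * K * LS * C * P, where A is soil loss,
          R rainfall erosivity, K soil erodibility, LS slope length-steepness,
          C cover management, and P support practice."""
          return R * K * LS * C * P

      # Illustrative field: row crop on a moderate slope, no conservation measures
      print(f"{usle_soil_loss(R=70.0, K=0.35, LS=1.4, C=0.25, P=1.0):.1f} t/ha/yr")

      # Same field with mulch tillage (lower C) and contouring (lower P)
      print(f"{usle_soil_loss(R=70.0, K=0.35, LS=1.4, C=0.10, P=0.6):.1f} t/ha/yr")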

  9. Potential Field Modeling at Global to Prospect Scales - Adding Value to the Geological, Seismic, Gravity, Magnetic and Rock Property Datasets of Australia

    NASA Astrophysics Data System (ADS)

    Lane, R. J. L.

    2015-12-01

    At Geoscience Australia, we are upgrading our gravity and magnetic modeling tools to provide new insights into the composition, properties, and structure of the subsurface. The scale of the investigations varies from the size of tectonic plates to the size of a mineral prospect. To accurately model potential field data at all of these scales, we require modeling software that can operate in both spherical and Cartesian coordinate frameworks. The models are in the form of a mesh, with spherical prismatic (tesseroid) elements for spherical coordinate models of large volumes, and rectangular prisms for smaller volumes evaluated in a Cartesian coordinate framework. The software can compute the forward response of supplied rock property models and can perform inversions using constraints that vary from weak generic smoothness through to very specific reference models compiled from various types of "hard facts" (i.e., surface mapping, drilling information, crustal seismic interpretations). To operate efficiently, the software is being specifically developed to make use of the resources of the National Computational Infrastructure (NCI) at the Australian National University (ANU). The development of these tools has been carried out in collaboration with researchers from the Colorado School of Mines (CSM) and the China University of Geosciences (CUG) and is at the stage of advanced testing. The creation of individual 3D geological models will provide immediate insights. Users will also be able to combine models, either by stitching them together or by nesting smaller and more detailed models within a larger model. Comparison of the potential field response of a composite model with the observed fields will give users a sense of how comprehensively these models account for the observations. Users will also be able to model the residual fields (i.e., the observed minus calculated response) to discover features that are not represented in the input composite model.

  10. Immunity of medical electrical equipment to radiated RF disturbances

    NASA Astrophysics Data System (ADS)

    Mocha, Jan; Wójcik, Dariusz; Surma, Maciej

    2018-04-01

    Immunity of medical equipment to radiated radio frequency (RF) electromagnetic (EM) fields is a priority issue owing to the functions that the equipment is intended to perform. This is reflected in increasingly stringent normative requirements that medical electrical equipment has to conform to. A new version of the standard concerning electromagnetic compatibility of medical electrical equipment, IEC 60601-1-2:2014, has recently been published. The paper discusses major changes introduced in this edition of the standard. The changes comprise more rigorous immunity requirements for medical equipment as regards radiated RF EM fields and a new requirement for testing the immunity of medical electrical equipment to disturbances coming from digital radio communication systems. The paper then presents two typical designs of the input block: one involving a multi-level filtering and amplification circuit, and one integrating an input amplifier and an analog-to-digital converter in a single circuit. Regardless of the applied solution, the presence of electromagnetic disturbances in the input block leads to demodulation of the disturbance signal envelope. The article elaborates on the mechanisms of amplitude detection occurring in such cases. The penetration of electromagnetic interference from the amplifier's input to its output is also described. If these phenomena are taken into account, engineers will be able to take a more deliberate approach to immunity to RF EM fields when designing input circuits in medical electrical equipment.

  11. Increased expression of c-fos in the medial preoptic area after mating in male rats: role of afferent inputs from the medial amygdala and midbrain central tegmental field.

    PubMed

    Baum, M J; Everitt, B J

    1992-10-01

    Immunocytochemical methods were used to localize the protein product of the immediate-early gene, c-fos, in male rats after exposure to, or direct physical interaction with, oestrous females. Increasing amounts of physical contact with a female, with resultant olfactory-vomeronasal and/or genital-somatosensory inputs, caused corresponding increments in c-fos expression in the medial preoptic area, the caudal part of the bed nucleus of the stria terminalis, the medial amygdala, and the midbrain central tegmental field. Males bearing unilateral electrothermal lesions of the olfactory peduncle showed a significant reduction in c-fos expression in the ipsilateral medial amygdala, but not in other structures, provided their coital interaction with oestrous females was restricted to mount-thrust and occasional intromissive patterns due to repeated application of lidocaine anaesthetic to the penis. No such lateralization of c-fos expression occurred in other males with unilateral olfactory lesions which were allowed to intromit and ejaculate with a female. These results suggest that olfactory inputs, possibly of vomeronasal origin, contribute to the activation of c-fos in the medial amygdala. However, lesion-induced deficits in this type of afferent input to the nervous system appear to be readily compensated for by the genital somatosensory input derived from repeated intromissions. Unilateral excitotoxic lesions of the medial preoptic area, made by infusing quinolinic acid, failed to reduce c-fos expression in the ipsilateral or contralateral medial amygdala or central tegmental field following ejaculation. By contrast, combined, unilateral excitotoxic lesions of the medial amygdala and the central tegmental field significantly reduced c-fos expression in the ipsilateral bed nucleus of the stria terminalis and medial preoptic area after mating; no such asymmetry in c-fos expression occurred when lesions were restricted to either the medial amygdala or central tegmental field. This suggests that afferent inputs from the central tegmental field (probably of genital-somatosensory origin) and from the medial amygdala (probably of olfactory-vomeronasal origin) interact to promote cellular activity, and the resultant induction of c-fos, in the ipsilateral bed nucleus of the stria terminalis and medial preoptic area. The monitoring of neuronal c-fos expression provides an effective means of studying the role of sensory factors in governing the activity of integrated neural structures which control the expression of a complex social behaviour.

  12. Continuous zoom antenna for mobile visible light communication.

    PubMed

    Zhang, Xuebin; Tang, Yi; Cui, Lu; Bai, Tingzhu

    2015-11-10

    In this paper, we design a continuous zoom antenna for mobile visible light communication (VLC). In the design, a right-angle reflecting prism was adopted to fold the space optical path, thus decreasing the antenna thickness. The surface of each lens in the antenna is spherical, and the system cost is relatively low. Simulation results indicated that the designed system achieved the following performance: zoom ratio of 2.44, field of view (FOV) range of 18°-48°, system gain of 16.8, and system size of 18 mm × 6 mm. Finally, we established an indoor VLC system model in a room the size of 5 m × 5 m × 3 m and compared the detection results of the zoom antenna and fixed-focus antenna obtained in a multisource communication environment, a mobile VLC environment, and a multiple-input multiple-output communication environment. The simulation results indicated that the continuous zoom antenna could realize large FOV and high gain. Moreover, the system showed improved stability, mobility, and environmental applicability.

  13. Preliminary sizing and performance of aircraft

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1985-01-01

    The basic processes of a program that performs sizing operations on a baseline aircraft and determines their subsequent effects on aerodynamics, propulsion, weights, and mission performance are described. Input requirements are defined and output listings explained. Results obtained by applying the method to several types of aircraft are discussed.

  14. Voltages induced on a power distribution line by overhead cloud lightning

    NASA Technical Reports Server (NTRS)

    Yacoub, Ziad; Rubinstein, Marcos; Uman, Martin A.; Thomson, Ewen M.; Medelius, Pedro J.

    1991-01-01

    Voltages induced by overhead cloud lightning on a 448 m open circuited power distribution line and the corresponding north-south component of the lightning magnetic field were simultaneously measured at the NASA Kennedy Space Center during the summer of 1986. The incident electric field was calculated from the measured magnetic field. The electric field was then used as an input to the computer program, EMPLIN, that calculated the voltages at the two ends of the power line. EMPLIN models the frequency domain field/power coupling theory found, for example, in Ianoz et al. The direction of the source, which is also one of the inputs to EMPLIN, was crudely determined from a three station time delay technique. The authors found reasonably good agreement between calculated and measured waveforms.

  15. Achieving nonlinear optical modulation via four-wave mixing in a four-level atomic system

    NASA Astrophysics Data System (ADS)

    Li, Hai-Chao; Ge, Guo-Qin; Zubairy, M. Suhail

    2018-05-01

    We propose an accessible scheme for implementing tunable nonlinear optical amplification and attenuation via a synergetic mechanism of four-wave mixing (FWM) and optical interference in a four-level ladder-type atomic system. By constructing a cyclic atom-field interaction, we show that two reverse FWM processes can coexist via optical transitions in different branches. Under suitable input-field conditions, strong interference effects between the input fields and the generated FWM fields can be induced and result in large amplification and deep attenuation of the output fields. Moreover, such an optical modulation from enhancement to suppression can be controlled by tuning the relative phase. The quantum system can serve as a switchable optical modulator with potential applications in quantum nonlinear optics.

  16. Large woody debris budgets in the Caspar Creek Experimental Watersheds

    Treesearch

    Sue Hilton

    2012-01-01

    Monitoring of large woody debris (LWD) in the two mainstem channels of the Caspar Creek Experimental Watersheds since 1998, combined with older data from other work in the watersheds, gives estimates of channel wood input rates, survival, and outputs in intermediate-sized channels in coastal redwood forests. Input rates from standing trees for the two reaches over a 15...

  17. Statistics & Input-Output Measures for School Libraries in Colorado, 2002.

    ERIC Educational Resources Information Center

    Colorado State Library, Denver.

    This document presents statistics and input-output measures for K-12 school libraries in Colorado for 2002. Data are presented by type and size of school, i.e., high schools (six categories ranging from 2,000 and over to under 300), junior high/middle schools (five categories ranging from 1,000-1,999 to under 300), elementary schools (four…

  18. Digital Control Technologies for Modular DC-DC Converters

    NASA Technical Reports Server (NTRS)

    Button, Robert M.; Kascak, Peter E.; Lebron-Velilla, Ramon

    2002-01-01

    Recent trends in aerospace Power Management and Distribution (PMAD) systems focus on using commercial off-the-shelf (COTS) components as standard building blocks. This move to more modular designs has been driven by a desire to reduce costs and development times, but is also due to the impressive power density and efficiency numbers achieved by today's commercial DC-DC converters. However, the PMAD designer quickly learns of the hidden "costs" of using COTS converters. The most significant cost is the required addition of external input filters to meet strict electromagnetic interference (EMI) requirements for space systems. In fact, the high power density numbers achieved by the commercial manufacturers are largely due to the necessary input filters not being included in the COTS module. The NASA Glenn Research Center is currently pursuing a digital control technology that addresses this problem with modular DC-DC converters. This paper presents the digital control technologies that have been developed to greatly reduce the input filter requirements for paralleled, modular DC-DC converters. Initial test results show that the input filter's inductor size was reduced by 75 percent, and the capacitor size was reduced by 94 percent, while maintaining the same power quality specifications.

  19. Preliminary Design Study of Medium Sized Gas Cooled Fast Reactor with Natural Uranium as Fuel Cycle Input

    NASA Astrophysics Data System (ADS)

    Meriyanti, Su'ud, Zaki; Rijal, K.; Zuhair, Ferhat, A.; Sekimoto, H.

    2010-06-01

    In this study, a feasibility design study of a medium sized (1000 MWt) gas cooled fast reactor that can utilize natural uranium as fuel cycle input has been conducted. The Gas Cooled Fast Reactor (GFR) is among the six types of Generation IV Nuclear Power Plants. The GFR, with its hard neutron spectrum, is well suited to a closed fuel cycle, and its ability to be operated at high temperature (850 °C) makes various utilization options possible. To obtain the capability of consuming natural uranium as fuel cycle input, a modified CANDLE burn-up scheme [1-6] is adopted in this GFR system by dividing the core into 10 parts of equal volume axially. Due to thermal-hydraulic limitations, the average power density of the proposed design is selected as about 70 W/cc. As an optimization result, a design of a 1000 MWt reactor that can be operated for 10 years without refueling or fuel shuffling and requires only natural uranium as fuel cycle input is discussed. The average discharge burn-up is about 280 GWd/ton HM. Enough margin for criticality was obtained for this reactor.

  20. Circuit for high resolution decoding of multi-anode microchannel array detectors

    NASA Technical Reports Server (NTRS)

    Kasle, David B. (Inventor)

    1995-01-01

    A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.

  1. Visual stimuli that elicit appetitive behaviors in three morphologically distinct species of praying mantis.

    PubMed

    Prete, Frederick R; Komito, Justin L; Dominguez, Salina; Svenson, Gavin; López, LeoLin Y; Guillen, Alex; Bogdanivich, Nicole

    2011-09-01

    We assessed the differences in appetitive responses to visual stimuli by three species of praying mantis (Insecta: Mantodea), Tenodera aridifolia sinensis, Mantis religiosa, and Cilnia humeralis. Tethered, adult females watched computer generated stimuli (erratically moving disks or linearly moving rectangles) that varied along predetermined parameters. Three responses were scored: tracking, approaching, and striking. Threshold stimulus size (diameter) for tracking and striking at disks ranged from 3.5 deg (C. humeralis) to 7.8 deg (M. religiosa), and from 3.3 deg (C. humeralis) to 11.7 deg (M. religiosa), respectively. Unlike the other species which struck at disks as large as 44 deg, T. a. sinensis displayed a preference for 14 deg disks. Disks moving at 143 deg/s were preferred by all species. M. religiosa exhibited the most approaching behavior, and with T. a. sinensis distinguished between rectangular stimuli moving parallel versus perpendicular to their long axes. C. humeralis did not make this distinction. Stimulus sizes that elicited the target behaviors were not related to mantis size. However, differences in compound eye morphology may be related to species differences: C. humeralis' eyes are farthest apart, and it has an apparently narrower binocular visual field which may affect retinal inputs to movement-sensitive visual interneurons.

  2. Evaluation of touch-sensitive screen tablet terminal button size and spacing accounting for effect of fingertip contact angle.

    PubMed

    Nishimura, T; Doi, K; Fujimoto, H

    2015-08-01

    Touch-sensitive screen terminals enabling intuitive operation are used as input interfaces in a wide range of fields. Tablet terminals are one of the most common devices with a touch-sensitive screen. They have a feature of good portability, enabling use under various conditions. On the other hand, they require a GUI designed to prevent decrease of usability under various conditions. For example, the angle of fingertip contact with the display changes according to finger posture during operation and how the case is held. When a human fingertip makes contact with an object, the contact area between the fingertip and contact object increases or decreases as the contact angle changes. A touch-sensitive screen detects positions using the change in capacitance of the area touched by the fingertip; hence, differences in contact area between the touch-sensitive screen and fingertip resulting from different forefinger angles during operation could possibly affect operability. However, this effect has never been studied. We therefore conducted an experiment to investigate the relationship between size/spacing and operability, while taking the effect of fingertip contact angle into account. As a result, we have been able to specify the button size and spacing conditions that enable accurate and fast operation regardless of the forefinger contact angle.

  3. Differential contribution of soil biota groups to plant litter decomposition as mediated by soil use

    PubMed Central

    Falco, Liliana B.; Sandler, Rosana V.; Coviella, Carlos E.

    2015-01-01

    Plant decomposition is dependent on the activity of the soil biota and its interactions with climate, soil properties, and plant residue inputs. This work assessed the roles of different groups of the soil biota in litter decomposition, and the way they are modulated by soil use. Litterbags of different mesh sizes for the selective exclusion of soil fauna by size (macro, meso, and microfauna) were filled with standardized dried leaves and placed on the same soil under different use intensities: naturalized grasslands, recent agriculture, and intensive agriculture fields. During five months, litterbags of each mesh size were collected once a month per system with five replicates. The remaining mass was measured and decomposition rates calculated. Differences were found for the different biota groups, and they were dependent on soil use. Within systems, the results show that in the naturalized grasslands, the macrofauna had the highest contribution to decomposition. In the recent agricultural system it was the combined activity of the macro- and mesofauna, and in the intensive agricultural use it was the mesofauna activity. These results underscore the relative importance and activity of the different groups of the edaphic biota and the effects of different soil uses on soil biota activity. PMID:25780777

  4. Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.

    PubMed

    Singh, Raj Mohan

    2008-04-01

    Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides, and herbicides, and transports them into nearby streams, a matter of serious concern for water managers and environmental researchers. The application of chemicals to agricultural fields and the transport of these chemicals into streams are uncertain processes, which complicates reliable stream quality prediction. The chemical characteristics of the applied chemical and the percentage of area under chemical application are some of the main inputs that determine the pollution concentration in streams, and each of these inputs and outputs may contain measurement errors. A fuzzy rule based model built on fuzzy sets is well suited to addressing uncertainty in the inputs by incorporating overlapping membership functions for each input, even when data availability is limited. In this study, this property of fuzzy sets is used to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, part of the Mississippi river system, are used to develop the fuzzy rule based models. The performance of the developed methodology is found to be encouraging.
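
    To make the fuzzy rule idea concrete, the following is a minimal sketch of a Mamdani-style rule system that maps two uncertain inputs (fraction of the catchment under application and relative application intensity) to a crisp in-stream concentration estimate. All membership breakpoints, rules, and output levels are hypothetical placeholders, not the ones calibrated on the White River basin data.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_concentration(area_fraction, application_rate):
    """Tiny two-input Mamdani-style estimate (hypothetical parameters).

    area_fraction    : fraction of catchment under atrazine application (0-1)
    application_rate : relative application intensity (0-1)
    Returns a crisp concentration estimate in arbitrary units.
    """
    # Fuzzify the inputs with overlapping 'low'/'high' sets.
    low_area, high_area = tri(area_fraction, -0.5, 0.0, 0.6), tri(area_fraction, 0.4, 1.0, 1.5)
    low_rate, high_rate = tri(application_rate, -0.5, 0.0, 0.6), tri(application_rate, 0.4, 1.0, 1.5)

    # Rules: IF area is high AND rate is high THEN concentration is high, and so on.
    w_high = min(high_area, high_rate)
    w_med = max(min(high_area, low_rate), min(low_area, high_rate))
    w_low = min(low_area, low_rate)

    # Defuzzify by a weighted average of representative output levels (arbitrary units).
    levels = {"low": 1.0, "med": 5.0, "high": 12.0}
    total = (w_low + w_med + w_high) or 1.0
    return (w_low * levels["low"] + w_med * levels["med"] + w_high * levels["high"]) / total

print(estimate_concentration(0.7, 0.8))   # mostly the 'high' rule fires
print(estimate_concentration(0.2, 0.1))   # mostly the 'low' rule fires
```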

  5. Nightside electron precipitation at Mars: Geographic variability and dependence on solar wind conditions

    NASA Astrophysics Data System (ADS)

    Lillis, Robert J.; Brain, David A.

    2013-06-01

    Electron precipitation is usually the dominant source of energy input to the nightside Martian atmosphere, with consequences for ionospheric densities, chemistry, electrodynamics, communications, and navigation. We examine downward-traveling superthermal electron flux on the Martian nightside from May 1999 to November 2006 at 400 km altitude and 2 A.M. local time. Electron precipitation is geographically organized by crustal magnetic field strength and elevation angle, with higher fluxes occurring in regions of weak and/or primarily vertical crustal fields, while stronger and more horizontal fields retard electron access to the atmosphere. We investigate how these crustal field-organized precipitation patterns vary with proxies for solar wind (SW) pressure and interplanetary magnetic field (IMF) direction. Generally, higher precipitating fluxes accompany higher SW pressures. Specifically, we identify four characteristic spectral behaviors: (1) "stable" regions where fluxes increase mildly with SW pressure, (2) "high-flux" regions where accelerated (peaked) spectra are more common and where fluxes below ~500 eV are largely independent of SW pressure, (3) permanent plasma voids, and (4) intermittent plasma voids where fluxes depend strongly on SW pressure. The locations, sizes, shapes, and absence/existence of these plasma voids vary significantly with solar wind pressure proxy and moderately with IMF proxy direction; average precipitating fluxes are 40% lower in strong crustal field regions and 15% lower globally for approximately southwest proxy directions compared with approximately northeast directions. This variation of the strength and geographic pattern of the shielding effect of Mars' crustal fields exemplifies the complex interaction between those fields and the solar wind.

  6. Assessing the effect of elevated carbon dioxide on soil carbon: a comparison of four meta-analyses.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungate, B. A.; van Groenigen, K.; Six, J.

    2009-08-01

    Soil is the largest reservoir of organic carbon (C) in the terrestrial biosphere and soil C has a relatively long mean residence time. Rising atmospheric carbon dioxide (CO2) concentrations generally increase plant growth and C input to soil, suggesting that soil might help mitigate atmospheric CO2 rise and global warming. But to what extent mitigation will occur is unclear. The large size of the soil C pool not only makes it a potential buffer against rising atmospheric CO2, but also makes it difficult to measure changes amid the existing background. Meta-analysis is one tool that can overcome the limited power of single studies. Four recent meta-analyses addressed this issue but reached somewhat different conclusions about the effect of elevated CO2 on soil C accumulation, especially regarding the role of nitrogen (N) inputs. Here, we assess the extent of differences between these conclusions and propose a new analysis of the data. The four meta-analyses included different studies, derived different effect size estimates from common studies, used different weighting functions and metrics of effect size, and used different approaches to address nonindependence of effect sizes. Although all factors influenced the mean effect size estimates and subsequent inferences, the approach to independence had the largest influence. We recommend that meta-analysts critically assess and report choices about effect size metrics and weighting functions, and criteria for study selection and independence. Such decisions need to be justified carefully because they affect the basis for inference. Our new analysis, with a combined data set, confirms that the effect of elevated CO2 on net soil C accumulation increases with the addition of N fertilizers. Although the effect at low N inputs was not significant, statistical power to detect biogeochemically important effect sizes at low N is limited, even with meta-analysis, suggesting the continued need for long-term experiments.
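
    The abstract notes that the choice of effect size metric and weighting function changed the meta-analytic conclusions. A minimal sketch of the underlying calculation, an inverse-variance-weighted mean of log response ratios compared with an unweighted mean, is given below; all study values are invented for illustration.

```python
import numpy as np

# Hypothetical per-study soil C responses: log response ratio (elevated/ambient CO2)
# and its sampling variance. Real meta-analyses derive these from reported means,
# standard deviations, and sample sizes.
lrr = np.array([0.02, 0.10, -0.01, 0.15, 0.05])
var = np.array([0.001, 0.020, 0.002, 0.050, 0.004])

unweighted = lrr.mean()

# Fixed-effect, inverse-variance weighting (one common weighting function).
w = 1.0 / var
weighted = np.sum(w * lrr) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))

print(f"unweighted mean lnRR = {unweighted:.4f}")
print(f"weighted mean lnRR   = {weighted:.4f} +/- {1.96 * se:.4f} (95% CI half-width)")
print(f"percent change (wtd) = {100 * (np.exp(weighted) - 1):.2f}%")
```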

  7. Frequency Response Calculations of Input Characteristics of Cavity-Backed Aperture Antennas Using AWE with Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.; Deshpande, M. D.

    1997-01-01

    Application of Asymptotic Waveform Evaluation (AWE) is presented in conjunction with a hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique to calculate the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of the cavity-backed aperture antenna. The electric field, thus obtained, is expanded in a Taylor series around the frequency of interest. The coefficients of the Taylor series (called 'moments') are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the moments, the electric field in the cavity is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency band. Numerical results for an open coaxial line, probe-fed cavity, and cavity-backed microstrip patch antennas are presented. Good agreement between AWE and the exact solution over the frequency range is observed.
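
    The AWE idea of expanding the frequency-dependent solution in a Taylor series about one expansion frequency can be shown on a toy problem. The damped resonance below stands in for the FEM/MoM field solution and is purely illustrative; in the actual method the moments come from frequency derivatives of the full system equation rather than from the closed-form recursion used here.

```python
import numpy as np

# Toy frequency-dependent "field solution": a damped resonance x(w) = 1 / p(w),
# standing in for the FEM/MoM system response (illustrative only).
w0, gamma = 2 * np.pi * 1.0e9, 2 * np.pi * 1.0e8      # resonance and damping (rad/s)
p = lambda w: w0**2 - w**2 + 1j * gamma * w            # "system" polynomial
x = lambda w: 1.0 / p(w)                               # exact solution

# Taylor "moments" of x about an expansion frequency w_exp. Because p is quadratic,
# p(w_exp + d) = a0 + a1*d + a2*d**2, and the Taylor coefficients c_n of 1/p follow
# from the series-inversion recursion below.
w_exp = 2 * np.pi * 0.9e9
a0, a1, a2 = p(w_exp), -2 * w_exp + 1j * gamma, -1.0
n_moments = 8
c = [1.0 / a0]
for n in range(1, n_moments):
    c.append(-(a1 * c[n - 1] + (a2 * c[n - 2] if n >= 2 else 0.0)) / a0)

# Sweep a band with the moment expansion and compare against the exact solution.
for w in np.linspace(0.85e9, 0.95e9, 5) * 2 * np.pi:
    d = w - w_exp
    approx = sum(cn * d**k for k, cn in enumerate(c))
    print(f"f = {w / 2 / np.pi / 1e9:.3f} GHz  |exact| = {abs(x(w)):.3e}  |AWE| = {abs(approx):.3e}")
```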

  8. Application of Model Based Parameter Estimation for Fast Frequency Response Calculations of Input Characteristics of Cavity-Backed Aperture Antennas Using Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy C. J.

    1998-01-01

    Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded as a rational function, a ratio of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, probe-fed coaxial cavity, and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and the solutions computed at individual frequencies is observed.
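
    MBPE replaces the Taylor expansion of the previous record with a rational (ratio-of-polynomials) model, which usually covers a wider band for the same amount of frequency data. The sketch below fits a low-order rational model to a few samples of a toy two-resonance response with a linear least-squares solve; MBPE can equally be driven by frequency derivatives, and the toy system, model orders, and sample points are illustrative only.

```python
import numpy as np

# Toy antenna-like response: two damped resonances standing in for the FEM/MoM
# solution (illustrative only, not from the paper).
def x_exact(w):
    p1 = (2 * np.pi * 0.95e9) ** 2 - w**2 + 1j * (2 * np.pi * 6e7) * w
    p2 = (2 * np.pi * 1.10e9) ** 2 - w**2 + 1j * (2 * np.pi * 8e7) * w
    return 1.0 / p1 + 1.0 / p2

# Fit x(w) ~ P(t)/Q(t) with deg P = deg Q = 2 and Q's constant term fixed at 1,
# from a handful of samples: P(t_i) - x_i*(q1*t_i + q2*t_i^2) = x_i.
f_fit = np.linspace(0.9e9, 1.15e9, 7)
w_fit = 2 * np.pi * f_fit
x_fit = x_exact(w_fit)

# Center and scale the frequency variable for numerical conditioning.
s = (w_fit - w_fit.mean()) / (w_fit.max() - w_fit.min())
A = np.column_stack([np.ones_like(s), s, s**2, -x_fit * s, -x_fit * s**2])
coef, *_ = np.linalg.lstsq(A, x_fit, rcond=None)
p0, p1c, p2c, q1, q2 = coef

def x_mbpe(w):
    t = (w - w_fit.mean()) / (w_fit.max() - w_fit.min())
    return (p0 + p1c * t + p2c * t**2) / (1.0 + q1 * t + q2 * t**2)

for f_ghz in (0.85, 0.95, 1.05, 1.20):     # includes points outside the fit band
    w = 2 * np.pi * f_ghz * 1e9
    print(f"f = {f_ghz:.2f} GHz  |exact| = {abs(x_exact(w)):.3e}  |MBPE| = {abs(x_mbpe(w)):.3e}")
```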

  9. Computer-Drawn Field Lines and Potential Surfaces for a Wide Range of Field Configurations

    ERIC Educational Resources Information Center

    Brandt, Siegmund; Schneider, Hermann

    1976-01-01

    Describes a computer program that computes field lines and equipotential surfaces for a wide range of field configurations. Presents the mathematical technique and details of the program, the input data, and different modes of graphical representation. (MLH)
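
    The record gives no implementation details, so the following is only a generic sketch of how field lines of a point-charge configuration can be traced by stepping along the local field direction; the charge layout, step size, and stopping rule are all hypothetical.

```python
import numpy as np

# Two point charges (a dipole-like pair); positions in metres, charges in arbitrary units.
charges = [(+1.0, np.array([-0.5, 0.0])), (-1.0, np.array([+0.5, 0.0]))]

def e_field(p):
    """2-D electrostatic field at point p (Coulomb constant dropped; units arbitrary)."""
    e = np.zeros(2)
    for q, pos in charges:
        r = p - pos
        e += q * r / np.linalg.norm(r) ** 3
    return e

def trace_field_line(start, step=0.01, n_steps=2000):
    """Follow the normalized field direction from a seed point (simple Euler stepping)."""
    pts = [np.array(start, dtype=float)]
    for _ in range(n_steps):
        e = e_field(pts[-1])
        norm = np.linalg.norm(e)
        if norm < 1e-12:
            break
        nxt = pts[-1] + step * e / norm
        pts.append(nxt)
        # Stop when the line terminates on the negative charge.
        if np.linalg.norm(nxt - charges[1][1]) < 0.02:
            break
    return np.array(pts)

# Seed a fan of lines around the positive charge and report where each one ends.
for angle in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    seed = charges[0][1] + 0.05 * np.array([np.cos(angle), np.sin(angle)])
    line = trace_field_line(seed)
    print(f"seed angle {np.degrees(angle):5.1f} deg -> {len(line)} points, end {line[-1].round(3)}")
```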

  10. Syringe-Injectable Electronics with a Plug-and-Play Input/Output Interface.

    PubMed

    Schuhmann, Thomas G; Yao, Jun; Hong, Guosong; Fu, Tian-Ming; Lieber, Charles M

    2017-09-13

    Syringe-injectable mesh electronics represent a new paradigm for brain science and neural prosthetics by virtue of the stable seamless integration of the electronics with neural tissues, a consequence of the macroporous mesh electronics structure with all size features similar to or less than individual neurons and tissue-like flexibility. These same properties, however, make input/output (I/O) connection to measurement electronics challenging, and work to-date has required methods that could be difficult to implement by the life sciences community. Here we present a new syringe-injectable mesh electronics design with plug-and-play I/O interfacing that is rapid, scalable, and user-friendly to nonexperts. The basic design tapers the ultraflexible mesh electronics to a narrow stem that routes all of the device/electrode interconnects to I/O pads that are inserted into a standard zero insertion force (ZIF) connector. Studies show that the entire plug-and-play mesh electronics can be delivered through capillary needles with precise targeting using microliter-scale injection volumes similar to the standard mesh electronics design. Electrical characterization of mesh electronics containing platinum (Pt) electrodes and silicon (Si) nanowire field-effect transistors (NW-FETs) demonstrates the ability to interface arbitrary devices with a contact resistance of only 3 Ω. Finally, in vivo injection into mice required only minutes for I/O connection and yielded expected local field potential (LFP) recordings from a compact head-stage compatible with chronic studies. Our results substantially lower barriers for use by new investigators and open the door for increasingly sophisticated and multifunctional mesh electronics designs for both basic and translational studies.

  11. A four mirror anastigmat collimator design for optical payload calibration

    NASA Astrophysics Data System (ADS)

    Rolt, Stephen; Calcines, Ariadna; Lomanowski, Bart A.; Bramall, David G.

    2016-07-01

    We present here a four mirror anastigmatic optical collimator design intended for the calibration of an earth observation satellite instrument. Specifically, the collimator is to be applied to the ground based calibration of the Sentinel-4/UVN instrument. This imaging spectrometer instrument itself is expected to be deployed in 2019 in a geostationary orbit and will make spatially resolved spectroscopic measurements of atmospheric contaminants. The collimator is to be deployed during the ground based calibration only and does not form part of the instrument itself. The purpose of the collimator is to provide collimated light within the two instrument passbands in the UV-VIS (305 - 500 nm) and the NIR (750 - 775 nm). Moreover, that collimated light will be derived from a variety of slit like objects located at the input focal (object) plane of the collimator which is uniformly illuminated by a number of light sources. The collimator must relay these objects with exceptionally high fidelity. To this end, the wavefront error of the collimator should be less than 30 nm rms across the collimator field of view. This field is determined by the largest object which is a large rectangular slit, 4.4° x 0.25°. Other important considerations affecting the optical design are the requirements for input telecentricity and the size (85 mm) and location (2500 mm `back focal distance') of the exit pupil. The design of the instrument against these basic requirements is discussed in detail. In addition an analysis of the straylight and tolerancing is presented in detail.

  12. Magnetic current sensor

    NASA Technical Reports Server (NTRS)

    Black, Jr., William C. (Inventor); Hermann, Theodore M. (Inventor)

    1998-01-01

    A current determiner is described having an output at which representations of input currents are provided, an input conductor for the input current, and a current sensor supported on a substrate, the conductor and sensor being electrically isolated from one another but with the sensor positioned in the magnetic fields arising about the input conductor due to any input currents. The sensor extends along the substrate in a direction primarily perpendicular to the extent of the input conductor and is formed of at least a pair of thin-film ferromagnetic layers separated by a non-magnetic conductive layer. The sensor can be electrically connected to electronic circuitry formed in the substrate, including a nonlinearity adaptation circuit to provide representations of the input currents of increased accuracy despite nonlinearities in the current sensor, and can include further current sensors in bridge circuits.

  13. Focal ratio degradation in lightly fused hexabundles

    NASA Astrophysics Data System (ADS)

    Bryant, J. J.; Bland-Hawthorn, J.; Fogarty, L. M. R.; Lawrence, J. S.; Croom, S. M.

    2014-02-01

    We are now moving into an era where multi-object wide-field surveys, which traditionally use single fibres to observe many targets simultaneously, can exploit compact integral field units (IFUs) in place of single fibres. Current multi-object integral field instruments such as Sydney-AAO Multi-object Integral field spectrograph have driven the development of new imaging fibre bundles (hexabundles) for multi-object spectrographs. We have characterized the performance of hexabundles with different cladding thicknesses and compared them to that of the same type of bare fibre, across the range of fill fractions and input f-ratios likely in an IFU instrument. Hexabundles with 7-cores and 61-cores were tested for focal ratio degradation (FRD), throughput and cross-talk when fed with inputs from F/3.4 to >F/8. The five 7-core bundles have cladding thickness ranging from 1 to 8 μm, and the 61-core bundles have 5 μm cladding. As expected, the FRD improves as the input focal ratio decreases. We find that the FRD and throughput of the cores in the hexabundles match the performance of single fibres of the same material at low input f-ratios. The performance results presented can be used to set a limit on the f-ratio of a system based on the maximum loss allowable for a planned instrument. Our results confirm that hexabundles are a successful alternative for fibre imaging devices for multi-object spectroscopy on wide-field telescopes and have prompted further development of hexabundle designs with hexagonal packing and square cores.

  14. Measurement of the modulation transfer function of x-ray scintillators via heterodyne speckles (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Manfredda, Michele; Giglio, Marzio

    2016-09-01

    The approach can be seen as the optical transposition of what is done in electronics, where a system is fed with white noise (the input signal autocorrelation is a Dirac delta) and the autocorrelation of the output signal is then taken, yielding the Point Spread Function (PSF) of the system (whose Fourier transform is the MTF). In the realm of optics, the tricky task consists in the generation and handling of a suitable random noise, which must be produced via scattering. Ideally, pure 2D white noise (a random superposition of sinusoidal intensity modulations at all spatial frequencies in all directions) would be produced by ideal point-like scatterers illuminated with completely coherent radiation: interference between scattered waves would generate high-frequency fringes, realizing the sought noise signal. In practice, limited scatterer size and limited coherence of the radiation restrict the spatial bandwidth of the illuminating field. Whereas information about the particle-size effect can be promptly obtained from the form factor of the sample used, which is very well known in the case of spherical particles, information about beam coherence is usually not known with adequate accuracy, especially at x-ray wavelengths. In the particular configuration used, speckles are produced by interfering the scattered waves with the strong transmitted beam (heterodyne speckles), in contrast to the more common case where speckles are produced by the mutual interference between scattered waves, without any transmitted beam acting as local oscillator (homodyne speckles). In the end, the use of a heterodyne speckle field, thanks to its self-referencing scheme, makes it possible to gather, at a fixed distance, response curves spanning a wide range of wavevectors. By combining the information from curves acquired at a few distances (e.g., 2-3), it is possible to experimentally separate the contribution of spurious effects (such as limited coherence) and to identify the spectral component, due to the response of the test system, that is responsible for the broadening of the optical input signal.
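
    A quick one-dimensional numerical check of the white-noise argument above: when white noise is passed through a blurring system, the power spectrum of the output (the Fourier transform of its autocorrelation) is proportional to the squared MTF. The Gaussian PSF and all sizes in the sketch are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**14
x = rng.standard_normal(n)                     # white-noise "input speckle" signal

# Hypothetical blurring system: Gaussian PSF with sigma = 4 samples.
sigma = 4.0
k = np.arange(-32, 33)
psf = np.exp(-0.5 * (k / sigma) ** 2)
psf /= psf.sum()
y = np.convolve(x, psf, mode="same")           # detected signal

# Output power spectrum (the FT of the output autocorrelation, by Wiener-Khinchin)
# versus the squared MTF of the assumed PSF.
out_ps = np.abs(np.fft.rfft(y)) ** 2
mtf2 = np.abs(np.fft.rfft(psf, n)) ** 2

# Compare after mild spectral binning (to tame the chi-squared scatter of the noise),
# each curve normalized to its own low-frequency value.
bins = out_ps[:4000].reshape(-1, 100).mean(axis=1)
ref = mtf2[:4000].reshape(-1, 100).mean(axis=1)
bins, ref = bins / bins[0], ref / ref[0]
for b, r in list(zip(bins, ref))[:8]:
    print(f"measured {b:.3f}   expected {r:.3f}")
```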

  15. Three dimensional radiation fields in free electron lasers using Lienard-Wiechert fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elias, L.R.; Gallardo, J.

    1981-10-28

    In a free electron laser a relativistic electron beam is bunched under the action of the ponderomotive potential and is forced to radiate in close phase with the input wave. Until recently, most theories of the FEL have dealt solely with electron beams of infinite transverse dimension radiating only one-dimensional E.M. waves (plane waves). Although these theories describe accurately the dynamics of the electrons during the FEL interaction process, neither the three dimensional nature of the radiated fields nor its non-monochromatic features can be properly studied by them. As a result of this, very important practical issues such as the gain per gaussian-spherical optical mode in a free electron laser have not been well addressed, except through a one dimensional field model in which a filling factor describes crudely the coupling of the FEL induced field to the input field.

  16. SU-E-T-586: Field Size Dependence of Output Factor for Uniform Scanning Proton Beams: A Comparison of TPS Calculation, Measurement and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Y; Singh, H; Islam, M

    2014-06-01

    Purpose: Output dependence on field size for uniform scanning beams, and the accuracy of treatment planning system (TPS) calculation, are not well studied. The purpose of this work is to investigate the dependence of output on field size for uniform scanning beams and compare it among TPS calculation, measurements, and Monte Carlo simulations. Methods: Field size dependence was studied using various field sizes between 2.5 cm and 10 cm in diameter. The field size factor was studied for a number of proton range and modulation combinations based on output at the center of the spread out Bragg peak normalized to a 10 cm diameter field. Three methods were used and compared in this study: 1) TPS calculation, 2) ionization chamber measurement, and 3) Monte Carlo simulation. The XiO TPS (Elekta, St. Louis) was used to calculate the output factor using a pencil beam algorithm; a pinpoint ionization chamber was used for measurements; and the Fluka code was used for Monte Carlo simulations. Results: The field size factor varied with proton beam parameters, such as range, modulation, and calibration depth, and could decrease over 10% from a 10 cm to 3 cm diameter field for a large range proton beam. The XiO TPS predicted the field size factor relatively well at large field size, but could differ from measurements by 5% or more for small field and large range beams. Monte Carlo simulations predicted the field size factor within 1.5% of measurements. Conclusion: The output factor can vary largely with field size, and needs to be accounted for to ensure accurate proton beam delivery. This is especially important for small field beams such as in stereotactic proton therapy, where the field size dependence is large and TPS calculation is inaccurate. Measurements or Monte Carlo simulations are recommended for output determination in such cases.

  17. Low noise tuned amplifier

    NASA Technical Reports Server (NTRS)

    Kleinberg, L. L. (Inventor)

    1984-01-01

    A bandpass amplifier employing a field effect transistor amplifier first stage is described with a resistive load either a.c. or directly coupled to the non-inverting input of an operational amplifier second stage which is loaded in a Wien Bridge configuration. The bandpass amplifier may be operated with a signal injected into the gate terminal of the field effect transistor and the signal output taken from the output terminal of the operational amplifier. The operational amplifier stage appears as an inductive reactance, capacitive reactance and negative resistance at the non-inverting input of the operational amplifier, all of which appear in parallel with the resistive load of the field effect transistor.

  18. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  19. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, also taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters that minimizes the average grain size. Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. At the end, the minimum value of the average grain size with respect to the input parameters has been found.
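
    To make the RSM-plus-GA workflow concrete, the sketch below fits a quadratic response surface to a handful of (mandrel feed rate, roll speed, grain size) design points and then lets a tiny genetic algorithm search the surface for the combination minimizing the predicted grain size. The design points, bounds, and GA settings are invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design points from FE runs: (feed rate mm/s, roll speed rad/s) -> grain size (um).
X = np.array([[0.5, 2.0], [0.5, 4.0], [1.0, 3.0], [1.5, 2.0], [1.5, 4.0], [1.0, 2.0], [1.0, 4.0]])
y = np.array([62.0, 55.0, 48.0, 58.0, 52.0, 57.0, 50.0])

# Fit a full quadratic response surface: 1, x1, x2, x1^2, x2^2, x1*x2.
def basis(p):
    x1, x2 = p[..., 0], p[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
predict = lambda p: basis(p) @ coef

# Tiny genetic algorithm over the design space bounds.
lo, hi = np.array([0.5, 2.0]), np.array([1.5, 4.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(60):
    fitness = predict(pop)
    parents = pop[np.argsort(fitness)[:10]]                  # keep the 10 best (lowest grain size)
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.05, (40, 2))
    pop = np.clip(children, lo, hi)

best = pop[np.argmin(predict(pop))]
print(f"best feed rate {best[0]:.2f} mm/s, roll speed {best[1]:.2f} rad/s, "
      f"predicted grain size {predict(best):.1f} um")
```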

  20. A Markov model for the temporal dynamics of balanced random networks of finite size

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
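
    A minimal simulation sketch of the two-state (active/refractory) Markov picture described above, for one excitatory and one inhibitory population; the transition-rate function and all rate constants are placeholders rather than the dependencies identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 1e-4, 20000                      # 0.1 ms resolution, 2 s of simulated time
N_e, N_i = 800, 200                          # population sizes

# State vectors: True = active (can spike), False = refractory.
active_e = np.ones(N_e, dtype=bool)
active_i = np.ones(N_i, dtype=bool)

def spike_rate(exc_frac, inh_frac, base=5.0, g_e=40.0, g_i=30.0):
    """Hypothetical active->refractory (spiking) rate driven by population activity."""
    return np.clip(base + g_e * exc_frac - g_i * inh_frac, 0.0, 500.0)   # spikes/s

recovery_rate = 200.0                        # refractory -> active rate (1/s)

counts_e, counts_i = [], []
for _ in range(steps):
    r = spike_rate(active_e.mean(), active_i.mean())
    # Active units spike (and become refractory) with probability r*dt; refractory units recover.
    spike_e = active_e & (rng.random(N_e) < r * dt)
    spike_i = active_i & (rng.random(N_i) < r * dt)
    recover_e = ~active_e & (rng.random(N_e) < recovery_rate * dt)
    recover_i = ~active_i & (rng.random(N_i) < recovery_rate * dt)
    active_e = (active_e & ~spike_e) | recover_e
    active_i = (active_i & ~spike_i) | recover_i
    counts_e.append(spike_e.sum())
    counts_i.append(spike_i.sum())

counts_e, counts_i = np.array(counts_e), np.array(counts_i)
print("mean E rate (spikes/s per neuron):", counts_e.sum() / (N_e * steps * dt))
print("E-I spike-count correlation:", np.corrcoef(counts_e, counts_i)[0, 1])
```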

  1. 40 CFR 52.129 - Review of new sources and modifications.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... g/100 stdm3); has a heat input of not more than 1 MBtu/h (250 Mg-cal/h) and burns only distillate oil; or has a heat input of not more than 350,000 Btu/h (88.2 Mg-cal/h) and burns any other fuel. (iv... the source to be provided with: (i) Sampling ports of a size, number, and location as the...

  2. 40 CFR 52.129 - Review of new sources and modifications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... g/100 stdm3); has a heat input of not more than 1 MBtu/h (250 Mg-cal/h) and burns only distillate oil; or has a heat input of not more than 350,000 Btu/h (88.2 Mg-cal/h) and burns any other fuel. (iv... the source to be provided with: (i) Sampling ports of a size, number, and location as the...

  3. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present empirical results to show that the performance degradation follows a logistic function.
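
    As an illustration of metric-driven graph sampling of the kind described above, the sketch below keeps the top-ranked nodes under a PageRank, triangle-count, or degree metric and returns the induced subgraph; the toy graph, sampling fraction, and reported statistics are illustrative rather than the paper's exact procedure.

```python
import networkx as nx

def sample_by_metric(G, fraction=0.2, metric="triangles"):
    """Keep the top `fraction` of nodes under the chosen metric and induce a subgraph."""
    if metric == "pagerank":
        scores = nx.pagerank(G)
    elif metric == "triangles":
        scores = nx.triangles(G)           # per-node triangle counts (undirected graphs)
    else:
        scores = dict(G.degree())
    k = max(1, int(fraction * G.number_of_nodes()))
    keep = sorted(scores, key=scores.get, reverse=True)[:k]
    return G.subgraph(keep).copy()

# Toy input graph standing in for a large heterogeneous graph.
G = nx.powerlaw_cluster_graph(n=2000, m=3, p=0.3, seed=7)

for metric in ("degree", "pagerank", "triangles"):
    S = sample_by_metric(G, fraction=0.2, metric=metric)
    print(f"{metric:9s}: {S.number_of_nodes()} nodes, {S.number_of_edges()} edges, "
          f"avg clustering {nx.average_clustering(S):.3f}")
```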

  4. A Simulation Model Of A Picture Archival And Communication System

    NASA Astrophysics Data System (ADS)

    D'Silva, Vijay; Perros, Harry; Stockbridge, Chris

    1988-06-01

    A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated, a high-speed magnetic disk system for short term storage, and optical disk jukeboxes for long term storage. The communications link was a single bus via which image data were requested and delivered. Real input data to the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these, the following inputs were calculated: the size of short term storage necessary, the amount of long term storage required, the frequency of access of each store, and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, compression ratio, influx of new images, DBMS time, and diagnosis think times. Plots give the optimum values of input speed and device performance which are sufficient to achieve subsecond image retrieval times.

  5. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
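
    The record does not reproduce the program's equations, so the snippet below only illustrates the general idea of turning a target coefficient of variation into a required sample size, using the simple binomial approximation Var(S_hat) = S(1-S)/n; the actual banding-study calculations also involve band recovery and reporting rates and will give different numbers.

```python
import math

def binomial_sample_size(survival, target_cv):
    """Birds needed so a simple binomial survival estimate reaches the target CV.

    Assumes Var(S_hat) = S*(1-S)/n, so CV = sqrt((1-S)/(S*n)) and
    n = (1-S) / (S * CV^2). This is a generic illustration only; it ignores
    the band recovery and reporting rates handled by the actual program.
    """
    return math.ceil((1.0 - survival) / (survival * target_cv**2))

for s in (0.4, 0.6, 0.8):          # hypothetical annual survival rates
    for cv in (0.05, 0.10):        # desired precision of the annual estimate
        print(f"S = {s:.1f}, target CV = {cv:.2f} -> n = {binomial_sample_size(s, cv)} banded birds")
```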

  6. Display and device size effects on the usability of mini-notebooks (netbooks)/ultraportables as small form-factor Mobile PCs.

    PubMed

    Lai, Chih-Chun; Wu, Chih-Fu

    2014-07-01

    A balance between portability and usability made 10.1″ diagonal screens popular in the Mobile PC market (e.g., 10.1″ mini-notebooks/netbooks, convertible/hybrid ultraportables); yet no academic research rationalizes this phenomenon. This study investigated the effects of the display and input device sizes of 4 mini-notebooks (netbooks) of different sizes on their performance in 2 simple and 3 complex applied tasks. The closer the display and/or input device (touchpad/touchscreen/keyboard) sizes were to those of a generic notebook, the shorter the operation times (no consistent pattern was found for the error rates). With non-significant differences, the 10.1″ and 8.9″ mini-notebooks (netbooks) were as fast as the 11.6″ one in almost all the tasks, except for the 8.9″ one in the typing tasks. The 11.6″ mini-notebook (netbook) was most preferred; the difference in satisfaction was not significant between the 10.1″ and 11.6″ ones but was significant between the 7″ and 11.6″ ones. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  7. Large space structures control algorithm characterization

    NASA Technical Reports Server (NTRS)

    Fogel, E.

    1983-01-01

    Feedback control algorithms are developed for sensor/actuator pairs on large space systems. These algorithms have been sized in terms of (1) floating point operation (FLOP) demands; (2) storage for variables; and (3) input/output data flow. FLOP sizing (per control cycle) was done as a function of the number of control states and the number of sensor/actuator pairs. Storage for variables and I/O sizing was done for specific structure examples.

  8. Conterminous United States Crop Field Size Quantification from Multi-temporal Landsat Data

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.

    2015-12-01

    Field sizes are indicative of the degree of agricultural capital investment, mechanization and labor intensity. Information on the size of fields is needed to plan and understand these factors, and may help the allocation of agricultural resources. The Landsat satellites provide the longest global land observation record and their data have potential for monitoring field sizes. A recently published automated methodology to extract agricultural crop fields was refined and applied to 30 m weekly Landsat 5 and 7 time series of year 2010 in the range of all the conterminous United States (CONUS). For the first time, spatially explicit CONUS field size maps and derived information are presented. A total of 4.18 million fields were extracted with mean and median field sizes of 0.193 km2 and 0.278 km2, respectively. There were discernable patterns between field size and the majority crop type as defined by the United States Department of Agriculture (USDA) cropland data layer (CDL) classification. In general, larger field sizes occurred where a greater proportion of the land was dedicated to agriculture, predominantly in the U.S. Wheat and Corn belts, and in regions of irrigated agriculture. The CONUS field size histogram was skewed, and 50% of the extracted fields had sizes greater than or smaller than 0.361 km2, and there were four distinct peaks that corresponded closely to sizes equivalent to fields with 0.25 × 0.25 mile, 0.25 × 0.5 mile, 0.5 × 0.5 mile, and 0.5 × 1 mile side dimensions. The results of validation by comparison with independent field boundaries at 48 subsets selected across the 16 states with the greatest harvested cropland area are summarized. The presentation concludes with a discussion of the implications of this NASA funded research and challenges for field size extraction from global coverage long term satellite data.

  9. On the paleo-magnetospheres of Earth and Mars

    NASA Astrophysics Data System (ADS)

    Scherf, Manuel; Khodachenko, Maxim; Alexeev, Igor; Belenkaya, Elena; Blokhina, Marina; Johnstone, Colin; Tarduno, John; Lammer, Helmut; Tu, Lin; Guedel, Manuel

    2017-04-01

    The intrinsic magnetic field of a terrestrial planet is considered to be an important factor for the evolution of terrestrial atmospheres. This is in particular relevant for early stages of the solar system, in which the solar wind as well as the EUV flux from the young Sun were significantly stronger than at present-day. We therefore will present simulations of the paleo-magnetospheres of ancient Earth and Mars, which were performed for ˜4.1 billion years ago, i.e. the Earth's late Hadean eon and Mars' early Noachian. These simulations were performed with specifically adapted versions of the Paraboloid Magnetospheric Model (PMM) of the Skobeltsyn Institute of Nuclear Physics of the Moscow State University, which serves as ISO-standard for the Earth's magnetic field (see e.g. Alexeev et al., 2003). One of the input parameters into our model is the ancient solar wind pressure. This is derived from a newly developed solar/stellar wind evolution model, which is strongly dependent on the initial rotation rate of the early Sun (Johnstone et al., 2015). Another input parameter is the ancient magnetic dipole field. In case of Earth this is derived from measurements of the paleomagnetic field strength by Tarduno et al., 2015. These data from zircons are varying between 0.12 and 1.0 of today's magnetic field strength. For Mars the ancient magnetic field is derived from the remanent magnetization in the Martian crust as measured by the Mars Global Surveyor MAG/ER experiment. These data together with dynamo theory are indicating an ancient Martian dipole field strength in the range of 0.1 to 1.0 of the present-day terrestrial dipole field. For the Earth our simulations show that the paleo-magnetosphere during the late Hadean eon was significantly smaller than today, with a standoff-distance rs ranging from ˜3.4 to 8 Re, depending on the input parameters. These results also have implications for the early terrestrial atmosphere. Due to the significantly higher EUV flux, the exobase of a nitrogen dominated atmosphere would most probably have been extended above the magnetopause, leading to enhanced atmospheric erosion, whereas a CO2-dominated atmosphere would have prevented atmospheric loss in such a scenario. Our simulations also show that the Martian paleo-magnetosphere during the early Noachian must have been comparable in size to the terrestrial paleo-magnetosphere, hence a CO2-rich atmosphere should have been protected by the magnetic field from rapid atmospheric erosion until the cessation of the Martian dipole field ˜4.0 billion years ago. Finally, our results favor the idea that the young Sun must have been a slow to moderate rotator. The solar wind and EUV flux from a fast rotating Sun would have been so intense, that most probably the ancient atmospheres of Mars and Earth would not have survived. Acknowledgments. The authors acknowledge the support of the FWF NFN project "Pathways to Habitability: From Disks to Active Stars, Planets and Life", in particular its related sub-projects S11604-N16, S11606-N16 and S11607-N16. This presentation is supported by the Austrian Science Fund (FWF) and the US NSF (EAR1015269 to JAT).
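
    The standoff distances quoted above follow from pressure balance between the solar wind and the planetary dipole. A minimal sketch of that Chapman-Ferraro scaling is given below; the present-day inputs are standard textbook values, while the "young Sun" wind parameters and the paleo-dipole factor are placeholders chosen only to show how a denser, faster wind shrinks the magnetosphere, not the paper's inputs.

```python
import math

MU0, M_P = 4e-7 * math.pi, 1.67262192e-27      # vacuum permeability, proton mass (SI)

def standoff(b_eq, n_cm3, v_kms, f=2.0):
    """Chapman-Ferraro standoff distance in planetary radii.

    Pressure balance  rho*v^2 = (f*B_eq)^2 / (2*mu0) * (R_p/r)^6  gives
    r/R_p = [ f^2 * B_eq^2 / (2*mu0*rho*v^2) ]^(1/6);
    f ~ 2 accounts for the field compression by magnetopause currents.
    """
    p_dyn = n_cm3 * 1e6 * M_P * (v_kms * 1e3) ** 2
    return (f**2 * b_eq**2 / (2.0 * MU0 * p_dyn)) ** (1.0 / 6.0)

# Present-day Earth: B_eq ~ 3.1e-5 T, n ~ 7 cm^-3, v ~ 400 km/s  ->  roughly 10 Earth radii.
print(f"modern Earth      : {standoff(3.1e-5, 7, 400):.1f} R_E")

# Placeholder 'young Sun' wind (much denser and faster) with a weaker paleo-dipole
# (0.5 of today's moment): the standoff collapses toward the few-R_E range quoted above.
print(f"late Hadean (toy) : {standoff(0.5 * 3.1e-5, 300, 800):.1f} R_E")
```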

  10. High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors

    NASA Technical Reports Server (NTRS)

    NguyenKobayashi, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.

    2011-01-01

    Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable for a single design to be reusable and adaptable to instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, in which the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method are exploited. The result is a wide-kernel, multipass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored in sequential fashion into the FIFOs, and the outputs of each butterfly are sequentially written first into the even FIFO, then the odd FIFO. Because of the order in which the outputs are written into the FIFOs, the depth of the even FIFOs (768 entries each) is 1.5 times that of the odd FIFOs (512 entries each). The total memory needed for data storage, assuming that each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in internal ROM inside the FPGA for fast access time. The total memory size to store the twiddle factors is 589.9 Kbits. This FFT structure combines the benefits of high throughput from the parallel FFT kernels and low resource usage from the multi-pass FFT kernels with the desired adaptability. Space instrument missions that need onboard FFT capabilities, such as the proposed DESDynI, SWOT (Surface Water Ocean Topography), and Europa sounding radar missions, would greatly benefit from this technology with significant reductions in non-recurring cost and risk.
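
    The storage figures quoted above can be checked with a few lines of arithmetic; the assumption that each twiddle factor is also held as a single 36-bit word is ours, made only to show that it essentially reproduces the quoted ROM size.

```python
# Data FIFOs: 64 even FIFOs of depth 768 plus 64 odd FIFOs of depth 512,
# covering both the real and imaginary sample streams, at 36 bits per sample.
bits_per_sample = 36
data_bits = 64 * 768 * bits_per_sample + 64 * 512 * bits_per_sample
print(f"data storage : {data_bits} bits = {data_bits / 1e6:.2f} Mbits")   # ~2.95 Mbits

# Twiddle ROM: a 32K-point FFT needs N/2 = 16384 complex twiddle factors.
# Assuming each is stored as one 36-bit word gives ~590 Kbits, matching the quoted figure.
twiddle_bits = (32768 // 2) * bits_per_sample
print(f"twiddle ROM  : {twiddle_bits} bits = {twiddle_bits / 1e3:.1f} Kbits")
```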

  11. Weld geometry strength effect in 2219-T87 aluminum

    NASA Technical Reports Server (NTRS)

    Nunes, A. C., Jr.; Novak, H. L.; Mcilwain, M. C.

    1981-01-01

    A theory of the effect of geometry on the mechanical properties of a butt weld joint is worked out based upon the soft interlayer weld model. Tensile tests of 45 TIG butt welds and 6 EB beads-on-plate in 1/4-in. 2219-T87 aluminum plate made under a wide range of heat sink and power input conditions are analyzed using this theory. The analysis indicates that purely geometrical effects dominate in determining variations in weld joint strength with heat sink and power input. Variations in weld dimensions with cooling rate are significant as well as with power input. Weld size is suggested as a better indicator of the condition of a weld joint than energy input.

  12. Record of late Pleistocene glaciation and deglaciation in the southern Cascade Range. I. Petrological evidence from lacustrine sediment in Upper Klamath Lake, southern Oregon

    USGS Publications Warehouse

    Reynolds, R.L.; Rosenbaum, J.G.; Rapp, J.; Kerwin, M.W.; Bradbury, J.P.; Colman, S.; Adam, D.

    2004-01-01

    Petrological and textural properties of lacustrine sediments from Upper Klamath Lake, Oregon, reflect changing input volumes of glacial flour and thus reveal a detailed glacial history for the southern Cascade Range between about 37 and 15 ka. Magnetic properties vary as a result of mixing different amounts of the highly magnetic, glacially generated detritus with less magnetic, more weathered detritus derived from unglaciated parts of the large catchment. Evidence that the magnetic properties record glacial flour input is based mainly on the strong correlation between bulk sediment particle size and parameters that measure the magnetite content and magnetic mineral freshness. High magnetization corresponds to relatively fine particle size and lower magnetization to coarser particle size. This relation is not found in the Buck Lake core in a nearby, unglaciated catchment. Angular silt-sized volcanic rock fragments containing unaltered magnetite dominate the magnetic fraction in the late Pleistocene sediments but are absent in younger, low magnetization sediments. The finer grained, highly magnetic sediments contain high proportions of planktic diatoms indicative of cold, oligotrophic limnic conditions. Sediment with lower magnetite content contains populations of diatoms indicative of warmer, eutrophic limnic conditions. During the latter part of oxygen isotope stage 3 (about 37-25 ka), the magnetic properties record millennial-scale variations in glacial-flour content. The input of glacial flour was uniformly high during the Last Glacial Maximum, between about 21 and 16 ka. At about 16 ka, magnetite input, both absolute and relative to hematite, decreased abruptly, reflecting a rapid decline in glacially derived detritus. The decrease in magnetite transport into the lake preceded declines in pollen from both grass and sagebrush. A more gradual decrease in heavy mineral content over this interval records sediment starvation with the growth of marshes at the margins of the lake and dilution of detrital material by biogenic silica and other organic matter.

  13. The Non-Lemniscal Auditory Cortex in Ferrets: Convergence of Corticotectal Inputs in the Superior Colliculus

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; Bizley, Jennifer K.; King, Andrew J.

    2010-01-01

    Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers. PMID:20640247

  14. Non-Gaussian quantum states generation and robust quantum non-Gaussianity via squeezing field

    NASA Astrophysics Data System (ADS)

    Tang, Xu-Bing; Gao, Fang; Wang, Yao-Xiong; Kuang, Sen; Shuang, Feng

    2015-03-01

    Recent studies show that quantum non-Gaussian states or non-Gaussian operations can improve entanglement distillation, quantum swapping, teleportation, and cloning. In this work, employing a strategy of non-Gaussian operations (namely subtracting and adding a single photon), we propose a scheme to generate non-Gaussian quantum states named single-photon-added and -subtracted coherent (SPASC) superposition states by implementing Bell measurements, and then investigate the corresponding nonclassical features. By squeezing the input field, we demonstrate that the robustness of non-Gaussianity can be improved. A controllable phase space distribution offers the possibility of approximately generating displaced coherent superposition states (DCSS). The fidelity can reach up to F ≥ 0.98 and F ≥ 0.90 for amplitude sizes z = 1.53 and 2.36, respectively. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203061 and 61074052), the Outstanding Young Talent Foundation of Anhui Province, China (Grant No. 2012SQRL040), and the Natural Science Foundation of Anhui Province, China (Grant No. KJ2012Z035).

  15. Shimming Halbach magnets utilizing genetic algorithms to profit from material imperfections.

    PubMed

    Parker, Anna J; Zia, Wasif; Rehorn, Christian W G; Blümich, Bernhard

    2016-04-01

    In recent years, permanent magnet-based NMR spectrometers have resurfaced as low-cost portable alternatives to superconducting instruments. While the development of these devices as well as clever shimming methods have yielded impressive advancements, scaling the size of these magnets to miniature lengths remains a problem to be addressed. Here we present the results of a study of a discrete shimming scheme for NMR Mandhalas constructed from a set of individual magnet blocks. While our calculations predict a modest reduction in field deviation by a factor of 9.3 in the case of the shimmed ideal Mandhala, a factor of 28 is obtained in the case of the shimmed imperfect Mandhala. This indicates that imperfections of magnet blocks can lead to improved field homogeneity. We also present a new algorithm to improve the homogeneity of a permanent magnet assembly. Strategies for future magnet construction can improve the agreement between simulation and practical implementation by using data from real magnets in these assemblies as the input to such an algorithm to optimize the homogeneity of a given design. Published by Elsevier Inc.
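
    The discrete shimming idea described above can be illustrated with a small genetic-algorithm sketch. Everything below (the synthetic field model, the fitness, the (mu + lambda) search) is an illustrative assumption, not the authors' implementation: each candidate assigns one measured (imperfect) shim block to each slot, and the search looks for the assignment that minimizes the spread of the field sampled over a target region.

```python
# Minimal GA sketch (assumed model, not the paper's code): assign measured shim blocks
# to slots of a Halbach-like assembly so the sampled field is as homogeneous as possible.
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)
n_slots, n_samples = 12, 50

# field_map[b, k, s]: synthetic field contribution of block b (with its imperfection)
# placed in slot k, evaluated at sample point s of the target region.
field_map = 1.0 + 0.05 * rng.standard_normal((n_slots, n_slots, n_samples))

def fitness(order):
    """Negative standard deviation of the total field for a block-to-slot assignment."""
    total = sum(field_map[b, k, :] for k, b in enumerate(order))
    return -np.std(total)

def mutate(order):
    """Swap the blocks in two randomly chosen slots."""
    a, b = random.sample(range(n_slots), 2)
    new = list(order)
    new[a], new[b] = new[b], new[a]
    return tuple(new)

# Simple (mu + lambda) genetic search over block permutations.
population = [tuple(random.sample(range(n_slots), n_slots)) for _ in range(30)]
for generation in range(200):
    offspring = [mutate(random.choice(population)) for _ in range(30)]
    population = sorted(set(population + offspring), key=fitness, reverse=True)[:30]

best = population[0]
print("best arrangement:", best, "field std:", -fitness(best))
```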

  16. Novel fabrication method of microchannel plates

    NASA Astrophysics Data System (ADS)

    Yi, Whikun; Jeong, Taewon; Jin, Sunghwan; Yu, SeGi; Lee, Jeonghee; Kim, J. M.

    2000-11-01

    We have developed a novel microchannel plate (MCP) by introducing new materials and process technologies. The key features of our MCP are summarized as follows: (i) bulk alumina as a substrate, (ii) channel locations defined by a programmed-hole puncher, (iii) thin film deposition by electroless plating and/or a sol-gel process, and (iv) an easy fabrication process suitable for mass production and large-sized MCPs. The characteristics of the resulting MCP have been evaluated with a high input current source, such as a continuous electron beam from an electron gun and Spindt-type field emitters, to obtain information on electron multiplication. In the case of a 0.28 μA incident beam, the output current is enhanced ~170 times, which is equal to 1% of the total bias current of the MCP at a given bias voltage of 2600 V. When an MCP is inserted between the cathode and the anode of a field emission display panel, the brightness of the luminescent light increases 3-4 times through multiplication of the emitted electrons in the pore arrays of the MCP.

  17. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
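
    A toy sketch of the EBFSR idea summarized above is given below; the input ensemble forecaster, the hydrologic model, and the hydrologic uncertainty processor are stand-in placeholders (the real HUP is meta-Gaussian and far richer). The point it illustrates is structural: each meteorological input ensemble member is run once through the deterministic model, and the HUP then draws several predictand realizations per model run, so a large Bayesian ensemble is obtained from relatively few expensive model evaluations.

```python
# Toy sketch of an ensemble Bayesian forecast with randomization (EBFSR-like layout):
# few expensive deterministic model runs, many cheap HUP draws per run.
# All component models below are illustrative placeholders, not the published EBFS.
import numpy as np

rng = np.random.default_rng(42)
n_input_members = 20      # meteorological input ensemble size (IEF output)
draws_per_run = 50        # auxiliary randomization: HUP samples per model run
horizon = 24              # forecast lead times

def input_ensemble_forecaster():
    """Simulate input uncertainty: an ensemble of precipitation time series (mm/h)."""
    return rng.gamma(shape=2.0, scale=1.5, size=(n_input_members, horizon))

def hydrologic_model(precip):
    """Deterministic placeholder model: smoothed runoff response to precipitation."""
    kernel = np.exp(-np.arange(6) / 2.0)
    return np.convolve(precip, kernel, mode="full")[:horizon]

def hydrologic_uncertainty_processor(model_output, n_draws):
    """Placeholder HUP: Gaussian spread around the model output."""
    noise = rng.normal(0.0, 0.3 * (1.0 + model_output), size=(n_draws, horizon))
    return model_output + noise

members = []
for precip in input_ensemble_forecaster():
    output = hydrologic_model(precip)                      # one deterministic run
    members.append(hydrologic_uncertainty_processor(output, draws_per_run))

forecast = np.vstack(members)                              # full ensemble of predictands
print(forecast.shape)                                      # (20 * 50, 24)
print("90% interval at lead 12 h:", np.percentile(forecast[:, 12], [5, 95]))
```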

  18. SWAP-Assembler 2: Optimization of De Novo Genome Assembler at Large Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jintao; Seo, Sangmin; Balaji, Pavan

    2016-08-16

    In this paper, we analyze and optimize the most time-consuming steps of the SWAP-Assembler, a parallel genome assembler, so that it can scale to a large number of cores for huge genomes with sequencing data sizes ranging from terabytes to petabytes. According to the performance analysis results, the most time-consuming steps are input parallelization, k-mer graph construction, and graph simplification (edge merging). For input parallelization, the input data is divided into virtual fragments of nearly equal size, and the start and end positions of each fragment are automatically adjusted to fall at the beginning of a read. In k-mer graph construction, in order to improve the communication efficiency, the message size is kept constant between any two processes by proportionally increasing the number of nucleotides to the number of processes in the input parallelization step for each round. The memory usage is also decreased because only a small part of the input data is processed in each round. For graph simplification, the communication protocol reduces the number of communication loops from four to two and decreases the idle communication time. The optimized assembler is denoted SWAP-Assembler 2 (SWAP2). In our experiments using a 1000 Genomes Project dataset of 4 terabytes (the largest dataset ever used for assembling) on the supercomputer Mira, the results show that SWAP2 scales to 131,072 cores with an efficiency of 40%. We also compared our work with both the HipMer assembler and the SWAP-Assembler. On the Yanhuang dataset of 300 gigabytes, SWAP2 shows a 3X speedup and 4X better scalability compared with the HipMer assembler and is 45 times faster than the SWAP-Assembler. The SWAP2 software is available at https://sourceforge.net/projects/swapassembler.
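
    The input-parallelization step described above (splitting the sequence file into nearly equal virtual fragments whose boundaries are snapped to the start of a read) can be sketched as follows. The FASTQ-style boundary test and the file handling here are illustrative assumptions, not the SWAP2 source code.

```python
# Sketch: split a sequence file into P nearly equal virtual fragments whose start/end
# offsets are moved forward to the beginning of the next record, so no read is split
# between processes.  FASTQ-style records ('@' header line) are assumed.
import os

def next_record_start(f, offset):
    """Return the first byte offset >= offset that begins a record header line."""
    f.seek(offset)
    if offset == 0:
        return 0
    f.readline()                      # skip the (possibly partial) current line
    while True:
        pos = f.tell()
        line = f.readline()
        if not line:                  # end of file
            return pos
        if line.startswith(b"@"):     # naive header test; real FASTQ parsing needs more care
            return pos

def virtual_fragments(path, n_procs):
    """Yield (start, end) byte ranges of nearly equal size, aligned to record starts."""
    size = os.path.getsize(path)
    chunk = size // n_procs
    with open(path, "rb") as f:
        starts = [next_record_start(f, i * chunk) for i in range(n_procs)]
    starts.append(size)
    return [(s, e) for s, e in zip(starts, starts[1:]) if s < e]

# Example: each process/rank would then read only its own byte range of 'reads.fastq'.
# for rank, (start, end) in enumerate(virtual_fragments("reads.fastq", 8)):
#     print(rank, start, end)
```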

  19. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Peripheral input and its importance for central sensitization.

    PubMed

    Baron, Ralf; Hans, Guy; Dickenson, Anthony H

    2013-11-01

    Many pain states begin with damage to tissue and/or nerves in the periphery, leading to enhanced transmitter release within the spinal cord and central sensitization. Manifestations of this central sensitization are windup and long-term potentiation. Hyperexcitable spinal neurons show reduced thresholds, greater evoked responses, increased receptive field sizes, and ongoing stimulus-independent activity; these changes probably underlie the allodynia, hyperalgesia, and spontaneous pain seen in patients. Central sensitization is maintained by continuing input from the periphery, but also modulated by descending controls, both inhibitory and facilitatory, from the midbrain and brainstem. The projections of sensitized spinal neurons to the brain, in turn, alter the processing of painful messages by higher centers. Several mechanisms contribute to central sensitization. Repetitive activation of primary afferent C fibers leads to a synaptic strengthening of nociceptive transmission. It may also induce facilitation of non-nociceptive Aβ fibers and nociceptive Aδ fibers, giving rise to dynamic mechanical allodynia and mechanical hyperalgesia. In postherpetic neuralgia and complex regional pain syndrome, for example, these symptoms are maintained and modulated by peripheral nociceptive input. Diagnosing central sensitization can be particularly difficult. In addition to the medical history, quantitative sensory testing and functional magnetic resonance imaging may be useful, but diagnostic criteria that include both subjective and objective measures of central augmentation are needed. Mounting evidence indicates that treatment strategies that desensitize the peripheral and central nervous systems are required. These should generally involve a multimodal approach, so that therapies may target the peripheral drivers of central sensitization and/or the central consequences. © 2013 American Neurological Association.

  1. Modeling earthquake magnitudes from injection-induced seismicity on rough faults

    NASA Astrophysics Data System (ADS)

    Maurer, J.; Dunham, E. M.; Segall, P.

    2017-12-01

    It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with high shear-to-effective normal stress ratio may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015); however, these calculations were done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable, but strongly dependent on background shear/effective normal stress. In this study, we use a 2-D elasto-dynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as inputs to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.

  2. Essays on School Quality and Student Outcomes

    ERIC Educational Resources Information Center

    Crispin, Laura M.

    2012-01-01

    In my first chapter, I explore the relationship between school size and student achievement where, conditional on observable educational inputs, school size is a proxy for factors that are difficult to measure directly ( e.g., school climate and organization). Using data from the NELS:88, I estimate a series of value-added education production…

  3. Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19

    ERIC Educational Resources Information Center

    Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2008-01-01

    Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…

  4. Drawing a representative sample from the NCSS soil database: Building blocks for the national wind erosion network

    USDA-ARS?s Scientific Manuscript database

    Developing national wind erosion models for the continental United States requires a comprehensive spatial representation of continuous soil particle size distributions (PSD) for model input. While the current coverage of soil survey is nearly complete, the most detailed particle size classes have c...

  5. Mobile input device type, texting style and screen size influence upper extremity and trapezius muscle activity, and cervical posture while texting.

    PubMed

    Kietrys, David M; Gerg, Michael J; Dropkin, Jonathan; Gold, Judith E

    2015-09-01

    This study aimed to determine the effects of input device type, texting style, and screen size on upper extremity and trapezius muscle activity and cervical posture during a short texting task in college students. Users of a physical keypad produced greater thumb, finger flexor, and wrist extensor muscle activity than when texting with a touch screen device of similar dimensions. Texting on either device produced greater wrist extensor muscle activity when texting with 1 hand/thumb compared with both hands/thumbs. As touch screen size increased, more participants held the device on their lap, and chose to use both thumbs less. There was also a trend for greater finger flexor, wrist extensor, and trapezius muscle activity as touch screen size increased, and for greater cervical flexion, although mean differences for cervical flexion were small. Future research can help inform whether the ergonomic stressors observed during texting are associated with musculoskeletal disorder risk. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. Field and numerical study of wind and surface waves at short fetches

    NASA Astrophysics Data System (ADS)

    Baydakov, Georgy; Kuznetsova, Alexandra; Sergeev, Daniil; Papko, Vladislav; Kandaurov, Alexander; Vdovin, Maxim; Troitskaya, Yuliya

    2016-04-01

    Measurements were carried out in 2012-2015, from May to October, in the waters of the Gorky Reservoir, which belongs to the Volga Cascade. The experimental methods focus on the study of airflow in close proximity to the water surface. The sensors were positioned on an oceanographic Froude buoy, including five two-component ultrasonic WindSonic sensors by Gill Instruments at different levels (0.1, 0.85, 1.3, 2.27, and 5.26 m above the mean water surface level), one water and three air temperature sensors, and a three-channel wire wave gauge. One of the wind sensors (0.1 m) was located on a float tracking the waveform, for measuring the wind speed in close proximity to the water surface. Basic parameters of the atmospheric boundary layer (the friction velocity u*, the wind speed U10 and the drag coefficient CD) were calculated from the measured wind speed profiles. Parameters were obtained in the range of wind speeds of 1-12 m/s. For wind speeds stronger than 4 m/s, CD values were lower than those obtained before (see e.g. [1,2]) and those predicted by the bulk parameterization; however, for weak winds (less than 3 m/s), CD values were considerably higher than expected. A new parameterization of the surface drag coefficient was proposed on the basis of the obtained data. The suggested parameterization of the drag coefficient CD(U10) was implemented within the wind input source terms in WAVEWATCH III [3]. The results of the numerical experiments were compared with the results obtained in the field experiments on the Gorky Reservoir. The use of the new drag coefficient improves the agreement in significant wave heights HS [4]. At the same time, the predicted mean wave periods are overestimated using both built-in and adjusted source terms. We attribute this to the need to adjust the DIA nonlinearity model in WAVEWATCH III to the conditions of a middle-sized reservoir. Test experiments on this adjustment were carried out. The work was supported by the Russian Foundation for Basic Research (Grants No. 15-35-20953, 14-05-00367, 15-45-02580) and project ASIST of FP7. The field experiment is supported by the Russian Science Foundation (Agreement No. 15-17-20009); numerical simulations are partially supported by the Russian Science Foundation (Agreement No. 14-17-00667). References: 1. A.V. Babanin, V.K. Makin, Effects of wind trend and gustiness on the sea drag: Lake George study // Journal of Geophysical Research, 2008, 113, C02015, doi:10.1029/2007JC004233. 2. S.S. Atakturk, K.B. Katsaros, Wind Stress and Surface Waves Observed on Lake Washington // Journal of Physical Oceanography, 1999, 29, pp. 633-650. 3. Kuznetsova A.M., Baydakov G.A., Papko V.V., Kandaurov A.A., Vdovin M.I., Sergeev D.A., Troitskaya Yu.I., Adjusting of wind input source term in WAVEWATCH III model for the middle-sized water body on the basis of the field experiment // Hindawi Publishing Corporation, Advances in Meteorology, 2016, Vol. 1, article ID 574602. 4. G.A. Baydakov, A.M. Kuznetsova, D.A. Sergeev, V.V. Papko, A.A. Kandaurov, M.I. Vdovin, and Yu.I. Troitskaya, Field study and numerical modeling of wind and surface waves at the middle-sized water body // Geophysical Research Abstracts, Vol. 17, EGU2015-9427, Vienna, Austria, 2015.
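
    As a reading aid for the profile-based retrieval mentioned above: fitting a logarithmic wind profile to the multi-level measurements gives the friction velocity u* and roughness length z0, from which U10 and CD follow. The sketch below uses the standard neutral log-law relations with made-up numbers; it is not the authors' processing code, nor their new CD(U10) parameterization.

```python
# Sketch: estimate u*, z0, U10 and CD from a measured wind-speed profile assuming
# neutral stratification and the log law U(z) = (u*/kappa) * ln(z/z0).
import numpy as np

kappa = 0.4
z = np.array([0.1, 0.85, 1.3, 2.27, 5.26])          # sensor heights (m), as on the buoy
U = np.array([3.1, 4.2, 4.5, 4.9, 5.4])             # hypothetical mean wind speeds (m/s)

# Linear fit U = a*ln(z) + b  =>  u* = kappa*a,  z0 = exp(-b/a)
a, b = np.polyfit(np.log(z), U, 1)
u_star = kappa * a
z0 = np.exp(-b / a)

U10 = (u_star / kappa) * np.log(10.0 / z0)          # extrapolate to 10 m height
CD = (u_star / U10) ** 2                            # neutral drag coefficient

print(f"u* = {u_star:.3f} m/s, z0 = {z0:.2e} m, U10 = {U10:.2f} m/s, CD = {CD:.4f}")
```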

  7. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

    PubMed Central

    Gerstner, Wulfram

    2017-01-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957

  8. A miniature pulse tube cryocooler used in a superspectral imager

    NASA Astrophysics Data System (ADS)

    Jiang, Zhenhua; Wu, Yinong

    2017-05-01

    In this paper, we describe a high-frequency pulse tube cryocooler used in a superspectral imager to be launched in 2020. The superspectral imager is a field-dividing optical imaging system and uses 14 sets of integrated IR detector cryocooler dewar assemblies. To meet the requirements of low heat loss and small size, each set is highly integrated by directly mounting the IR detector's sapphire substrate on the pulse tube's cold tip and welding the dewar's housing to the flange of the cold finger. Driven by a pair of moving magnet linear motors, the dual-opposed piston compressor of the cryocooler runs at 120 Hz. With the regenerator filled with customized stainless steel screens, the cryocooler reaches 8.1% of Carnot efficiency at a cooling power of 1 W at 80 K with 34 W AC input power.

  9. Aeroacoustic interaction of a distributed vortex with a lifting Joukowski airfoil

    NASA Technical Reports Server (NTRS)

    Hardin, J. C.; Lamkin, S. L.

    1984-01-01

    A first principles computational aeroacoustics calculation of the flow and noise fields produced by the interaction of a distributed vortex with a lifting Joukowski airfoil is accomplished at the Reynolds number of 200. The case considered is that where the circulations of the vortex and the airfoil are of opposite sign, corresponding to blade vortex interaction on the retreating side of a single helicopter rotor. The results show that the flow is unsteady, even in the absence of the incoming vortex, resulting in trailing edge noise generation. After the vortex is input, it initially experiences a quite rapid apparent diffusion rate produced by stretching in the airfoil velocity gradients. Consideration of the effects of finite vortex size and viscosity causes the noise radiation during the encounter to be much less impulsive than predicted by previous analyses.

  10. Space shuttle SRM plume expansion sensitivity analysis. [flow characteristics of exhaust gases from solid propellant rocket engines

    NASA Technical Reports Server (NTRS)

    Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.

    1975-01-01

    The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.

  11. Solid state electro-optic color filter and iris

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A pair of solid state electro-optic filters (SSEF) in a binocular holder were designed and fabricated for evaluation of field sequential stereo TV applications. The electronic circuitry for use with the stereo goggles was designed and fabricated, requiring only an external video input. A polarizing screen suitable for attachment to various size TV monitors for use in conjunction with the stereo goggles was designed and fabricated. An improved engineering model 2 filter was fabricated using the bonded holder technique developed previously and integrated to a GCTA color TV camera. An engineering model color filter was fabricated and assembled using PLZT control elements. In addition, a ruggedized holder assembly was designed, fabricated and tested. This assembly provides electrical contacts, high voltage protection, and support for the fragile PLZT disk, and also permits mounting and optical alignment of the associated polarizers.

  12. Dispersal scaling from the world's rivers

    USGS Publications Warehouse

    Warrick, J.A.; Fong, D.A.

    2004-01-01

    Although rivers provide important biogeochemical inputs to oceans, there are currently no descriptive or predictive relationships of the spatial scales of these river influences. Our combined satellite, laboratory, field and modeling results show that the coastal dispersal areas of small, mountainous rivers exhibit remarkable self-similar scaling relationships over many orders of magnitude. River plume areas scale with source drainage area to a power significantly less than one (average = 0.65), and this power relationship decreases significantly with distance offshore of the river mouth. Observations of plumes from large rivers reveal that this scaling continues over six orders of magnitude of river drainage basin areas. This suggests that the cumulative area of coastal influence for many of the smallest rivers of the world is greater than that of single rivers of equal watershed size. Copyright 2004 by the American Geophysical Union.
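
    The self-similar scaling reported above (plume area varying with source drainage area to a power averaging about 0.65) is the kind of relationship one recovers from a log-log regression. The sketch below fits such a power law to synthetic data and is purely illustrative; the numbers are not the paper's data.

```python
# Sketch: fit a power law  plume_area = c * drainage_area**p  by least squares in
# log-log space.  Synthetic data; the paper reports an average exponent near 0.65.
import numpy as np

rng = np.random.default_rng(1)
drainage_area = np.logspace(1, 7, 60)                       # km^2, spanning 6 decades
true_p, true_c = 0.65, 2.0
plume_area = true_c * drainage_area**true_p * rng.lognormal(0.0, 0.3, 60)

p, log_c = np.polyfit(np.log10(drainage_area), np.log10(plume_area), 1)
print(f"fitted exponent p = {p:.2f}, prefactor c = {10**log_c:.2f}")
```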

  13. A Modeling Framework for Predicting the Size of Sediments Produced on Hillslopes and Supplied to Channels

    NASA Astrophysics Data System (ADS)

    Sklar, L. S.; Mahmoudi, M.

    2016-12-01

    Landscape evolution models rarely represent sediment size explicitly, despite the importance of sediment size in regulating rates of bedload sediment transport, river incision into bedrock, and many other processes in channels and on hillslopes. A key limitation has been the lack of a general model for predicting the size of sediments produced on hillslopes and supplied to channels. Here we present a framework for such a model, as a first step toward building a `geomorphic transport law' that balances mechanistic realism with computational simplicity and is widely applicable across diverse landscapes. The goal is to take as inputs landscape-scale boundary conditions such as lithology, climate and tectonics, and predict the spatial variation in the size distribution of sediments supplied to channels across catchments. The model framework has two components. The first predicts the initial size distribution of particles produced by erosion of bedrock underlying hillslopes, while the second accounts for the effects of physical and chemical weathering during transport down slopes and delivery to channels. The initial size distribution can be related to the spacing and orientation of fractures within bedrock, which depend on the stresses and deformation experienced during exhumation and on rock resistance to fracture propagation. Other controls on initial size include the sizes of mineral grains in crystalline rocks, the sizes of cemented particles in clastic sedimentary rocks, and the potential for characteristic size distributions produced by tree throw, frost cracking, and other erosional processes. To model how weathering processes transform the initial size distribution we consider the effects of erosion rate and the thickness of soil and weathered bedrock on hillslope residence time. Residence time determines the extent of size reduction, for given values of model terms that represent the potential for chemical and physical weathering. Chemical weathering potential is parameterized in terms of mean annual precipitation and temperature, and the fraction of soluble minerals. Physical weathering potential can be parameterized in terms of topographic attributes, including slope, curvature and aspect. Finally, we compare model predictions with field data from Inyo Creek in the Sierra Nevada Mtns, USA.

  14. Field Research Facility Data Integration Framework Data Management Plan: Survey Lines Dataset

    DTIC Science & Technology

    2016-08-01

    CHL and its District partners. The beach morphology surveys on which this report focuses provide quantitative measures of the dynamic nature of... topography and volume change. Data description: the morphology surveys are conducted over a series of 26 shore-perpendicular profile lines spaced 50... Table 1 (FRF survey lines dataset input data and products) lists the input data, such as ASCII LARC survey text files, and the corresponding FDIF products.

  15. SU-E-T-627: Precision Modelling of the Leaf-Bank Rotation in Elekta’s Agility MLC: Is It Necessary?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vujicic, M; Belec, J; Heath, E

    Purpose: To demonstrate the method used to determine the leaf bank rotation angle (LBROT) as a parameter for modeling the Elekta Agility multi-leaf collimator (MLC) for Monte Carlo simulations and to evaluate the clinical impact of LBROT. Methods: A detailed model of an Elekta Infinity linac including an Agility MLC was built using the EGSnrc/BEAMnrc Monte Carlo code. The Agility 160-leaf MLC is modelled using the MLCE component module, which allows for leaf bank rotation using the parameter LBROT. A precise value of LBROT is obtained by comparing measured and simulated profiles of a specific field, which has leaves arranged in a repeated pattern such that one leaf is open and the adjacent one is closed. Profile measurements from an Agility linac are taken with Gafchromic film, and an ion chamber is used to set the absolute dose. The measurements are compared to Monte Carlo (MC) simulations and the LBROT is adjusted until a match is found. The clinical impact of LBROT is evaluated by observing how an MC dose calculation changes with LBROT. A clinical Stereotactic Body Radiation Treatment (SBRT) plan is calculated using BEAMnrc/DOSXYZnrc simulations with different input values for LBROT. Results: Using the method outlined above, the LBROT is determined to be 9±1 mrad. Differences as high as 4% are observed in a clinical SBRT plan between the extreme case (LBROT not modeled) and the nominal case. Conclusion: In small-field radiation therapy treatment planning, it is important to properly account for LBROT as an input parameter for MC dose calculations with the Agility MLC. More work is ongoing to elucidate the observed differences by determining the contributions from transmission dose, change in field size, and source occlusion, which are all dependent on LBROT. This work was supported by OCAIRO (Ontario Consortium of Adaptive Interventions in Radiation Oncology), funded by the Ontario Research Fund.

  16. A new scaling for divertor detachment

    NASA Astrophysics Data System (ADS)

    Goldston, R. J.; Reinke, M. L.; Schwartz, J. A.

    2017-05-01

    The ITER design, and future reactor designs, depend on divertor ‘detachment,’ whether partial, pronounced or complete, to limit heat flux to plasma-facing components and to limit surface erosion due to sputtering. It would be valuable to have a measure of the difficulty of achieving detachment as a function of machine parameters, such as input power, magnetic field, major radius, etc. Frequently the parallel heat flux, estimated typically as proportional to Psep/R or PsepB/R, is used as a proxy for this difficulty. Here we argue that impurity cooling is dependent on the upstream density, which itself must be limited by a Greenwald-like scaling. Taking this into account self-consistently, we find the impurity fraction required for detachment scales dominantly as power divided by poloidal magnetic field. The absence of any explicit scaling with machine size is concerning, as Psep surely must increase greatly for an economic fusion system, while increases in the poloidal field strength are limited by coil technology and plasma physics. This result should be challenged by comparison with 2D divertor codes and with measurements on existing experiments. Nonetheless, it suggests that higher magnetic field, stronger shaping, double-null operation, ‘advanced’ divertor configurations, as well as alternate means to handle heat flux such as metallic liquid and/or vapor targets merit greater attention.

  17. Cortical connective field estimates from resting state fMRI activity.

    PubMed

    Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V; Dumoulin, Serge O; Renken, Remco; Curčić-Blake, Branislava; Cornelissen, Frans W

    2014-01-01

    One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting state fMRI data show visuotopic organization. Therefore, we conclude that, despite some variability in CF estimates between RS scans, neural properties such as CF maps and CF size can be derived from resting state data.

  18. A new scaling for divertor detachment

    DOE PAGES

    Goldston, R. J.; Reinke, M. L.; Schwartz, J. A.

    2017-03-29

    The ITER design, and future reactor designs, depend on divertor 'detachment,' whether partial, pronounced or complete, to limit heat flux to plasma-facing components and to limit surface erosion due to sputtering. It would be valuable to have a measure of the difficulty of achieving detachment as a function of machine parameters, such as input power, magnetic field, major radius, etc. Frequently the parallel heat flux, estimated typically as proportional to Psep/R or PsepB/R, is used as a proxy for this difficulty. Here we argue that impurity cooling is dependent on the upstream density, which itself must be limited by a Greenwald-like scaling. Taking this into account self-consistently, we find the impurity fraction required for detachment scales dominantly as power divided by poloidal magnetic field. The absence of any explicit scaling with machine size is concerning, as Psep surely must increase greatly for an economic fusion system, while increases in the poloidal field strength are limited by coil technology and plasma physics. This result should be challenged by comparison with 2D divertor codes and with measurements on existing experiments. Nonetheless, it suggests that higher magnetic field, stronger shaping, double-null operation, 'advanced' divertor configurations, as well as alternate means to handle heat flux such as metallic liquid and/or vapor targets merit greater attention.

  19. Spatial eigenmodes and synchronous oscillation: co-incidence detection in simulated cerebral cortex.

    PubMed

    Chapman, Clare L; Wright, James J; Bourke, Paul D

    2002-07-01

    Zero-lag synchronisation arises between points on the cerebral cortex receiving concurrent independent inputs; an observation generally ascribed to nonlinear mechanisms. Using simulations of cerebral cortex and Principal Component Analysis (PCA) we show patterns of zero-lag synchronisation (associated with empirically realistic spectral content) can arise from both linear and nonlinear mechanisms. For low levels of activation, we show the synchronous field is described by the eigenmodes of the resultant damped wave activity. The first and second spatial eigenmodes (which capture most of the signal variance) arise from the even and odd components of the independent input signals. The pattern of zero-lag synchronisation can be accounted for by the relative dominance of the first mode over the second, in the near-field of the inputs. The simulated cortical surface can act as a few millisecond response coincidence detector for concurrent, but uncorrelated, inputs. As cortical activation levels are increased, local damped oscillations in the gamma band undergo a transition to highly nonlinear undamped activity with 40 Hz dominant frequency. This is associated with "locking" between active sites and spatially segregated phase patterns. The damped wave synchronisation and the locked nonlinear oscillations may combine to permit fast representation of multiple patterns of activity within the same field of neurons.

  20. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.

    PubMed

    Sohn, Bong-Soo

    2017-03-11

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
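
    The pipeline described in this abstract (compress the depth range into a base map, build a detail map from image intensities, blend, then blur selectively for a depth-of-field effect) can be roughed out as below. The specific compression, detail, and blur operators are assumptions chosen for illustration, not the authors' algorithm.

```python
# Rough sketch of a bas-relief depth pipeline: base map from a compressed depth map,
# detail map from image intensity, blend, then depth-of-field style selective blur.
# The particular operators and weights here are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def bas_relief_depth(depth, image, relief_range=0.1, detail_weight=0.2,
                     focus_depth=0.5, blur_sigma=3.0):
    """depth, image: 2-D float arrays in [0, 1]. Returns the final relief height map."""
    # Base map: compress the depth range to a shallow relief.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
    base = relief_range * d

    # Detail map: a high-pass of the image carries fine-scale structure.
    detail = image - gaussian_filter(image, sigma=2.0)
    blended = base + detail_weight * relief_range * detail

    # Depth-of-field effect: blur more where depth is far from the focal plane.
    blurred = gaussian_filter(blended, sigma=blur_sigma)
    focus = np.exp(-((d - focus_depth) ** 2) / 0.02)   # 1 in focus, -> 0 out of focus
    return focus * blended + (1.0 - focus) * blurred

# Example with synthetic inputs; a real use would pass the smartphone-derived maps.
rng = np.random.default_rng(0)
depth = np.linspace(0, 1, 128)[None, :] * np.ones((128, 128))
image = rng.random((128, 128))
height_map = bas_relief_depth(depth, image)             # would then be meshed
print(height_map.shape, height_map.min(), height_map.max())
```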

  1. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones

    PubMed Central

    Sohn, Bong-Soo

    2017-01-01

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing. PMID:28287487

  2. Shrinking microbubbles with microfluidics: mathematical modelling to control microbubble sizes.

    PubMed

    Salari, A; Gnyawali, V; Griffiths, I M; Karshafian, R; Kolios, M C; Tsai, S S H

    2017-11-29

    Microbubbles have applications in industry and life-sciences. In medicine, small encapsulated bubbles (<10 μm) are desirable because of their utility in drug/oxygen delivery, sonoporation, and ultrasound diagnostics. While there are various techniques for generating microbubbles, microfluidic methods are distinguished due to their precise control and ease-of-fabrication. Nevertheless, sub-10 μm diameter bubble generation using microfluidics remains challenging, and typically requires expensive equipment and cumbersome setups. Recently, our group reported a microfluidic platform that shrinks microbubbles to sub-10 μm diameters. The microfluidic platform utilizes a simple microbubble-generating flow-focusing geometry, integrated with a vacuum shrinkage system, to achieve microbubble sizes that are desirable in medicine, and pave the way to eventual clinical uptake of microfluidically generated microbubbles. A theoretical framework is now needed to relate the size of the microbubbles produced and the system's input parameters. In this manuscript, we characterize microbubbles made with various lipid concentrations flowing in solutions that have different interfacial tensions, and monitor the changes in bubble size along the microfluidic channel under various vacuum pressures. We use the physics governing the shrinkage mechanism to develop a mathematical model that predicts the resulting bubble sizes and elucidates the dominant parameters controlling bubble sizes. The model shows a good agreement with the experimental data, predicting the resulting microbubble sizes under different experimental input conditions. We anticipate that the model will find utility in enabling users of the microfluidic platform to engineer bubbles of specific sizes.

  3. Positioning actuators in efficient locations for rendering the desired sound field using inverse approach

    NASA Astrophysics Data System (ADS)

    Cho, Wan-Ho; Ih, Jeong-Guon; Toi, Takeshi

    2015-12-01

    For rendering the desired characteristics of a sound field, proper conditioning of the acoustic actuators in an array is required, but the source condition depends strongly on actuator position. Actuators located at positions that are inefficient for control would consume too much input power or become too sensitive to disturbing noise. Such actuators can be considered redundant and should be removed, as long as their elimination does not significantly degrade the overall control performance. It is known that the inverse approach based on the acoustical holography concept, employing the transfer matrix between sources and field points as its core element, is useful for rendering the desired sound field. By examining the information contained in the transfer matrix between actuators and field points, the linear independence of an actuator from the others in the array can be evaluated. To this end, the square of the right singular vector, which represents the radiation contribution from the source, can be used as an indicator. An inefficient position for fulfilling the desired sound field is identified as the one having the smallest indicator value among all candidate actuator positions. The elimination proceeds one by one, or group by group, until the remaining number of actuators meets the preset number. Control examples of exterior and interior spaces are taken for validation. The results reveal that the present method for choosing the least dependent actuators, for a given number of actuators and field condition, is quite effective in realizing the desired sound field under noisy input conditions and in minimizing the required input power.
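
    The elimination procedure sketched in this abstract can be mocked up with a small SVD-based loop: build the transfer matrix between candidate actuator positions and field points, score each actuator by its squared right-singular-vector entries (weighted here by the singular values, an assumed detail), and repeatedly drop the least contributing position. The free-field monopole geometry below is a toy stand-in, not the authors' setup.

```python
# Toy sketch: prune redundant actuators from a transfer matrix G (field points x
# actuators) using right-singular-vector contributions, dropping the weakest one by one.
# Geometry and the exact singular-value weighting are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
k = 2 * np.pi * 500.0 / 343.0                     # wavenumber at 500 Hz in air
actuators = rng.uniform(0, 1, (16, 3))            # candidate source positions (m)
field_pts = rng.uniform(2, 3, (40, 3))            # control field points (m)

r = np.linalg.norm(field_pts[:, None, :] - actuators[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)         # free-field monopole transfer matrix

def contribution(G):
    """Per-actuator indicator: singular-value-weighted squared right singular vectors."""
    _, s, Vh = np.linalg.svd(G, full_matrices=False)
    return ((s[:, None] ** 2) * np.abs(Vh) ** 2).sum(axis=0)

keep = list(range(G.shape[1]))
while len(keep) > 8:                              # prune down to a preset array size
    idx = contribution(G[:, keep])
    keep.pop(int(np.argmin(idx)))                 # drop the least contributing actuator

print("retained actuator indices:", keep)
```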

  4. Fabrication of nanoparticles and nanostructures using ultrafast laser ablation of silver with Bessel beams

    NASA Astrophysics Data System (ADS)

    Krishna Podagatlapalli, G.; Hamad, Syed; Ahamad Mohiddon, Md; Venugopal Rao, S.

    2015-03-01

    Ablation of silver targets immersed in double distilled water (DDW)/acetone was performed with first order, non-diffracting Bessel beams generated by focusing ultrashort Gaussian pulses (~2 ps and ~40 fs) through an Axicon. The fabricated Ag dispersions were characterized by UV-visible absorption spectroscopy and transmission electron microscopy, and the nanostructured Ag targets were characterized by field emission scanning electron microscopy. Ag colloids prepared with ~2 ps laser pulses at various input pulse energies of ~400, ~600, ~800 and ~1000 µJ demonstrated similar localized surface plasmon resonance (LSPR) peaks appearing near 407 nm. Analogous behavior was observed for Ag colloids prepared in acetone and ablated with ~40 fs pulses, wherein the LSPR peak was observed near 412 nm for input energies of ~600, ~800 and ~1000 µJ. Observed parallels in LSPR peaks, average NP sizes, and plasmon bandwidths are tentatively explained using cavitation bubble dynamics and simultaneous generation/fragmentation of NPs under the influence of the Bessel beam. Fabricated Ag nanostructures in both cases demonstrated strong enhancement factors (>10^6) in surface enhanced Raman scattering studies of the explosive molecule CL-20 (2,4,6,8,10,12-Hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane) at 5 μM concentration.

  5. Optimization of MR fluid Yield stress using Taguchi Method and Response Surface Methodology Techniques

    NASA Astrophysics Data System (ADS)

    Mangal, S. K.; Sharma, Vivek

    2018-02-01

    Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, optimization of the MR fluid constituents is carried out with the on-state yield stress as the response parameter. For this, 18 samples of MR fluid are prepared using an L-18 orthogonal array. These samples are experimentally tested on a developed and fabricated electromagnet setup. It has been found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used in it. The optimal combination of input parameters for the fluid is found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5%, and tetramethylammonium hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gives a numerically predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response matched the numerically obtained value quite well (less than 1% error).

  6. A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.

    PubMed

    Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi

    2016-02-01

    In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling a wireless operation and closed-loop system. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method named as exponential component-polynomial component (EC-PC) algorithm. This algorithm features a reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are on-chip configured automatically based on input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit card sized prototype board is developed to provide power and data management through a USB port.

  7. An adaptive semi-Lagrangian advection model for transport of volcanic emissions in the atmosphere

    NASA Astrophysics Data System (ADS)

    Gerwing, Elena; Hort, Matthias; Behrens, Jörn; Langmann, Bärbel

    2018-06-01

    The dispersion of volcanic emissions in the Earth atmosphere is of interest for climate research, air traffic control and human wellbeing. Current volcanic emission dispersion models rely on fixed-grid structures that often are not able to resolve the fine filamented structure of volcanic emissions being transported in the atmosphere. Here we extend an existing adaptive semi-Lagrangian advection model for volcanic emissions including the sedimentation of volcanic ash. The advection of volcanic emissions is driven by a precalculated wind field. For evaluation of the model, the explosive eruption of Mount Pinatubo in June 1991 is chosen, which was one of the largest eruptions in the 20th century. We compare our simulations of the climactic eruption on 15 June 1991 to satellite data of the Pinatubo ash cloud and evaluate different sets of input parameters. We could reproduce the general advection of the Pinatubo ash cloud and, owing to the adaptive mesh, simulations could be performed at a high local resolution while minimizing computational cost. Differences to the observed ash cloud are attributed to uncertainties in the input parameters and the course of Typhoon Yunya, which is probably not completely resolved in the wind data used to drive the model. The best results were achieved for simulations with multiple ash particle sizes.

  8. Population density equations for stochastic processes with memory kernels

    NASA Astrophysics Data System (ADS)

    Lai, Yi Ming; de Kamps, Marc

    2017-06-01

    We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory describing a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses for both models accurately, for both excitatory and inhibitory input, under the assumption that all inputs are generated by one renewal process.

  9. Influence of Environmental Variables on the Distribution of Macrobenthos in the Han River Estuary, Korea

    NASA Astrophysics Data System (ADS)

    Yu, Ok Hwan; Lee, Hyung-Gon; Lee, Jae-Hac

    2012-12-01

    We compared environmental effects on the macrobenthic community of the Han River Estuary in summer, when freshwater input from the Han River increased, and in spring, when freshwater input decreased. Field samples were taken from the upper region of the Shingok reservoir to the southern area of Ganghwado at 18 sampling sites after the rainy (August 2006) and dry (March 2007) seasons. Macrobenthic fauna were collected using a Van Veen Grab (0.025 m² and 0.1 m²) and environmental factors were measured simultaneously. Dominant species of macrobenthic fauna and the macrobenthic community were divided into two areas, the area of the Han River with no salinity (< 0.1 psu) and the southern part of Ganghwado with salinity (> 20 psu). The dominant species Byblis japonicus appeared at Junruri in the dry season. The distributions of two polychaetes, Hediste japonica and Nephtys caeca, were divided into the lower and upper areas of the Singok submerged weir. BIO-ENV (the matching of biotic to environmental patterns) analysis revealed that salinity was the most important factor affecting macrobenthic communities in the Han River Estuary, with other factors such as sediment grain size, bottom dissolved oxygen, and total organic carbon of sediment being secondary.

  10. Short-term plasticity as a neural mechanism supporting memory and attentional functions.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Andermann, Mark L; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2011-11-08

    Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, we present, based on an integrative review of the literature, a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory and surround-inhibitory modulations, constitutes a generic processing principle that supports sensory memory, short-term memory, involuntary attention, selective attention, and perceptual learning. In our model, the size and complexity of receptive fields/level of abstraction of neural representations, as well as the length of temporal receptive windows, increases as one steps up the cortical hierarchy. Consequently, the type of input (bottom-up vs. top down) and the level of cortical hierarchy that the inputs target, determine whether short-term plasticity supports purely sensory vs. semantic short-term memory or attentional functions. Furthermore, we suggest that rather than discrete memory systems, there are continuums of memory representations from short-lived sensory ones to more abstract longer-duration representations, such as those tapped by behavioral studies of short-term memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. ProMC: Input-output data format for HEP applications using varint encoding

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; May, E.; Strand, K.; Van Gemmeren, P.

    2014-10-01

    A new data format for Monte Carlo (MC) events, or any structural data, including experimental data, is discussed. The format is designed to store data in a compact binary form using variable-size integer encoding as implemented in Google's Protocol Buffers package. This approach is implemented in the ProMC library, which produces smaller file sizes for MC records compared to the existing input-output libraries used in high-energy physics (HEP). Other important features of the proposed format are a separation of abstract data layouts from concrete programming implementations, self-description and random access. Data stored in ProMC files can be written, read and manipulated in a number of programming languages, such as C++, Java, Fortran and Python.
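
    The compactness of the format rests on variable-size integer ("varint") encoding, in which small values occupy fewer bytes. The sketch below shows base-128 varint encoding of the kind used by Protocol Buffers; it illustrates the encoding principle only and is not code taken from the ProMC library.

        def encode_varint(value: int) -> bytes:
            # Base-128 varint: 7 payload bits per byte, most-significant bit set
            # on every byte except the last one.
            if value < 0:
                raise ValueError("varints encode non-negative integers")
            out = bytearray()
            while True:
                byte = value & 0x7F
                value >>= 7
                if value:
                    out.append(byte | 0x80)   # continuation bit: more bytes follow
                else:
                    out.append(byte)
                    return bytes(out)

        def decode_varint(data: bytes) -> int:
            result, shift = 0, 0
            for b in data:
                result |= (b & 0x7F) << shift
                if not b & 0x80:
                    break
                shift += 7
            return result

        # Small values take a single byte; large ones grow only as needed.
        for v in (1, 127, 128, 300, 2**31):
            enc = encode_varint(v)
            assert decode_varint(enc) == v
            print(v, enc.hex(), len(enc), "byte(s)")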

  12. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
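
    For reference, the update on which the approximated algorithm is built can be written compactly. The sketch below is the standard (textbook) affine projection update applied to a toy system-identification problem, under assumed step size and regularisation values; it is not the proposed Gauss-Seidel approximation, nor does it include the prediction-factor learning-rate control described in the abstract.

        import numpy as np

        def affine_projection_step(w, X, d, mu=0.5, delta=1e-6):
            # w : (L,)  current adaptive filter coefficients
            # X : (L,K) columns are the K most recent input regressor vectors
            # d : (K,)  corresponding desired samples
            e = d - X.T @ w                                   # a-priori error vector
            gain = X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
            return w + mu * gain, e

        # Toy identification of an unknown 8-tap response from white-noise input.
        rng = np.random.default_rng(1)
        L, K, N = 8, 4, 2000
        h_true = rng.standard_normal(L)
        x = rng.standard_normal(N)
        w = np.zeros(L)
        for n in range(L + K, N):
            X = np.column_stack([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
            d = np.array([x[n - k - L + 1:n - k + 1][::-1] @ h_true for k in range(K)])
            w, _ = affine_projection_step(w, X, d)
        print("coefficient error:", np.linalg.norm(w - h_true))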

  13. Prediction model of sinoatrial node field potential using high order partial least squares.

    PubMed

    Feng, Yu; Cao, Hui; Zhang, Yanbin

    2015-01-01

    High order partial least squares (HOPLS) is a novel data processing method. It is well suited to building prediction models with tensor inputs and outputs. The objective of this study is to build a prediction model of the relationship between sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that, when predicting two-dimensional variables, HOPLS had the same predictive ability as, and a lower dispersion degree than, partial least squares (PLS).

  14. Quantum dot-based local field imaging reveals plasmon-based interferometric logic in silver nanowire networks.

    PubMed

    Wei, Hong; Li, Zhipeng; Tian, Xiaorui; Wang, Zhuoxian; Cong, Fengzi; Liu, Ning; Zhang, Shunping; Nordlander, Peter; Halas, Naomi J; Xu, Hongxing

    2011-02-09

    We show that the local electric field distribution of propagating plasmons along silver nanowires can be imaged by coating the nanowires with a layer of quantum dots, held off the surface of the nanowire by a nanoscale dielectric spacer layer. In simple networks of silver nanowires with two optical inputs, control of the optical polarization and phase of the input fields directs the guided waves to a specific nanowire output. The QD-luminescent images of these structures reveal that a complete family of phase-dependent, interferometric logic functions can be performed on these simple networks. These results show the potential for plasmonic waveguides to support compact interferometric logic operations.

  15. Applied-field MPD thruster geometry effects

    NASA Technical Reports Server (NTRS)

    Myers, Roger M.

    1991-01-01

    Eight MPD thruster configurations were used to study the effects of applied field strength, propellant, and facility pressure on thruster performance. Vacuum facility background pressures higher than approx. 0.12 Pa were found to greatly influence thruster performance and electrode power deposition. Thrust efficiency and specific impulse increased monotonically with increasing applied field strength. Both cathode and anode radii fundamentally influenced the efficiency-specific impulse relationship, while their lengths influenced only the magnitude of the applied magnetic field required to reach a given performance level. At a given specific impulse, large electrode radii result in lower efficiencies for the operating conditions studied. For all test conditions, anode power deposition was the largest efficiency loss, and represented between 50 and 80 pct. of the input power. The fraction of the input power deposited into the anode decreased with increasing applied field and anode radii. The highest performance measured, 20 pct. efficiency at 3700 seconds specific impulse, was obtained using hydrogen propellant.

  16. Coherent perfect absorption in a quantum nonlinear regime of cavity quantum electrodynamics

    NASA Astrophysics Data System (ADS)

    Wei, Yang-hua; Gu, Wen-ju; Yang, Guoqing; Zhu, Yifu; Li, Gao-xiang

    2018-05-01

    Coherent perfect absorption (CPA) is investigated in the quantum nonlinear regime of cavity quantum electrodynamics (CQED), in which a single two-level atom couples to a single-mode cavity weakly driven by two identical laser fields. In the strong-coupling regime and due to the photon blockade effect, the weakly driven CQED system can be described as a quantum system with three polariton states. CPA is achieved at a critical input field strength when the frequency of the input fields matches the polariton transition frequency. In the quantum nonlinear regime, the incoherent dissipation processes such as atomic and photon decays place a lower bound on the purity of the intracavity quantum field. Our results show that under the CPA condition, the intracavity field always exhibits the quadrature squeezing property manifested by the quantum nonlinearity, and the outgoing photon flux displays a super-Poissonian distribution.

  17. Coronal heating by stochastic magnetic pumping

    NASA Technical Reports Server (NTRS)

    Sturrock, P. A.; Uchida, Y.

    1980-01-01

    Recent observational data cast serious doubt on the widely held view that the Sun's corona is heated by traveling waves (acoustic or magnetohydrodynamic). It is proposed that the energy responsible for heating the corona is derived from the free energy of the coronal magnetic field, which is built up by motion of the 'feet' of magnetic field lines in the photosphere. Stochastic motion of the feet of magnetic field lines leads, on the average, to a linear increase of magnetic free energy with time. This rate of energy input is calculated for a simple model of a single thin flux tube. The model appears to agree well with observational data if the magnetic flux originates in small regions of high magnetic field strength. On combining this energy input with estimates of energy loss by radiation and of energy redistribution by thermal conduction, we obtain scaling laws for density and temperature in terms of length and coronal magnetic field strength.

  18. Experience-Related Changes in Place Cell Responses to New Sensory Configuration That Does Not Occur in the Natural Environment in the Rat Hippocampus

    PubMed Central

    Zou, Dan; Nishimaru, Hiroshi; Matsumoto, Jumpei; Takamura, Yusaku; Ono, Taketoshi; Nishijo, Hisao

    2017-01-01

    The hippocampal formation (HF) is implicated in a comparator that detects sensory conflict (mismatch) among convergent inputs. This suggests that new place cells encoding the new configuration with sensory mismatch develop after the HF learns to accept the new configuration as a match. To investigate this issue, HF CA1 place cell activity in rats was analyzed after the adaptation of the rats to the same sensory mismatch condition. The rats were placed on a treadmill on a stage that was translocated in a figure 8-shaped pathway. We recorded HF neuronal activities under three conditions: (1) an initial control session, in which both the stage and the treadmill moved forward, (2) a backward (mismatch) session, in which the stage was translocated backward while the rats locomoted forward on the treadmill, and (3) the second control session. Of the 161 HF neurons recorded, 56 showed place-differential activity in the HF CA1 subfield. These place-differential activities were categorized into four types: forward-related, backward-related, both-translocation-related, and session-dependent. Forward-related activities showed predominant spatial firings in the forward sessions, while backward-related activities showed predominant spatial firings in the backward sessions. Both-translocation-related activities showed consistent spatial firings in both the forward and backward conditions. On the other hand, session-dependent activities showed different spatial firings across the sessions. Detailed analyses of the place fields indicated that mean place field sizes were larger in the forward-related, backward-related, and both-translocation-related activities than in the session-dependent activities. Furthermore, firing rate distributions in the place fields were negatively skewed and asymmetric, which is similar to place field changes that occur after repeated experience. These results demonstrate that the HF encodes a naturally impossible new configuration of sensory inputs after adaptation, suggesting that the HF is capable of updating its stored memory to accept a new configuration as a match by repeated experience. PMID:28878682

  19. How bees distinguish patterns by green and blue modulation

    PubMed Central

    Horridge, Adrian

    2015-01-01

    In the 1920s, Mathilde Hertz found that trained bees discriminated between shapes or patterns of similar size by something related to total length of contrasting contours. This input is now interpreted as modulation in green and blue receptor channels as flying bees scan in the horizontal plane. Modulation is defined as total contrast irrespective of sign multiplied by length of edge displaying that contrast, projected to vertical, therefore, combining structure and contrast in a single input. Contrast is outside the eye; modulation is a phasic response in receptor pathways inside. In recent experiments, bees trained to distinguish color detected, located, and measured three independent inputs and the angles between them. They are the tonic response of the blue receptor pathway and modulation of small-field green or (less preferred) blue receptor pathways. Green and blue channels interacted intimately at a peripheral level. This study explores in more detail how various patterns are discriminated by these cues. The direction of contrast at a boundary was not detected. Instead, bees located and measured total modulation generated by horizontal scanning of contrasts, irrespective of pattern. They also located the positions of isolated vertical edges relative to other landmarks and distinguished the angular widths between vertical edges by green or blue modulation alone. The preferred inputs were the strongest green modulation signal and angular width between outside edges, irrespective of color. In the absence of green modulation, the remaining cue was a measure and location of blue modulation at edges. In the presence of green modulation, blue modulation was inhibited. Black/white patterns were distinguished by the same inputs in blue and green receptor channels. Left–right polarity and mirror images could be discriminated by retinotopic green modulation alone. Colors in areas bounded by strong green contrast were distinguished as more or less blue than the background. The blue content could also be summed over the whole target. There were no achromatic patterns for bees and no evidence that they detected black, white, or gray levels apart from the differences in blue content or modulation at edges. Most of these cues would be sensitive to background color but some were influenced by changes in illumination. The bees usually learned only to avoid the unrewarded target. Exactly the same preferences of the same inputs were used in the detection of single targets as in discrimination between two targets. PMID:28539796

  20. MOM3D/EM-ANIMATE - MOM3D WITH ANIMATION CODE

    NASA Technical Reports Server (NTRS)

    Shaeffer, J. F.

    1994-01-01

    MOM3D (LAR-15074) is a FORTRAN method-of-moments electromagnetic analysis algorithm for open or closed 3-D perfectly conducting or resistive surfaces. Radar cross section with plane wave illumination is the prime analysis emphasis; however, provision is also included for local port excitation for computing antenna gain patterns and input impedances. The Electric Field Integral Equation form of Maxwell's equations is solved using local triangle couple basis and testing functions with a resultant system impedance matrix. The analysis emphasis is not only for routine RCS pattern predictions, but also for phenomenological diagnostics: bistatic imaging, currents, and near scattered/total electric fields. The images, currents, and near fields are output in form suitable for animation. MOM3D computes the full backscatter and bistatic radar cross section polarization scattering matrix (amplitude and phase), body currents and near scattered and total fields for plane wave illumination. MOM3D also incorporates a new bistatic k space imaging algorithm for computing down range and down/cross range diagnostic images using only one matrix inversion. MOM3D has been made memory and cpu time efficient by using symmetric matrices, symmetric geometry, and partitioned fixed and variable geometries suitable for design iteration studies. MOM3D may be run interactively or in batch mode on 486 IBM PCs and compatibles, UNIX workstations or larger computers. A 486 PC with 16 megabytes of memory has the potential to solve a 30 square wavelength (containing 3000 unknowns) symmetric configuration. Geometries are described using a triangular mesh input in the form of a list of spatial vertex points and a triangle join connection list. The EM-ANIMATE (LAR-15075) program is a specialized visualization program that displays and animates the near-field and surface-current solutions obtained from an electromagnetics program, in particular, that from MOM3D. The EM-ANIMATE program is windows based and contains a user-friendly, graphical interface for setting viewing options, case selection, file manipulation, etc. EM-ANIMATE displays the field and surface-current magnitude as smooth shaded color fields (color contours) ranging from a minimum contour value to a maximum contour value for the fields and surface currents. The program can display either the total electric field or the scattered electric field in either time-harmonic animation mode or in the root mean square (RMS) average mode. The default setting is initially set to the minimum and maximum values within the field and surface current data and can be optionally set by the user. The field and surface-current value are animated by calculating and viewing the solution at user selectable radian time increments between 0 and 2pi. The surface currents can also be displayed in either time-harmonic animation mode or in RMS average mode. In RMS mode, the color contours do not vary with time, but show the constant time averaged field and surface-current magnitude solution. The electric field and surface-current directions can be displayed as scaled vector arrows which have a length proportional to the magnitude at each field grid point or surface node point. These vector properties can be viewed separately or concurrently with the field or surface-current magnitudes. Animation speed is improved by turning off the display of the vector arrows. 
In RMS modes, the direction vectors are still displayed as varying with time since the time averaged direction vectors would be zero length vectors. Other surface properties can optionally be viewed. These include the surface grid, the resistance value assigned to each element of the grid, and the power dissipation of each element which has an assigned resistance value. The EM-ANIMATE program will accept up to 10 different surface current cases each consisting of up to 20,000 node points and 10,000 triangle definitions and will animate one of these cases. The capability is used to compare surface-current distribution due to various initial excitation directions or electric field orientations. The program can accept up to 50 planes of field data consisting of a grid of 100 by 100 field points. These planes of data are user selectable and can be viewed individually or concurrently. With these preset limits, the program requires 55 megabytes of core memory to run. These limits can be changed in the header files to accommodate the available core memory of an individual workstation. An estimate of memory required can be made as follows: approximate memory in bytes equals (number of nodes times number of surfaces times 14 variables times bytes per word, typically 4 bytes per floating point) plus (number of field planes times number of nodes per plane times 21 variables times bytes per word). This gives the approximate memory size required to store the field and surface-current data. The total memory size is approximately 400,000 bytes plus the data memory size. The animation calculations are performed in real time at any user set time step. For Silicon Graphics Workstations that have multiple processors, this program has been optimized to perform these calculations on multiple processors to increase animation rates. The optimized program uses the SGI PFA (Power FORTRAN Accelerator) library. On single processor machines, the parallelization directives are seen as comments to the program and will have no effect on compilation or execution. MOM3D and EM-ANIMATE are written in FORTRAN 77 for interactive or batch execution on SGI series computers running IRIX 3.0 or later. The RAM requirements for these programs vary with the size of the problem being solved. A minimum of 30Mb of RAM is required for execution of EM-ANIMATE; however, the code may be modified to accommodate the available memory of an individual workstation. For EM-ANIMATE, twenty-four bit, double-buffered color capability is suggested, but not required. Sample executables and sample input and output files are provided. Electronic documentation is provided for both EM-ANIMATE and MOM3D in PostScript format. Documentation for EM-ANIMATE is also provided in the form of IRIX man pages. The standard distribution medium for COS-10048 is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. MOM3D and EM-ANIMATE are also available separately as LAR-15074 and LAR-15075, respectively. MOM3D was developed in 1992. EM-ANIMATE was developed in 1993.
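
    The memory rule of thumb quoted above can be restated as a small helper. The sketch below simply implements that arithmetic; the function name and example values are illustrative and not part of the distributed software, and interpreting "number of nodes times number of surfaces" as surface node points times surface-current cases is an assumption.

        def em_animate_memory_bytes(n_nodes, n_surface_cases, n_planes,
                                    nodes_per_plane=100 * 100, bytes_per_word=4):
            # Surface-current data: 14 variables per node per surface-current case.
            surface_bytes = n_nodes * n_surface_cases * 14 * bytes_per_word
            # Field-plane data: 21 variables per grid point per plane.
            field_bytes = n_planes * nodes_per_plane * 21 * bytes_per_word
            # Roughly 400,000 bytes are added for the program itself.
            return 400_000 + surface_bytes + field_bytes

        # Example: 20,000 nodes, 10 surface-current cases, 5 planes of 100 x 100 points.
        print(em_animate_memory_bytes(20_000, 10, 5) / 1e6, "MB (approx.)")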

  1. Spatial dispersion effects upon local excitation of extrinsic plasmons in a graphene micro-disk

    NASA Astrophysics Data System (ADS)

    Mencarelli, D.; Bellucci, S.; Sindona, A.; Pierantoni, L.

    2015-11-01

    Excitation of surface plasmon waves in extrinsic graphene is studied using a full-wave electromagnetic field solver as the analysis engine. Particular emphasis is placed on the role played by spatial dispersion due to the finite size of the two-dimensional material at the micro-scale. A simple, instructive setup is considered in which the near field of a wire antenna is held at a sub-micrometric distance from a disk-shaped graphene patch. The key input of the simulation is the graphene conductivity tensor at terahertz frequencies, modeled by the Boltzmann transport equation for the valence and conduction electrons at the Dirac points (where a linear wave-vector dependence of the band energies is assumed). The conductivity equation is worked out at different levels of approximation, based on the relaxation time ansatz with an additional constraint for particle number conservation. Both drift and diffusion currents are shown to significantly contribute to the spatially dispersive anisotropic features of micro-scale graphene. More generally, spatial dispersion effects are predicted to influence not only plasmon propagation free of external sources, but also typical scanning probe microscopy configurations. The paper focuses on plasmon excitation phenomena induced by near-field probes, a central issue for the design of optical devices and photonic circuits.

  2. 14CO2 in combination with root-exclusion can be used to estimate plant-induced decomposition of soil organic matter

    NASA Astrophysics Data System (ADS)

    Heinonsalo, Jussi; Kulmala, Liisa; Mäkelä, Annikki; Oinonen, Markku; Fontaine, Sebastien; Palonen, Vesa; Pumpanen, Jukka

    2017-04-01

    In ecosystem models, the decomposition of soil organic matter (SOM) is estimated using temperature and moisture as main controlling parameters. However, there is increasing evidence that the decomposition is significantly affected by easily available carbohydrates. The C assimilation by the boreal forest trees will increase in the future due to climate change. As trees allocate large part of assimilated C to roots and soil microorganisms, particularly to ectomycorrhizal fungi, the rhizosphere priming effect (RPE) is assumed to increase. The aim of the experiment was to identify and quantify RPE in the field conditions. We established a three-year long trenching experiment in a boreal Scots pine forest where the belowground C flow from standing pine forest was controlled using root-exclusion with mesh fabrics. The mesh size of 1 μm excluded both tree roots and fungal hyphae and served as priming controls with decreased C supply. The unaltered C input entered the non-trenched field plots. Soil CO2 flux and 14C concentrations were measured. We were able to quantify the RPE in field conditions and show that plant-derived C flow into the soil increases SOM decomposition. Quantification of RPE allows more detailed estimation of soil organic matter decomposition in future changing climate.

  3. A study of the effectiveness and energy efficiency of ultrasonic emulsification.

    PubMed

    Li, Wu; Leong, Thomas S H; Ashokkumar, Muthupandian; Martin, Gregory J O

    2017-12-20

    Three essential experimental parameters in the ultrasonic emulsification process, namely sonication time, acoustic amplitude and processing volume, were individually investigated, theoretically and experimentally, and correlated to the emulsion droplet sizes produced. The results showed that with a decrease in droplet size, two kinetic regions can be separately correlated prior to reaching a steady state droplet size: a fast size reduction region and a steady state transition region. In the fast size reduction region, the power input and sonication time could be correlated to the volume-mean diameter by a power-law relationship, with separate power-law indices of -1.4 and -1.1, respectively. A proportional relationship was found between droplet size and processing volume. The effectiveness and energy efficiency of droplet size reduction was compared between ultrasound and high-pressure homogenisation (HPH) based on both the effective power delivered to the emulsion and the total electric power consumed. Sonication could produce emulsions across a broad range of sizes, while high-pressure homogenisation was able to produce emulsions at the smaller end of the range. For ultrasonication, the energy efficiency was higher at increased power inputs due to more effective droplet breakage at high ultrasound intensities. For HPH the consumed energy efficiency was improved by operating at higher pressures for fewer passes. At the laboratory scale, the ultrasound system required less electrical power than HPH to produce an emulsion of comparable droplet size. The energy efficiency of HPH is greatly improved at large scale, which may also be true for larger scale ultrasonic reactors.
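
    The reported power-law indices can be turned into a simple scaling estimate. The sketch below applies the indices from the abstract (-1.4 for power input, -1.1 for sonication time) to a reference condition in the fast size-reduction region; the reference diameter, power and time are made up, and treating the two scalings as multiplicatively separable is an assumption, not a result from the paper.

        def droplet_diameter(power_w, time_s, d_ref=10.0, p_ref=50.0, t_ref=10.0,
                             power_index=-1.4, time_index=-1.1):
            # Volume-mean droplet diameter scaled from a reference condition using
            # the power-law indices reported for the fast size-reduction region.
            return d_ref * (power_w / p_ref) ** power_index * (time_s / t_ref) ** time_index

        # Doubling the delivered power shrinks the droplets by about 2**1.4 (~2.6x),
        # doubling the sonication time by about 2**1.1 (~2.1x), before steady state.
        print(droplet_diameter(50, 10), droplet_diameter(100, 10), droplet_diameter(50, 20))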

  4. Re-Assessing the Measurement of Fogwater Inputs to a Tropical Ecosystem

    NASA Astrophysics Data System (ADS)

    Burkard, R.; Eugster, W.; Holwerda, F.; Bruijnzeel, S.; Scatena, F.; Siegwolf, R.

    2002-12-01

    For several years the hydrological importance of fog- and cloudwater deposition to ecosystems in the tropics has been of great interest. In earlier studies carried out in the humid tropics the amount of deposited cloudwater was estimated by indirect methods based on the physical characteristics of the utilized cloudwater collector. In the temperate climatic zone of central Europe most of the studies dealing with cloudwater focus on the additional chemical input due to cloudwater in relation to the amount of deposited rainwater. During our experiment in the Luquillo mountains of Puerto Rico the different aspects of the chemical and hydrological impacts of cloudwater deposition have been investigated. During 43 days, cloudwater fluxes were measured with an eddy covariance setup consisting of a Solent ultrasonic anemometer and a size-resolving cloud droplet spectrometer. Cloudwater samples were taken with a Caltech-type active strand cloudwater collector. Additionally, measurements of rain, throughfall and stemflow were performed. Samples of fog, rain, throughfall and stemflow were analyzed for inorganic ion and stable isotope concentrations (δ18O and δ2H). A first analysis of the hydrological input shows that there are significant differences in the deposited amount of cloudwater as measured with our instruments in comparison with previous studies carried out at the same location: Mean liquid water content was 78.6 mg m-3 during situations with a visibility below 1000 m (84% of the entire field campaign). The deposition rate of cloudwater was 0.88 mm d-1. A mismatch was found regarding the water balance. We conclude from this that the rainfall amount, and therefore also the chemical input by rain, is strongly underestimated due to wind-driven rain, which is not measured by standard rain gauges. Depending on the reference value, we have to conclude that the deposition of cloudwater accounts for 6-11% of wet deposition.

  5. A Self-Organizing Map-Based Approach to Generating Reduced-Size, Statistically Similar Climate Datasets

    NASA Astrophysics Data System (ADS)

    Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.

    2015-12-01

    Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data leads to high-fidelity results, their processing can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when rapid response is needed in crisis management and disaster planning scenarios resulting from release of toxic material in the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest to produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from it by percentile. Using a 30-node SOM, with sampling every 20th percentile, we have been able to reduce 30 years of the Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap of statistical exceedance between the reduced data set and the full 30-year CFSR dataset. Using the MOE, we find that our SOM-derived climate subset produces statistics that fall within 85-90% overlap with the full set while using only 15% of the total data length, and consequently, 15% of the computational time required to run the T&D model for the full period.
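
    The "typical day" selection step described above (nearest-node assignment followed by percentile sampling within each node) can be sketched in a few lines. In the sketch below the SOM node vectors are random stand-ins for an already-trained map, and the training of the SOM itself is not shown; array shapes, names and the sampling helper are all illustrative assumptions.

        import numpy as np

        def select_typical_days(days, som_nodes, percentiles=(0, 20, 40, 60, 80)):
            # Assign each day (feature vector) to its closest SOM node, rank the
            # members of each node by distance, and sample them by percentile.
            dists = np.linalg.norm(days[:, None, :] - som_nodes[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)
            selected = []
            for node in range(som_nodes.shape[0]):
                members = np.where(nearest == node)[0]
                if members.size == 0:
                    continue
                order = members[np.argsort(dists[members, node])]   # closest first
                for p in percentiles:
                    idx = min(int(p / 100 * (order.size - 1) + 0.5), order.size - 1)
                    selected.append(order[idx])
            return np.unique(selected)

        # ~30 "years" of synthetic October predictors (930 days, 12 features), 30 nodes.
        rng = np.random.default_rng(0)
        days = rng.standard_normal((930, 12))
        nodes = days[rng.choice(930, size=30, replace=False)]   # stand-in for a trained SOM
        print(len(select_typical_days(days, nodes)), "typical days selected")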

  6. User's manual for MacPASCO

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Davis, R. C.

    1992-01-01

    A user's manual is presented for MacPASCO, which is an interactive, graphic preprocessor for panel design. MacPASCO creates input for PASCO, an existing computer code for structural analysis and sizing of longitudinally stiffened composite panels. MacPASCO provides a graphical user interface which simplifies the specification of panel geometry and reduces user input errors. The user draws the initial structural geometry on the computer screen, then uses a combination of graphic and text inputs to: refine the structural geometry; specify information required for analysis such as panel load and boundary conditions; and define design variables and constraints for minimum mass optimization. Only the use of MacPASCO is described, since the use of PASCO has been documented elsewhere.

  7. Mechanism of phase control in a klystron-like relativistic backward wave oscillator by an input signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Renzhen; Song, Zhimin; Deng, Yuqun

    Theoretical analyses and particle-in-cell (PIC) simulations are carried out to understand the mechanism of microwave phase control realized by the external RF signal in a klystron-like relativistic backward wave oscillator (RBWO). Theoretical calculations show that a modulated electron beam can lead the microwave field with an arbitrary initial phase to the same equilibrium phase, which is determined by the phase factor of the modulated current, and the difference between them is fixed. Furthermore, PIC simulations demonstrate that the phase of the input signal has a close relation to that of the modulated current, which initiates the phase of the irregular microwave during the build-up of oscillation. Since the microwave field is weak during the early stage of oscillation start-up, it is easily influenced, and a small input signal is sufficient to control the phase of the output microwave. For the klystron-like RBWO with two pre-modulation cavities and a reentrant input cavity, an input signal with 100 kW power and 4.21 GHz frequency can control the phase of the 5 GW output microwave with a relative phase difference of less than 6% when the diode voltage is 760 kV and the beam current is 9.8 kA, corresponding to a power ratio of output microwave to input signal of 47 dB.

  8. Collection Building for Interdisciplinary Research: An Analysis of Input/Output Factors.

    ERIC Educational Resources Information Center

    Wilson, Myoung Chung; Edelman, Hendrik

    Collection development and management in academic libraries continue to present a considerable challenge, especially in interdisciplinary fields. In order to ascertain patterns of interdisciplinary research, including the patterns of demand for bibliographic resources, this study analyzes the input/output factors that are related to the research…

  9. A Multifactor Approach to Research in Instructional Technology.

    ERIC Educational Resources Information Center

    Ragan, Tillman J.

    In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…

  10. Design, Implementation and Evaluation of an Operating System for a Network of Transputers.

    DTIC Science & Technology

    1987-06-01

    WHILE TRUE -- listen to link1 SEQ -- receiving the header BYTE.SLICE.INPUT (link1, header1, 1, header.size) -- decoding the block size block.size[0] ... -- I'm done BYTE.SLICE.OUTPUT (screen[0], header0, 3, 1) WHILE TRUE -- listen to link1 SEQ -- receiving the header BYTE.SLICE.INPUT (link1, header1, 1

  11. On-chip photonic transistor based on the spike synchronization in circuit QED

    NASA Astrophysics Data System (ADS)

    Gül, Yusuf

    2018-03-01

    We consider a single-photon transistor in a coupled-cavity system of resonators interacting simultaneously with a multilevel superconducting artificial atom. An effective single-mode transformation is used for the diagonalization of the Hamiltonian and for impedance matching in terms of the normal modes. Storage and transmission of the incident field are described by the interactions between the cavities controlling the atomic transitions of the lowest-lying states. Rabi splitting of vacuum-induced multiphoton transitions is considered in the input/output relations via the quadrature operators in the absence of the input field. Second-order coherence functions are employed to investigate the photon blockade and delocalization-localization transitions of the cavity fields. Spontaneous virtual photon conversion into real photons is investigated in the localized and oscillating regimes. Reflection and transmission of the cavity output fields are investigated in the presence of the multilevel transitions. Accumulation and firing of the reflected and transmitted fields are used to investigate the synchronization of the bunching spike train of the transmitted field and the population imbalance of the cavity fields. In the presence of a single-photon gate field, gain enhancement is explained for the transmitted regime.

  12. Improving alpine-region spectral unmixing with optimal-fit snow endmembers

    NASA Technical Reports Server (NTRS)

    Painter, Thomas H.; Roberts, Dar A.; Green, Robert O.; Dozier, Jeff

    1995-01-01

    Surface albedo and snow-covered-area (SCA) are crucial inputs to the hydrologic and climatologic modeling of alpine and seasonally snow-covered areas. Because the spectral albedo and thermal regime of pure snow depend on grain size, areal distribution of snow grain size is required. Remote sensing has been shown to be an effective (and necessary) means of deriving maps of grain size distribution and snow-covered-area. Developed here is a technique whereby maps of grain size distribution improve estimates of SCA from spectral mixture analysis with AVIRIS data.
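
    Spectral mixture analysis of the kind mentioned above amounts to solving, per pixel, for the non-negative endmember fractions that best reproduce the observed spectrum. The sketch below shows that generic unmixing step with made-up endmember and pixel spectra; it is not the paper's optimal-fit selection of snow endmembers across grain sizes.

        import numpy as np
        from scipy.optimize import nnls

        def unmix_pixel(endmembers, pixel):
            # endmembers: (bands, n_endmembers) reference spectra; pixel: (bands,).
            # Non-negative least squares gives fractions >= 0; normalise to sum to 1.
            fractions, residual = nnls(endmembers, pixel)
            return fractions / fractions.sum(), residual

        bands = np.linspace(0.4, 2.4, 50)                # wavelengths in micrometres
        snow = np.clip(1.1 - 0.35 * bands, 0, 1)         # bright, falls off in the SWIR
        rock = np.full_like(bands, 0.25)                 # flat, dark spectrum
        veg = 0.05 + 0.45 * (bands > 0.7)                # step at the red edge
        E = np.column_stack([snow, rock, veg])

        observed = (0.6 * snow + 0.3 * rock + 0.1 * veg
                    + 0.01 * np.random.default_rng(2).standard_normal(50))
        fractions, residual = unmix_pixel(E, observed)
        print("estimated fractions (snow, rock, veg):", np.round(fractions, 2))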

  13. Can dust emission mechanisms be determined from field measurements?

    NASA Astrophysics Data System (ADS)

    Klose, Martina; Webb, Nicholas; Gill, Thomas E.; Van Pelt, Scott; Okin, Gregory

    2017-04-01

    Field observations are needed to develop and test theories on dust emission for use in dust modeling systems. The dust emission mechanism (aerodynamic entrainment, saltation bombardment, aggregate disintegration) as well as the amount and particle-size distribution of emitted dust may vary under sediment supply- and transport-limited conditions. This variability, which is caused by heterogeneity of the surface and the atmosphere, cannot be fully captured in either field measurements or models. However, uncertainty in dust emission modeling can be reduced through more detailed observational data on the dust emission mechanism itself. To date, most measurements do not provide enough information to allow for a determination of the mechanisms leading to dust emission and often focus on a small variety of soil and atmospheric settings. Additionally, data sets are often not directly comparable due to different measurement setups. As a consequence, the calibration of dust emission schemes has so far relied on a selective set of observations, which leads to an idealization of the emission process in models and thus affects dust budget estimates. Here, we will present results of a study which aims to decipher the dust emission mechanism from field measurements as an input for future model development. Detailed field measurements are conducted, which allow for a comparison of dust emission for different surface and atmospheric conditions. Measurements include monitoring of the surface, loose erodible material, transported sediment, and meteorological data, and are conducted in different environmental settings in the southwestern United States. Based on the field measurements, a method is developed to differentiate between the different dust emission mechanisms.

  14. Report on the B-Fields at NIF Workshop Held at LLNL October 12-13, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, K. B.; Moody, J. D.

    2015-12-13

    A national ICF laboratory workshop on requirements for a magnetized target capability on NIF was held by NIF at LLNL on October 12 and 13, attended by experts from LLNL, SNL, LLE, LANL, GA, and NRL. Advocates for indirect drive (LLNL), magnetic (Z) drive (SNL), polar direct drive (LLE), and basic science needing applied B (many institutions) presented and discussed requirements for the magnetized target capabilities they would like to see. A 30 T capability was most frequently requested. A phased operation increasing the field in steps experimentally can be envisioned. The NIF management will take the inputs from the scientific community represented at the workshop and recommend pulse-powered magnet parameters for NIF that best meet the collective user requests. In parallel, LLNL will continue investigating magnets for future generations that might be powered by compact laser-B-field generators (Moody, Fujioka, Santos, Woolsey, Pollock). The NIF facility engineers will start to analyze compatibility of the recommended pulsed magnet parameters (size, field, rise time, materials) with NIF chamber constraints, diagnostic access, and final optics protection against debris in FY16. The objective of this assessment will be to develop a schedule for achieving an initial B-field capability. Based on an initial assessment, room temperature magnetized gas capsules will be fielded on NIF first. Magnetized cryo-ice-layered targets will take longer (more compatibility issues). Magnetized wetted foam DT targets (Olson) may have somewhat fewer compatibility issues, making them a more likely choice for the first cryo-ice-layered target fielded with applied Bz.

  15. Reserve growth of the world's giant oil fields

    USGS Publications Warehouse

    Klett, T.R.; Schmoker, J.W.

    2005-01-01

    Analysis of estimated total recoverable oil volume (field size) of 186 well-known giant oil fields of the world (>0.5 billion bbl of oil, discovered prior to 1981), exclusive of the United States and Canada, demonstrates general increases in field sizes through time. Field sizes were analyzed as a group and within subgroups of the Organization of Petroleum Exporting Countries (OPEC) and non-OPEC countries. From 1981 through 1996, the estimated volume of oil in the 186 fields for which adequate data were available increased from 617 billion to 777 billion bbl of oil (26%). Processes other than new field discoveries added an estimated 160 billion bbl of oil to known reserves in this subset of the world's oil fields. Although methods for estimating field sizes vary among countries, estimated sizes of the giant oil fields of the world increased, probably for many of the same reasons that estimated sizes of oil fields in the United States increased over the same time period. Estimated volumes in OPEC fields increased from a total of 550 billion to 668 billion bbl of oil and volumes in non-OPEC fields increased from 67 billion to 109 billion bbl of oil. In terms of percent change, non-OPEC field sizes increased more than OPEC field sizes (63% versus 22%). The changes in estimated total recoverable oil volumes that occurred within three 5-year increments between 1981 and 1996 were all positive. Between 1981 and 1986, the increase in estimated total recoverable oil volume within the 186 giant oil fields was 11 billion bbl of oil; between 1986 and 1991, the increase was 120 billion bbl of oil; and between 1991 and 1996, the increase was 29 billion bbl of oil. Fields in both OPEC and non-OPEC countries followed trends of substantial reserve growth.

  16. How important is the spatiotemporal structure of a rainfall field when generating a streamflow hydrograph? An investigation using Reverse Hydrology

    NASA Astrophysics Data System (ADS)

    Kretzschmar, Ann; Tych, Wlodek; Beven, Keith; Chappell, Nick

    2017-04-01

    Flooding is the most widely occurring natural disaster affecting thousands of lives and businesses worldwide each year, and the size and frequency of flood-events are predicted to increase with climate change. The main input variable for models used in flood prediction is rainfall. Estimating the rainfall input is often based on a sparse network of raingauges, which may or may not be representative of the salient rainfall characteristics responsible for generating storm hydrographs. A method based on Reverse Hydrology (Kretzschmar et al 2014 Environ Modell Softw) has been developed and is being tested using the intensively-instrumented Brue catchment (Southwest England) to explore the spatiotemporal structure of the rainfall-field (using 23 rain gauges over the 135.2 km2 basin). We compare how well the rainfall measured at individual gauges, or averaged over the basin, represents the rainfall inferred from the streamflow signal. How important is it to get the detail of the spatiotemporal rainfall structure right? Rainfall is transformed by catchment processes as it moves to streams, so exact duplication of the structure may not be necessary. 'True' rainfall estimated using 23 gauges / 135.2 km2 is likely to be a good estimate of the overall catchment rainfall; however, the integration process 'smears' the rainfall patterns in time, i.e. reduces the number of and lengthens rain-events as they travel across the catchment. This may have little impact on the simulation of stream-hydrographs when events are extensive across the catchment (e.g., frontal rainfall events) but may be significant for high-intensity, localised convective events. The Reverse Hydrology approach uses the streamflow record to infer a rainfall sequence with a lower time-resolution than the original input time-series. The inferred rainfall series is, however, able to simulate streamflow as well as the observed, high resolution rainfall (Kretzschmar et al 2015 Hydrol Res). Most gauged catchments in the UK of a similar size would only have data available for 1 to 3 raingauges. The high density of the Brue raingauge network allows a good estimate of the 'True' catchment rainfall to be made and compared with data from an individual raingauge as if that was the only data available. In addition, the rainfall from each raingauge is compared with rainfall inferred from streamflow using data from the selected individual raingauge, and also inferred from the full catchment network. The stochastic structure of the rainfall from all of these datasets is compared using a combination of traditional statistical measures, i.e., the first 4 moments of rainfall totals and their residuals; plus the number, length and distribution of wet and dry periods; rainfall intensity characteristics; and their ability to generate the observed stream hydrograph. Reverse Hydrology, which utilises information present in both the input rainfall and the output hydrograph, has provided a method of investigating the quality of the information each gauge adds to the catchment-average (Kretzschmar et al 2016 Procedia Eng.). Further, it has been used to ascertain how important reproducing the detailed rainfall structure really is when used for flow prediction.

  17. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time-domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  18. Computing Shapes Of Cascade Diffuser Blades

    NASA Technical Reports Server (NTRS)

    Tran, Ken; Prueger, George H.

    1993-01-01

    Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on the 65-series data base of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.

  19. A respiratory compensating system: design and performance evaluation.

    PubMed

    Chuang, Ho-Chiao; Huang, Ding-Yang; Tien, Der-Chi; Wu, Ren-Hong; Hsu, Chung-Hsien

    2014-05-08

    This study proposes a respiratory compensating system which is mounted on the top of the treatment couch for reverse motion, opposite from the direction of the targets (diaphragm and hemostatic clip), in order to offset organ displacement generated by respiratory motion. Traditionally, in the treatment of cancer patients, doctors must increase the field size for radiation therapy of tumors because organs move with respiratory motion, which causes radiation-induced inflammation on the normal tissues (organ at risk (OAR)) while killing cancer cells, and thereby reducing the patient's quality of life. This study uses a strain gauge as a respiratory signal capture device to obtain abdomen respiratory signals, a proposed respiratory simulation system (RSS) and respiratory compensating system to experiment how to offset the organ displacement caused by respiratory movement and compensation effect. This study verifies the effect of the respiratory compensating system in offsetting the target displacement using two methods. The first method uses linac (medical linear accelerator) to irradiate a 300 cGy dose on the EBT film (GAFCHROMIC EBT film). The second method uses a strain gauge to capture the patients' respiratory signals, while using fluoroscopy to observe in vivo targets, such as a diaphragm, to enable the respiratory compensating system to offset the displacements of targets in superior-inferior (SI) direction. Testing results show that the RSS position error is approximately 0.45 ~ 1.42 mm, while the respiratory compensating system position error is approximately 0.48 ~ 1.42 mm. From the EBT film profiles based on different input to the RSS, the results suggest that when the input respiratory signals of RSS are sine wave signals, the average dose (%) in the target area is improved by 1.4% ~ 24.4%, and improved in the 95% isodose area by 15.3% ~ 76.9% after compensation. If the respiratory signals input into the RSS respiratory signals are actual human respiratory signals, the average dose (%) in the target area is improved by 31.8% ~ 67.7%, and improved in the 95% isodose area by 15.3% ~ 86.4% (the above rates of improvements will increase with increasing respiratory motion displacement) after compensation. The experimental results from the second method suggested that about 67.3% ~ 82.5% displacement can be offset. In addition, gamma passing rate after compensation can be improved to 100% only when the displacement of the respiratory motion is within 10 ~ 30 mm. This study proves that the proposed system can contribute to the compensation of organ displacement caused by respiratory motion, enabling physicians to use lower doses and smaller field sizes in the treatment of tumors of cancer patients.

  20. A respiratory compensating system: design and performance evaluation

    PubMed Central

    Huang, Ding‐Yang; Tien, Der‐Chi; Wu, Ren‐Hong; Hsu, Chung‐Hsien

    2014-01-01

    This study proposes a respiratory compensating system which is mounted on the top of the treatment couch for reverse motion, opposite from the direction of the targets (diaphragm and hemostatic clip), in order to offset organ displacement generated by respiratory motion. Traditionally, in the treatment of cancer patients, doctors must increase the field size for radiation therapy of tumors because organs move with respiratory motion, which causes radiation‐induced inflammation on the normal tissues (organ at risk (OAR)) while killing cancer cells, and thereby reducing the patient's quality of life. This study uses a strain gauge as a respiratory signal capture device to obtain abdomen respiratory signals, a proposed respiratory simulation system (RSS) and respiratory compensating system to experiment how to offset the organ displacement caused by respiratory movement and compensation effect. This study verifies the effect of the respiratory compensating system in offsetting the target displacement using two methods. The first method uses linac (medical linear accelerator) to irradiate a 300 cGy dose on the EBT film (GAFCHROMIC EBT film). The second method uses a strain gauge to capture the patients' respiratory signals, while using fluoroscopy to observe in vivo targets, such as a diaphragm, to enable the respiratory compensating system to offset the displacements of targets in superior‐inferior (SI) direction. Testing results show that the RSS position error is approximately 0.45 ~ 1.42 mm, while the respiratory compensating system position error is approximately 0.48 ~ 1.42 mm. From the EBT film profiles based on different input to the RSS, the results suggest that when the input respiratory signals of RSS are sine wave signals, the average dose (%) in the target area is improved by 1.4% ~ 24.4%, and improved in the 95% isodose area by 15.3% ~ 76.9% after compensation. If the respiratory signals input into the RSS respiratory signals are actual human respiratory signals, the average dose (%) in the target area is improved by 31.8% ~ 67.7%, and improved in the 95% isodose area by 15.3% ~ 86.4% (the above rates of improvements will increase with increasing respiratory motion displacement) after compensation. The experimental results from the second method suggested that about 67.3% ~ 82.5% displacement can be offset. In addition, gamma passing rate after compensation can be improved to 100% only when the displacement of the respiratory motion is within 10 ~ 30 mm. This study proves that the proposed system can contribute to the compensation of organ displacement caused by respiratory motion, enabling physicians to use lower doses and smaller field sizes in the treatment of tumors of cancer patients. PACS number: 87.19. Wx; 87.55. Km PMID:24892345

  1. Performance of McRAS-AC in the GEOS-5 AGCM: Part 1, Aerosol-Activated Cloud Microphysics, Precipitation, Radiative Effects, and Circulation

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Lee, D.; Oreopoulos, L.; Barahona, D.; Nenes, A.; Suarez, M. J.

    2012-01-01

    A revised version of the Microphysics of clouds with Relaxed Arakawa-Schubert and Aerosol-Cloud interaction (McRAS-AC), including, among others, the Barahona and Nenes ice nucleation parameterization, is implemented in the GEOS-5 AGCM. Various fields from a 10-year long integration of the AGCM with McRAS-AC were compared with their counterparts from an integration of the baseline GEOS-5 AGCM, and with satellite data as observations. Generally, McRAS-AC reduced biases in the cloud fields, and the cloud radiative effects are much better over most regions of the Earth. Two weaknesses are identified in the McRAS-AC runs, namely, too few cloud particles around 40S-60S, and too high cloud water path during northern hemisphere summer over the Gulf Stream and North Pacific. Sensitivity analyses showed that these biases potentially originated from biases in the aerosol input. The first bias is largely eliminated in a sensitivity test using 50% smaller aerosol particles, while the second bias is much reduced when interactive aerosol chemistry was turned on. The main drawback of McRAS-AC is a dearth of low-level marine stratus clouds, probably due to the lack of dry convection, which is not yet implemented in the cloud scheme. Despite these biases, McRAS-AC simulates realistic clouds and optical properties that can improve with better aerosol input, and its prediction of cloud particle number concentration and effective particle size for both convective and stratiform clouds is quite realistic; it therefore has the potential to be a valuable tool for climate modeling research because of its ability to simulate aerosol indirect effects.

  2. Real-time image restoration for iris recognition systems.

    PubMed

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques have shown high levels of accuracy because unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real-time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those results achieved without restoration or those achieved using previous iris-restoration methods.
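
    The restoration step named above can be illustrated with a generic constrained least squares (CLS) deconvolution in the frequency domain, F_hat = conj(H) G / (|H|^2 + gamma |P|^2), where H is the blur transfer function and P the Laplacian regulariser. The sketch below is that generic filter applied to a toy defocus blur; the PSF-parameter estimation from focus scores and the blur-dependent choice of the regularisation weight described in the paper are not reproduced, and all sizes and values are assumptions.

        import numpy as np

        def cls_restore(blurred, psf, gamma=0.01):
            # Pad the PSF to image size and centre it at the origin so its FFT gives H.
            psf_pad = np.zeros_like(blurred, dtype=float)
            psf_pad[:psf.shape[0], :psf.shape[1]] = psf
            psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
            H = np.fft.fft2(psf_pad)

            lap = np.zeros_like(blurred, dtype=float)      # discrete Laplacian kernel
            lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
            lap = np.roll(lap, (-1, -1), axis=(0, 1))
            P = np.fft.fft2(lap)

            G = np.fft.fft2(blurred)
            F_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
            return np.real(np.fft.ifft2(F_hat))

        # Toy demonstration: blur a square with a Gaussian defocus PSF (circular
        # convolution via FFT), then restore it.  Sizes and gamma are arbitrary.
        y, x = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
        psf /= psf.sum()
        image = np.zeros((128, 128))
        image[48:80, 48:80] = 1.0
        psf_full = np.zeros_like(image)
        psf_full[:15, :15] = psf
        psf_full = np.roll(psf_full, (-7, -7), axis=(0, 1))
        blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf_full)))
        restored = cls_restore(blurred, psf, gamma=0.005)
        print("RMS error: blurred %.3f, restored %.3f" % (
            np.sqrt(np.mean((blurred - image) ** 2)),
            np.sqrt(np.mean((restored - image) ** 2))))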

  3. Graded, Dynamically Routable Information Processing with Synfire-Gated Synfire Chains.

    PubMed

    Wang, Zhuo; Sornborger, Andrew T; Tao, Louis

    2016-06-01

    Coherent neural spiking and local field potentials are believed to be signatures of the binding and transfer of information in the brain. Coherent activity has now been measured experimentally in many regions of mammalian cortex. Recently experimental evidence has been presented suggesting that neural information is encoded and transferred in packets, i.e., in stereotypical, correlated spiking patterns of neural activity. Due to their relevance to coherent spiking, synfire chains are one of the main theoretical constructs that have been appealed to in order to describe coherent spiking and information transfer phenomena. However, for some time, it has been known that synchronous activity in feedforward networks asymptotically either approaches an attractor with fixed waveform and amplitude, or fails to propagate. This has limited the classical synfire chain's ability to explain graded neuronal responses. Recently, we have shown that pulse-gated synfire chains are capable of propagating graded information coded in mean population current or firing rate amplitudes. In particular, we showed that it is possible to use one synfire chain to provide gating pulses and a second, pulse-gated synfire chain to propagate graded information. We called these circuits synfire-gated synfire chains (SGSCs). Here, we present SGSCs in which graded information can rapidly cascade through a neural circuit, and show a correspondence between this type of transfer and a mean-field model in which gating pulses overlap in time. We show that SGSCs are robust in the presence of variability in population size, pulse timing and synaptic strength. Finally, we demonstrate the computational capabilities of SGSC-based information coding by implementing a self-contained, spike-based, modular neural circuit that is triggered by streaming input, processes the input, then makes a decision based on the processed information and shuts itself down.

  4. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    PubMed

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Assessing the efficiency of hospital pharmacy services in Thai public district hospitals.

    PubMed

    Rattanachotphanit, Thananan; Limwattananon, Chulaporn; Limwattananon, Supon; Johns, Jeff R; Schommer, Jon C; Brown, Lawrence M

    2008-07-01

    The purpose of this study was to assess the efficiency of hospital pharmacy services and to determine the environmental factors affecting pharmacy service efficiency. Technical efficiency (TE) was assessed to evaluate a hospital's ability to use pharmacy manpower to produce the maximum output of the pharmacy service. Data Envelopment Analysis (DEA) was used as the efficiency measure. The two labor inputs were pharmacists and support personnel, and the ten outputs came from four pharmacy activities: drug dispensing, drug purchasing and inventory control, patient-oriented activities, and health consumer protection services; these inputs and outputs were used to estimate technical efficiency. A Tobit regression model was used to determine the effect of hospital size, location, input mix of pharmacy staff, working experience of pharmacists at the study hospitals, and use of technology on pharmacy service efficiency. Data for pharmacy service input and output quantities were obtained from 155 respondents. Nineteen percent of respondents were found to have full efficiency, with a technical efficiency score of 1.00. Thirty-six percent had a score of 0.80 or above, and 27% had a low score (< 0.60). The average TE score increased with hospital size (0.60, 0.71, 0.75, and 0.83 in 10-, 30-, 60-, and 90-120-bed hospitals, respectively). Hospital size and geographic location were significantly associated with pharmacy service efficiency.
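    As an illustration of how such technical efficiency scores are computed, the sketch below sets up a generic input-oriented, constant-returns-to-scale DEA (CCR) model with scipy; the orientation, returns-to-scale assumption, and toy data layout are illustrative assumptions rather than the study's exact specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, unit):
    """Input-oriented, constant-returns-to-scale DEA (CCR) efficiency score.

    X : (n_units, n_inputs) inputs, e.g. pharmacist and support-staff FTEs
    Y : (n_units, n_outputs) outputs, e.g. the ten pharmacy service outputs
    unit : index of the decision-making unit being evaluated

    Decision variables are [theta, lambda_1 ... lambda_n]; minimize theta
    subject to  X^T lambda <= theta * x_unit  and  Y^T lambda >= y_unit.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta

    # Input constraints:  sum_j lambda_j x_j - theta * x_unit <= 0
    A_in = np.c_[-X[unit].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j y_j <= -y_unit
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[unit]

    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun   # technical efficiency score in (0, 1]

# Example: scores for all units
# scores = [dea_efficiency(X, Y, j) for j in range(X.shape[0])]
```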

  6. Diagnostics of Loss of Coolant Accidents Using SVC and GMDH Models

    NASA Astrophysics Data System (ADS)

    Lee, Sung Han; No, Young Gyu; Na, Man Gyun; Ahn, Kwang-Il; Park, Soo-Yong

    2011-02-01

    As a means of effectively managing severe accidents at nuclear power plants, it is important to identify and diagnose accident-initiating events within a short time interval after they occur by observing the major measured signals. The main objective of this study was to diagnose loss of coolant accidents (LOCAs) using artificial intelligence techniques, such as SVC (support vector classification) and GMDH (group method of data handling). The SVC and GMDH models were used to identify the break location and to estimate the break size of the LOCA, respectively. Three hundred accident simulation data sets (based on MAAP4) were used to develop the SVC and GMDH models, and 33 independent test data sets were used to confirm that the models work well. The measured signals from the reactor coolant system, steam generators, and containment at a nuclear power plant were used as model inputs in the form of their 60 s time-integrated values. The simulation results confirmed that the proposed SVC model can identify the break location and that the proposed GMDH models can accurately estimate the break size. In addition, even when measurement errors exist and safety systems actuate, the proposed SVC and GMDH models can identify the break location without misclassification and accurately estimate the break size.
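    A minimal sketch of the two-stage scheme is given below: a support vector classifier for the break location and a per-location regression for the break size. Since GMDH has no standard scikit-learn implementation, a quadratic polynomial ridge regression is used here as a rough stand-in for the GMDH polynomial network; the data shapes, labels, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Ridge

# X: (n_scenarios, n_signals) 60-s time-integrated plant signals (RCS, SG,
#    containment); y_loc: break-location labels; y_size: break sizes.
rng = np.random.default_rng(0)
X = rng.random((300, 12))                 # placeholder for simulation data
y_loc = rng.integers(0, 3, 300)           # e.g. three candidate break locations
y_size = rng.random(300)                  # normalized break size

# Stage 1: classify the break location with a support vector classifier.
loc_model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
loc_model.fit(X, y_loc)

# Stage 2: estimate the break size.  GMDH builds a self-organizing network of
# low-order polynomials; a quadratic ridge regression is used here as a rough
# stand-in, trained per break location (the abstract refers to several GMDH
# models).
size_models = {}
for loc in np.unique(y_loc):
    mask = y_loc == loc
    model = make_pipeline(StandardScaler(), PolynomialFeatures(2), Ridge(1.0))
    size_models[loc] = model.fit(X[mask], y_size[mask])

x_new = X[:1]
loc_pred = loc_model.predict(x_new)[0]
size_pred = size_models[loc_pred].predict(x_new)[0]
print(loc_pred, size_pred)
```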

  7. Molecular density functional theory of water describing hydrophobicity at short and long length scales

    NASA Astrophysics Data System (ADS)

    Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2013-10-01

    We present an extension of our recently introduced molecular density functional theory of water [G. Jeanmairet et al., J. Phys. Chem. Lett. 4, 619 (2013)] to the solvation of hydrophobic solutes of various sizes, going from angstroms to nanometers. The theory is based on the quadratic expansion of the excess free energy in terms of two classical density fields: the particle density and the multipolar polarization density. Its implementation requires as input a molecular model of water and three measurable bulk properties, namely, the structure factor and the k-dependent longitudinal and transverse dielectric susceptibilities. The fine three-dimensional water structure around small hydrophobic molecules is found to be well reproduced. In contrast, the computed solvation free-energies appear overestimated and do not exhibit the correct qualitative behavior when the hydrophobic solute is grown in size. These shortcomings are corrected, in the spirit of the Lum-Chandler-Weeks theory, by complementing the functional with a truncated hard-sphere functional acting beyond quadratic order in density, and making the resulting functional compatible with the Van-der-Waals theory of liquid-vapor coexistence at long range. Compared to available molecular simulations, the approach yields reasonable solvation structure and free energy of hard or soft spheres of increasing size, with a correct qualitative transition from a volume-driven to a surface-driven regime at the nanometer scale.
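    Schematically, a Gaussian-level (quadratic) solvent free energy of this kind couples the density and polarization fluctuations to the three measured bulk inputs; one illustrative Fourier-space form is shown below, with the exact prefactors, cross-couplings, and sign conventions deferred to the cited papers rather than this sketch.

```latex
\Delta F[\Delta n,\mathbf{P}] \approx
\frac{k_{\mathrm B}T}{2}\int\!\frac{d\mathbf{k}}{(2\pi)^{3}}\,
\frac{|\Delta\hat n(\mathbf{k})|^{2}}{n_{b}\,S(k)}
\;+\;
\frac{1}{2}\int\!\frac{d\mathbf{k}}{(2\pi)^{3}}
\left[\frac{|\hat P_{L}(\mathbf{k})|^{2}}{\epsilon_{0}\,\chi_{L}(k)}
      +\frac{|\hat P_{T}(\mathbf{k})|^{2}}{\epsilon_{0}\,\chi_{T}(k)}\right]
```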

  8. NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention

    Cancer.gov

    [[{"fid":"2476","view_mode":"default","fields":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of the National Institutes of Health Clinical Center (Building 10) in Bethesda, Maryland.","field_file_image_title_text[und][0][value]":false},"type":"media","field_deltas":{"1":{"format":"default","field_file_image_alt_text[und][0][value]":"Aerial view of

  9. Design of off-statistics axial-flow fans by means of vortex law optimization

    NASA Astrophysics Data System (ADS)

    Lazari, Andrea; Cattanei, Andrea

    2014-12-01

    Off-statistics input data sets are common in axial-flow fan design and may easily result in violations of the requirements of good aerodynamic blade design. In order to circumvent this problem, in the present paper a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameter vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are identified, and suitable ranges of variation for them are proposed. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, correlations and charts are obtained which are analogous to classic correlations based on statistical investigations of existing machines. Such new correlations help in sizing a fan of given characteristics as well as in studying the feasibility of a given design.
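    For reference, the radial equilibrium condition that such a design method solves balances the radial pressure gradient against the swirl velocity; a common form, together with one illustrative three-parameter swirl distribution (not necessarily the parameterization used in the paper), is:

```latex
\frac{1}{\rho}\frac{\partial p}{\partial r} = \frac{c_\theta^{2}}{r},
\qquad
\frac{dh_{0}}{dr} - T\frac{ds}{dr}
  = c_x\frac{dc_x}{dr} + \frac{c_\theta}{r}\frac{d\,(r\,c_\theta)}{dr},
\qquad
c_\theta(r) = a\,r + b + \frac{c}{r}\ \ \text{(illustrative three-parameter vortex law)}.
```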

  10. Input impedance of a probe-fed circular microstrip antenna with thick substrate

    NASA Technical Reports Server (NTRS)

    Davidovitz, M.; Lo, Y. T.

    1986-01-01

    A method of computing the input impedance for the probe fed circular microstrip antenna with thick dielectric substrate is presented. Utilizing the framework of the cavity model, the fields under the microstrip patch are expanded in a set of modes satisfying the boundary conditions on the eccentrically located probe, as well as on the cavity magnetic wall. A mode-matching technique is used to solve for the electric field at the junction between the cavity and the coaxial feed cable. The reflection coefficient of the transverse electromagnetic (TEM) mode incident in the coaxial cable is determined, from which the input impedance of the antenna is computed. Measured data are presented to verify the theoretical calculations. Results of the computation of various losses for the circular printed antenna as a function of substrate thickness are also included.

  11. Local Sensitivity of Predicted CO 2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

    Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the percentage response of the output, to rank the importance of model inputs to outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We apply this local sensitivity coefficient method to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity to input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and of 7 initial-condition inputs is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighboring layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of the composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
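    A minimal one-at-a-time sketch of how such local sensitivity coefficients can be computed is shown below; the simulator is represented by a placeholder callable and the 1% perturbation size is an assumption.

```python
import numpy as np

def local_sensitivity(model, x0, delta=0.01):
    """One-at-a-time local sensitivity coefficients.

    model : callable returning a scalar output (e.g. CO2 injectivity or plume
            area) for a vector of inputs; a placeholder for the site simulator.
    x0    : baseline inputs (layer properties, initial conditions, ...)
    delta : fractional perturbation applied to each input (1% here)

    Returns the percent change in the output per perturbation of each input.
    Scaling each coefficient by the relative error of that input and summing
    over a subset of inputs gives a composite sensitivity, as described above.
    """
    x0 = np.asarray(x0, dtype=float)
    y0 = model(x0)
    lsc = np.empty(x0.size)
    for i in range(x0.size):
        x = x0.copy()
        x[i] *= 1.0 + delta
        lsc[i] = 100.0 * (model(x) - y0) / y0
    return lsc

# Example with a toy "model" standing in for the reservoir simulator:
# lsc = local_sensitivity(lambda x: x[0] ** 0.8 * x[1], [2.0, 5.0])
```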

  12. Analytical sizing methods for behind-the-meter battery storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael; Yang, Tao

    In behind-the-meter applications, a battery storage system (BSS) is used to reduce a commercial or industrial customer's payment for electricity use, including both the energy charge and the demand charge. The potential value of a BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. Such a method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or to guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative and offer engineering insights into how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with those from mathematical-programming-based methods for validation.
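    The flavor of the sizing analysis can be sketched as follows: for a candidate battery power/energy size, compute the lowest demand level the battery can hold the billed peak to, and hence the demand-charge saving; sweeping over candidate sizes traces the savings curve from which the most economic size is read off. The full-recharge assumption, the loss-free model, and the $15/kW rate in the usage comment are illustrative assumptions, not the paper's method.

```python
import numpy as np

def min_peak_with_battery(load, p_max, e_max, dt=1.0):
    """Lowest demand level (kW) a battery of power p_max (kW) and energy
    e_max (kWh) can hold the billed peak to, for a given load profile.

    Simplifying assumptions (not from the paper): the battery fully recharges
    between peak excursions, and round-trip losses are ignored.
    """
    def feasible(level):
        excess = np.clip(load - level, 0.0, None)
        if excess.max() > p_max:                  # power limit
            return False
        # energy limit, checked per contiguous excursion above `level`
        energy, worst = 0.0, 0.0
        for e in excess:
            energy = energy + e * dt if e > 0 else 0.0
            worst = max(worst, energy)
        return worst <= e_max

    lo, hi = 0.0, float(load.max())
    for _ in range(50):                           # bisection on the demand level
        mid = 0.5 * (lo + hi)
        hi, lo = (mid, lo) if feasible(mid) else (hi, mid)
    return hi

# Demand-charge saving for one candidate size, at an assumed $15/kW rate:
# saving = 15.0 * (load.max() - min_peak_with_battery(load, p_max=50, e_max=100))
```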

  13. Extended linear regime of cavity-QED enhanced optical circular birefringence induced by a charged quantum dot

    NASA Astrophysics Data System (ADS)

    Hu, C. Y.; Rarity, J. G.

    2015-02-01

    Giant optical Faraday rotation (GFR) and giant optical circular birefringence (GCB) induced by a single quantum-dot spin in an optical microcavity can be regarded as linear effects in the weak-excitation approximation if the input field lies in the low-power limit [Hu et al., Phys. Rev. B 78, 085307 (2008), 10.1103/PhysRevB.78.085307; Hu et al., Phys. Rev. B 80, 205326 (2009), 10.1103/PhysRevB.80.205326]. In this work, we investigate the transition from the weak-excitation approximation moving into the saturation regime comparing a semiclassical approximation with the numerical results from a quantum optics toolbox [Tan, J. Opt. B 1, 424 (1999), 10.1088/1464-4266/1/4/312]. We find that the GFR and GCB around the cavity resonance in the strong-coupling regime are input field independent at intermediate powers and can be well described by the semiclassical approximation. Those associated with the dressed state resonances in the strong-coupling regime or merging with the cavity resonance in the Purcell regime are sensitive to input field at intermediate powers, and cannot be well described by the semiclassical approximation due to the quantum-dot saturation. As the GFR and GCB around the cavity resonance are relatively immune to the saturation effects, the rapid readout of single-electron spins can be carried out with coherent state and other statistically fluctuating light fields. This also shows that high-speed quantum entangling gates, robust against input power variations, can be built exploiting these linear effects.

  14. Sonochemiluminescence observation of lipid- and polymer-shelled ultrasound contrast agents in 1.2 MHz focused ultrasound field.

    PubMed

    Qiao, Yangzi; Cao, Hua; Zhang, Shusheng; Yin, Hui; Wan, Mingxi

    2013-01-01

    Ultrasound contrast agents (UCAs) are frequently added to the focused ultrasound field as cavitation nuclei to enhance therapeutic efficiency. Since their presence distorts the pressure field and makes the process unpredictable, understanding their behavior, especially the spatial distribution of the active zone, is an important part of better monitoring and use of UCAs. Because shell materials can strongly alter the acoustic behavior of UCAs, lipid-shelled and polymer-shelled UCAs in a 1.2 MHz focused ultrasound field were studied and compared using the sonochemiluminescence (SCL) method. The SCL spatial distribution of the lipid-shelled group differed from that of the polymer-shelled group. The shell material and the character of the focused ultrasound field together shape the SCL distribution: the lipid-shelled group reached its maximum SCL intensity in the pre-focal region at a lower input power than the polymer-shelled group and showed a higher SCL intensity in the post-focal region at high input power. The SCL-inactive area of both groups increased with input power. The general behavior of the UCAs can be studied through both the average SCL intensity and the backscatter signals. As polymer-shelled UCAs are more resistant to acoustic pressure, they had a higher destruction power and showed less reactivation than lipid-shelled ones. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Thermal conductivity model for powdered materials under vacuum based on experimental studies

    NASA Astrophysics Data System (ADS)

    Sakatani, N.; Ogawa, K.; Iijima, Y.; Arakawa, M.; Honda, R.; Tanaka, S.

    2017-01-01

    The thermal conductivity of powdered media is characteristically very low in vacuum, and is effectively dependent on many parameters of their constituent particles and packing structure. Understanding of the heat transfer mechanism within powder layers in vacuum and theoretical modeling of their thermal conductivity are of great importance for several scientific and engineering problems. In this paper, we report the results of systematic thermal conductivity measurements of powdered media of varied particle size, porosity, and temperature under vacuum using glass beads as a model material. Based on the obtained experimental data, we investigated the heat transfer mechanism in powdered media in detail, and constructed a new theoretical thermal conductivity model for the vacuum condition. This model enables an absolute thermal conductivity to be calculated for a powder with the input of a set of powder parameters including particle size, porosity, temperature, and compressional stress or gravity, and vice versa. Our model is expected to be a competent tool for several scientific and engineering fields of study related to powders, such as the thermal infrared observation of air-less planetary bodies, thermal evolution of planetesimals, and performance of thermal insulators and heat storage powders.
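    The structure of such a model can be illustrated with a schematic two-term decomposition: solid conduction through interparticle contacts plus radiative exchange across pores scaling roughly as sigma*T^3*d. Every coefficient below is a placeholder, not the paper's calibrated expression, which also accounts for compressional stress or gravity.

```python
def powder_conductivity(d_p, porosity, T, k_material=1.0, emissivity=0.9,
                        contact_factor=1e-3, rad_factor=4.0):
    """Schematic two-term powder thermal conductivity (W m^-1 K^-1) in vacuum.

    k_solid : conduction through interparticle contacts, scaled from the bulk
              material conductivity by an illustrative porosity- and
              contact-dependent factor
    k_rad   : radiative exchange across pores, proportional to sigma * T^3 * d_p

    All coefficients are illustrative placeholders, not calibrated values.
    """
    sigma = 5.670e-8                        # Stefan-Boltzmann constant
    k_solid = k_material * contact_factor * (1.0 - porosity)
    k_rad = rad_factor * emissivity * sigma * T ** 3 * d_p
    return k_solid + k_rad

# Example: 100-micron beads at 60% porosity and 250 K
# print(powder_conductivity(d_p=100e-6, porosity=0.6, T=250.0))
```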

  16. Remote-Sensing Estimation of Phytoplankton Size Classes From GOCI Satellite Measurements in Bohai Sea and Yellow Sea

    NASA Astrophysics Data System (ADS)

    Sun, Deyong; Huan, Yu; Qiu, Zhongfeng; Hu, Chuanmin; Wang, Shengqiang; He, Yijun

    2017-10-01

    Phytoplankton size class (PSC), a measure of different phytoplankton functional and structural groups, is a key parameter to the understanding of many marine ecological and biogeochemical processes. In turbid waters where optical properties may be influenced by terrigenous discharge and nonphytoplankton water constituents, remote estimation of PSC is still a challenging task. Here based on measurements of phytoplankton diagnostic pigments, total chlorophyll a, and spectral reflectance in turbid waters of Bohai Sea and Yellow Sea during summer 2015, a customized model is developed and validated to estimate PSC in the two semienclosed seas. Five diagnostic pigments determined through high-performance liquid chromatography (HPLC) measurements are first used to produce weighting factors to model phytoplankton biomass (using total chlorophyll a as a surrogate) with relatively high accuracies. Then, a common method used to calculate contributions of microphytoplankton, nanophytoplankton, and picophytoplankton to the phytoplankton assemblage (i.e., Fm, Fn, and Fp) is customized using local HPLC and other data. Exponential functions are tuned to model the size-specific chlorophyll a concentrations (Cm, Cn, and Cp for microphytoplankton, nanophytoplankton, and picophytoplankton, respectively) with remote-sensing reflectance (Rrs) and total chlorophyll a as the model inputs. Such a PSC model shows two improvements over previous models: (1) a practical strategy (i.e., model Cp and Cn first, and then derive Cm as C-Cp-Cn) with an optimized spectral band (680 nm) for Rrs as the model input; (2) local parameterization, including a local chlorophyll a algorithm. The performance of the PSC model is validated using in situ data that were not used in the model development. Application of the PSC model to GOCI (Geostationary Ocean Color Imager) data leads to spatial and temporal distribution patterns of phytoplankton size classes (PSCs) that are consistent with results reported from field measurements by other researchers. While the applicability of the PSC model together with its parameterization to other optically complex regions and to other seasons is unknown, the findings of this study suggest that the approach to develop such a model may be extendable to other cases as long as local data are used to select the optimal band and to determine the model coefficients.
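    The partitioning step can be sketched with the widely used three-component exponential formulation, in which the combined pico+nano and pico contributions saturate with total chlorophyll a and the micro contribution is the remainder (Cm = C - Cp - Cn). The coefficients below are generic placeholders; the study's locally tuned parameters and its Rrs(680) dependence are omitted.

```python
import numpy as np

def size_class_chla(total_chl, cpn_max=1.06, s_pn=0.80, cp_max=0.11, s_p=6.0):
    """Three-component exponential partition of total chlorophyll a (mg m^-3)
    into pico-, nano-, and microphytoplankton contributions.

    The functional form follows the common exponential approach referred to in
    the abstract; the default coefficients are illustrative placeholders, not
    the locally tuned Bohai Sea / Yellow Sea values.
    """
    c = np.asarray(total_chl, dtype=float)
    c_pn = cpn_max * (1.0 - np.exp(-s_pn * c))   # pico + nano
    c_p = cp_max * (1.0 - np.exp(-s_p * c))      # pico
    c_n = c_pn - c_p                             # nano
    c_m = c - c_pn                               # micro = C - Cp - Cn
    return c_p, c_n, c_m

# Fractions Fp, Fn, Fm follow as c_p / c, c_n / c, c_m / c.
```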

  17. Spatial interpolation of hourly precipitation and dew point temperature for the identification of precipitation phase and hydrologic response in a mountainous catchment

    NASA Astrophysics Data System (ADS)

    Garen, D. C.; Kahl, A.; Marks, D. G.; Winstral, A. H.

    2012-12-01

    In mountainous catchments, it is well known that meteorological inputs, such as precipitation, air temperature, humidity, etc. vary greatly with elevation, spatial location, and time. Understanding and monitoring catchment inputs is necessary in characterizing and predicting hydrologic response to these inputs. This is true all of the time, but it is the most dramatically critical during large storms, when the input to the stream system due to rain and snowmelt creates the potential for flooding. Besides such crisis events, however, proper estimation of catchment inputs and their spatial distribution is also needed in more prosaic but no less important water and related resource management activities. The first objective of this study is to apply a geostatistical spatial interpolation technique (elevationally detrended kriging) to precipitation and dew point temperature on an hourly basis and explore its characteristics, accuracy, and other issues. The second objective is to use these spatial fields to determine precipitation phase (rain or snow) during a large, dynamic winter storm. The catchment studied is the data-rich Reynolds Creek Experimental Watershed near Boise, Idaho. As part of this analysis, precipitation-elevation lapse rates are examined for spatial and temporal consistency. A clear dependence of lapse rate on precipitation amount exists. Certain stations, however, are outliers from these relationships, showing that significant local effects can be present and raising the question of whether such stations should be used for spatial interpolation. Experiments with selecting subsets of stations demonstrate the importance of elevation range and spatial placement on the interpolated fields. Hourly spatial fields of precipitation and dew point temperature are used to distinguish precipitation phase during a large rain-on-snow storm in December 2005. This application demonstrates the feasibility of producing hourly spatial fields and the importance of doing so to support an accurate determination of precipitation phase for assessing catchment hydrologic response to the storm.
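    The interpolation scheme, elevationally detrended kriging, can be sketched for a single time step as: fit a value-elevation lapse rate across stations, krige the residuals in horizontal space, and add the trend back at the target elevation. The exponential covariance model and its parameters below are illustrative assumptions; the study's variogram fitting and hourly processing are not reproduced.

```python
import numpy as np

def detrended_kriging(xy, z_elev, values, xy_new, elev_new,
                      sill=1.0, corr_range=5000.0):
    """Elevationally detrended ordinary kriging (single time step).

    xy        : (n, 2) station coordinates (m)
    z_elev    : (n,) station elevations (m)
    values    : (n,) observed precipitation (or dew point) at the stations
    xy_new    : (2,) target coordinates;  elev_new : target elevation
    sill, corr_range : parameters of an assumed exponential covariance model
    """
    # 1) fit and remove a linear value-elevation trend (lapse rate)
    slope, intercept = np.polyfit(z_elev, values, 1)
    resid = values - (slope * z_elev + intercept)

    # 2) ordinary kriging of the residuals with an exponential covariance
    cov = lambda h: sill * np.exp(-h / corr_range)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    n = len(values)
    K = np.ones((n + 1, n + 1)); K[:n, :n] = cov(d); K[n, n] = 0.0
    k = np.ones(n + 1); k[:n] = cov(np.linalg.norm(xy - xy_new, axis=1))
    w = np.linalg.solve(K, k)[:n]

    # 3) add the elevation trend back at the target elevation
    return w @ resid + slope * elev_new + intercept
```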

  18. Eco-Stoichiometric Alterations in Paddy Soil Ecosystem Driven by Phosphorus Application

    PubMed Central

    Li, Xia; Wang, Hang; Gan, ShaoHua; Jiang, DaQian; Tian, GuangMing; Zhang, ZhiJian

    2013-01-01

    Agricultural fertilization may change elemental biogeochemical cycles and alter ecological function. Ecoenzymatic stoichiometry plays a critical role in global soil carbon (C) metabolism, driving element cycles and mediating atmospheric composition in response to agricultural nutrient management. Despite its importance for crop growth, the role of phosphorus (P), through its effect on eco-stoichiometry, in soil C and nitrogen (N) sequestration in paddy fields remains poorly understood in the context of climate change. Here, we collected soil samples from a field experiment after 6 years of chemical P application at a gradient of 0 (P-0), 30 (P-30), 60 (P-60), and 90 (P-90) kg ha−1 in order to evaluate the role of P in stoichiometric properties in terms of soil chemistry, microbial biomass, and eco-enzyme activities, as well as greenhouse gas (GHG: CO2, N2O, and CH4) emissions. Continuous P input increased soil total organic C and N by 1.3–9.2% and 3–13%, respectively. P input induced C and N limitations, as indicated by the decreased C:P and N:P ratios in the soil and microbial biomass. A synergistic mechanism within the ecoenzymatic stoichiometry, which regulated the ecological function of microbial C and N acquisition and was stoichiometrically related to P input, stimulated soil C and N sequestration in the paddy field. Emissions of N2O and CH4 were lower under the higher P applications (P-60 and P-90) in July, and the difference in N2O emission in August was insignificant compared with P-30; however, continuous P input enhanced CO2 fluxes for both samplings. There is thus a technical conflict in simultaneously regulating the three types of GHGs through the eco-stoichiometry mechanism under P fertilization. It is therefore recommended that P input in paddy fields not exceed 60 kg ha−1, which may maximize soil C sequestration, minimize P export, and guarantee grain yields. PMID:23667435

  19. FY92 Progress Report for the Gyrotron Backward-Wave-Oscillator Experiment

    DTIC Science & Technology

    1993-07-01

    Appendix entries: C. Sample Cable Calibration (p. 23); D. ASYST Channel Setups (p. 26); E. Sample Magnet Input Data Deck for the Gyro-BWO (p. 32); F. Sample EGUN Input Data Deck for the ... Excerpt fragments: "...of the first coil of the Helmholtz pair; zero also corresponds to the diode end of the experiment). Another computer code used was the EGUN code (Ref..."; "...a short computer program was written to superimpose the two magnetic fields (DC and Helmholtz). An example of an EGUN input data file is included in..."

  20. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4

    NASA Astrophysics Data System (ADS)

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-01

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores directly the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components; primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects, , , and for squared fields, and for an asymmetric rectangular field. Good agreement in terms of formalism, for 3%/3 mm and 2%/3 mm criteria, for each evaluated radiation field and photon beam was obtained within a computation time of 60 h on a single WorkStation for a 3 mm voxel matrix. Analyzing the VSM’s precision in high dose gradient regions, using the distance to agreement concept (DTA), showed also satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in build-up and penumbra regions. In regards to calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential Geant4, when running the same simulation code for both. The developed VSM for 6 MV/10 MV beams widely used, is a general concept easy to adapt in order to reconstruct comparable beam qualities for various linac configurations, facilitating its integration for MC treatment planning purposes.
