Sample records for time points produced

  1. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than a ground point. This method reduces the time and cost of preparing tungsten electrodes.

  2. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

    Point clouds produced by different techniques are widely used to model rocks and to obtain rock-surface properties such as roughness, volume and area. These point clouds can be generated by laser scanning and by close range photogrammetry. Laser scanning is the most common method: the scanner produces a 3D point cloud at regular intervals. In close range photogrammetry, a point cloud can be produced from photographs taken under appropriate conditions, depending on the available hardware and software. Many photogrammetric software packages, open source or commercial, now support point cloud generation. The two methods are close to each other in terms of accuracy; with a qualified digital camera or laser scanner, sufficient accuracy in the mm to cm range can be obtained. In both methods, field work is completed in less time than with conventional techniques. In close range photogrammetry, any part of a rock surface can be represented completely owing to overlapping oblique photographs. Despite the similarity of the data, the two methods differ considerably in cost. In this study, we investigate whether point clouds produced from photographs can be used in place of point clouds produced by a laser scanner. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a 30 m x 10 m section of rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the point clouds produced by the two methods are similar and can be used as alternatives to each other. This confirms that a point cloud produced from photographs, which is both more economical and faster to acquire, can be used in many studies instead of a point cloud produced by a laser scanner.
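    The 2D (area-based) and 3D (volume-based) comparison described above can be sketched crudely in code. This is not the study's method: the grid-cell proxies, cell sizes, and synthetic clouds below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
photo = rng.uniform(0.0, 1.0, size=(2000, 3))            # photogrammetric cloud (synthetic)
laser = photo + rng.normal(0.0, 0.002, size=(2000, 3))   # laser cloud, ~mm-level offset

def footprint_area(cloud, cell=0.05):
    """2D, area-based proxy: total area of occupied XY grid cells."""
    ij = np.floor(cloud[:, :2] / cell).astype(int)
    return len({tuple(p) for p in ij}) * cell * cell

def occupied_volume(cloud, cell=0.1):
    """3D, volume-based proxy: total volume of occupied voxels."""
    ijk = np.floor(cloud / cell).astype(int)
    return len({tuple(p) for p in ijk}) * cell ** 3

# Relative differences between the two clouds' area and volume estimates.
rel_area = abs(footprint_area(photo) - footprint_area(laser)) / footprint_area(laser)
rel_vol = abs(occupied_volume(photo) - occupied_volume(laser)) / occupied_volume(laser)
```

    With mm-level noise between the clouds, both proxies agree to within a few percent, which is the sense in which the two clouds can serve as alternatives to each other.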

  3. Thickness noise of a propeller and its relation to blade sweep

    NASA Astrophysics Data System (ADS)

    Amiet, R. K.

    1988-07-01

    Linear acoustic theory is used to determine the thickness noise produced by a supersonic propeller with sharp leading and trailing edges. The method reveals details of the calculated waveform. Abrupt changes of slope in the pressure-time waveform, produced by singular points entering or leaving the blade tip, are pointed out. It is found that the behavior of the pressure-time waveform is closely related to changes in the retarded rotor shape. The results indicate that logarithmic singularities in the waveform are produced by regions on the blade edges that move towards the observer at sonic speed, with the edge normal to the line joining the source point and the observer.

  4. Microbial community changes in hydraulic fracturing fluids and produced water from shale gas extraction.

    PubMed

    Murali Mohan, Arvind; Hartsock, Angela; Bibby, Kyle J; Hammack, Richard W; Vidic, Radisav D; Gregory, Kelvin B

    2013-11-19

    Microbial communities associated with produced water from hydraulic fracturing are not well understood, and their deleterious activity can lead to significant increases in production costs and adverse environmental impacts. In this study, we compared the microbial ecology in prefracturing fluids (fracturing source water and fracturing fluid) and produced water at multiple time points from a natural gas well in southwestern Pennsylvania using 16S rRNA gene-based clone libraries, pyrosequencing, and quantitative PCR. The majority of the bacterial community in prefracturing fluids constituted aerobic species affiliated with the class Alphaproteobacteria. However, their relative abundance decreased in produced water with an increase in halotolerant, anaerobic/facultative anaerobic species affiliated with the classes Clostridia, Bacilli, Gammaproteobacteria, Epsilonproteobacteria, Bacteroidia, and Fusobacteria. Produced water collected at the last time point (day 187) consisted almost entirely of sequences similar to Clostridia and showed a decrease in bacterial abundance by 3 orders of magnitude compared to the prefracturing fluids and produced water samples from earlier time points. Geochemical analysis showed that produced water contained higher concentrations of salts and total radioactivity compared to prefracturing fluids. This study provides evidence of long-term subsurface selection of the microbial community introduced through hydraulic fracturing, which may have significant implications for disinfection as well as reuse of produced water in future fracturing operations.

  5. Microbial Community Changes in Hydraulic Fracturing Fluids and Produced Water from Shale Gas Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, Arvind Murali; Hartsock, Angela; Bibby, Kyle J

    2013-11-19

    Microbial communities associated with produced water from hydraulic fracturing are not well understood, and their deleterious activity can lead to significant increases in production costs and adverse environmental impacts. In this study, we compared the microbial ecology in prefracturing fluids (fracturing source water and fracturing fluid) and produced water at multiple time points from a natural gas well in southwestern Pennsylvania using 16S rRNA gene-based clone libraries, pyrosequencing, and quantitative PCR. The majority of the bacterial community in prefracturing fluids constituted aerobic species affiliated with the class Alphaproteobacteria. However, their relative abundance decreased in produced water with an increase in halotolerant, anaerobic/facultative anaerobic species affiliated with the classes Clostridia, Bacilli, Gammaproteobacteria, Epsilonproteobacteria, Bacteroidia, and Fusobacteria. Produced water collected at the last time point (day 187) consisted almost entirely of sequences similar to Clostridia and showed a decrease in bacterial abundance by 3 orders of magnitude compared to the prefracturing fluids and produced water samples from earlier time points. Geochemical analysis showed that produced water contained higher concentrations of salts and total radioactivity compared to prefracturing fluids. This study provides evidence of long-term subsurface selection of the microbial community introduced through hydraulic fracturing, which may have significant implications for disinfection as well as reuse of produced water in future fracturing operations.

  6. Change point detection of the Persian Gulf sea surface temperature

    NASA Astrophysics Data System (ADS)

    Shirvani, A.

    2017-01-01

    In this study, the Student's t parametric and Mann-Whitney nonparametric change point models (CPMs) were applied to detect a change point in the annual Persian Gulf sea surface temperature anomalies (PGSSTA) time series for the period 1951-2013. The PGSSTA time series, which were serially correlated, were transformed to produce an uncorrelated, pre-whitened time series. The pre-whitened PGSSTA time series were used as the input to the change point models. Both the parametric and nonparametric CPMs estimated the change point in the PGSSTA to be in 1992. The PGSSTA follow the normal distribution both before and after 1992, but with a different mean value after that year. The estimated slope of the linear trend in the PGSSTA time series for the period 1951-1992 was negative; after the detected change point it was positive. Unlike the PGSSTA, the applied CPMs suggested no change point in the Niño3.4 SSTA time series.
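    The nonparametric detection step can be sketched as follows. This is an illustrative single-change-point scan that maximizes the standardized Mann-Whitney statistic over candidate split years on synthetic anomalies; it is not the study's code, and the synthetic mean shift is an assumption chosen to mimic the reported 1992 change.

```python
import numpy as np

def mann_whitney_change_point(x):
    """Return the split index k that maximizes the standardized
    Mann-Whitney statistic comparing x[:k] against x[k:]."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1   # ranks of the pooled series
    best_k, best_z = None, -np.inf
    for k in range(2, n - 1):
        n1, n2 = k, n - k
        u = ranks[:k].sum() - n1 * (n1 + 1) / 2       # Mann-Whitney U
        mu = n1 * n2 / 2
        sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
        z = abs(u - mu) / sigma                        # standardized statistic
        if z > best_z:
            best_k, best_z = k, z
    return best_k

rng = np.random.default_rng(1)
years = np.arange(1951, 2014)
# Synthetic anomalies with a mean shift in 1992, as the study reports.
ssta = np.where(years < 1992, -0.2, 0.2) + rng.normal(0, 0.1, len(years))
change_year = years[mann_whitney_change_point(ssta)]
```

    A real CPM also controls the false-alarm rate of the scan; that step is omitted here.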

  7. Volumetric Trends Associated with MR-guided Stereotactic Laser Amygdalohippocampectomy in Mesial Temporal Lobe Epilepsy

    PubMed Central

    Patel, Nitesh V; Sundararajan, Sri; Keller, Irwin; Danish, Shabbar

    2018-01-01

    Objective: Magnetic resonance (MR)-guided stereotactic laser amygdalohippocampectomy is a minimally invasive procedure for the treatment of refractory epilepsy in patients with mesial temporal sclerosis. Limited data exist on post-ablation volumetric trends associated with the procedure. Methods: Ten patients with mesial temporal sclerosis underwent MR-guided stereotactic laser amygdalohippocampectomy. Three independent raters computed ablation volumes at the following time points: pre-ablation (PreA), immediate post-ablation (IPA), 24 hours post-ablation (24PA), first follow-up post-ablation (FPA), and greater than three months follow-up post-ablation (>3MPA), using OsiriX DICOM Viewer (Pixmeo, Bernex, Switzerland). Statistical trends in post-ablation volumes were determined for the time points. Results: MR-guided stereotactic laser amygdalohippocampectomy produces a rapid rise and distinct peak in post-ablation volume immediately following the procedure. IPA volumes are significantly higher than those at all other time points. Comparing individual time points within each rater's dataset (intra-rater), a significant difference was seen between the IPA time point and all others. There was no statistical difference between the 24PA, FPA, and >3MPA time points. A correlation analysis demonstrated the strongest correlations at the 24PA (r=0.97), FPA (r=0.95), and >3MPA time points (r=0.99), with a weaker correlation at IPA (r=0.92). Conclusion: MR-guided stereotactic laser amygdalohippocampectomy produces a maximal increase in post-ablation volume immediately following the procedure, which decreases and stabilizes at 24 hours post-procedure and beyond three months of follow-up. Based on the correlation analysis, the lower inter-rater reliability at the IPA time point suggests it may be less accurate to assess volume at this time point. We recommend that post-ablation volume assessments be made at least 24 hours after selective laser ablation of the amygdalohippocampal complex (SLAH).

  8. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
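    The vectorized-intersection idea can be sketched in NumPy as a stand-in for the CYBER 205's vector operations: one ray is tested against every sphere in the scene in a single batch of array operations, with no per-body loop. The scene and names below are illustrative, not from the paper.

```python
import numpy as np

def nearest_hit(origin, direction, centers, radii):
    """Smallest positive ray parameter t over all spheres, or inf.
    All spheres are processed at once via vector operations."""
    oc = origin - centers                        # (n, 3): per-sphere offsets
    b = oc @ direction                           # (n,): projections onto the ray
    c = np.einsum('ij,ij->i', oc, oc) - radii ** 2
    disc = b * b - c                             # per-sphere discriminant
    t = -b - np.sqrt(np.where(disc >= 0, disc, np.nan))
    t = np.where((disc >= 0) & (t > 0), t, np.inf)  # keep valid forward hits
    return t.min()

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])            # assumed unit length
centers = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, 3.0], [9.0, 9.0, 9.0]])
radii = np.array([1.0, 1.0, 1.0])
t_hit = nearest_hit(origin, direction, centers, radii)   # nearest surface along the ray
```

    The pixel loop (one call per ray) remains, but the inner body loop, where the abstract says up to ninety percent of serial time is spent, is replaced by array arithmetic.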

  9. Selective synthesis of human milk fat-style structured triglycerides from microalgal oil in a microfluidic reactor packed with immobilized lipase

    DOE PAGES

    Wang, Jun; Liu, Xi; Wang, Xu -Dong; ...

    2016-08-18

    Human milk fat-style structured triacylglycerols were produced from microalgal oil in a continuous microfluidic reactor packed with immobilized lipase for the first time. A remarkably high conversion efficiency was demonstrated in the microreactor, with reaction time reduced 8-fold, the Michaelis constant decreased 10-fold, and lipase reuse increased 2.25-fold compared to a batch reactor. In addition, the content of palmitic acid at the sn-2 position (89.0%) and polyunsaturated fatty acids at the sn-1,3 positions (81.3%) were slightly improved compared to the product of a batch reactor. The increase in melting point (1.7 °C) and decrease in crystallizing point (3 °C) implied that a higher quality product was produced using the microfluidic technology. The main cost can be reduced from $212.3 to $14.6 per batch with the microreactor. Altogether, the microfluidic bioconversion technology is promising for modified functional lipids production, allowing for a cost-effective approach to produce high-value microalgal coproducts.

  10. Selective synthesis of human milk fat-style structured triglycerides from microalgal oil in a microfluidic reactor packed with immobilized lipase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jun; Liu, Xi; Wang, Xu -Dong

    Human milk fat-style structured triacylglycerols were produced from microalgal oil in a continuous microfluidic reactor packed with immobilized lipase for the first time. A remarkably high conversion efficiency was demonstrated in the microreactor, with reaction time reduced 8-fold, the Michaelis constant decreased 10-fold, and lipase reuse increased 2.25-fold compared to a batch reactor. In addition, the content of palmitic acid at the sn-2 position (89.0%) and polyunsaturated fatty acids at the sn-1,3 positions (81.3%) were slightly improved compared to the product of a batch reactor. The increase in melting point (1.7 °C) and decrease in crystallizing point (3 °C) implied that a higher quality product was produced using the microfluidic technology. The main cost can be reduced from $212.3 to $14.6 per batch with the microreactor. Altogether, the microfluidic bioconversion technology is promising for modified functional lipids production, allowing for a cost-effective approach to produce high-value microalgal coproducts.

  11. Selective synthesis of human milk fat-style structured triglycerides from microalgal oil in a microfluidic reactor packed with immobilized lipase.

    PubMed

    Wang, Jun; Liu, Xi; Wang, Xu-Dong; Dong, Tao; Zhao, Xing-Yu; Zhu, Dan; Mei, Yi-Yuan; Wu, Guo-Hua

    2016-11-01

    Human milk fat-style structured triacylglycerols were produced from microalgal oil in a continuous microfluidic reactor packed with immobilized lipase for the first time. A remarkably high conversion efficiency was demonstrated in the microreactor, with reaction time reduced 8-fold, the Michaelis constant decreased 10-fold, and lipase reuse increased 2.25-fold compared to a batch reactor. In addition, the content of palmitic acid at the sn-2 position (89.0%) and polyunsaturated fatty acids at the sn-1,3 positions (81.3%) were slightly improved compared to the product of a batch reactor. The increase in melting point (1.7°C) and decrease in crystallizing point (3°C) implied that a higher quality product was produced using the microfluidic technology. The main cost can be reduced from $212.3 to $14.6 per batch with the microreactor. Overall, the microfluidic bioconversion technology is promising for modified functional lipids production, allowing for a cost-effective approach to produce high-value microalgal coproducts.

  12. Do parents lead their children by the hand?

    PubMed

    Ozçalişkan, Seyda; Goldin-Meadow, Susan

    2005-08-01

    The types of gesture+speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture+speech combinations that their caregivers produce. To explore this possibility, we videotaped 40 American child-caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (deictic, conventional, representational) and the relation it held to speech (reinforcing, disambiguating, supplementary). Children and their caregivers produced the same types of gestures and in approximately the same distribution. However, the children differed from their caregivers in the way they used gesture in relation to speech. Over time, children produced many more REINFORCING (bike + point at bike), DISAMBIGUATING (that one + point at bike), and SUPPLEMENTARY combinations (ride + point at bike). In contrast, the frequency and distribution of caregivers' gesture+speech combinations remained constant over time. Thus, the changing relation between gesture and speech observed in the children cannot be traced back to the gestural input the children receive. Rather, it appears to reflect changes in the children's own skills, illustrating once again gesture's ability to shed light on developing cognitive and linguistic processes.
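    The three gesture+speech relations can be illustrated with a toy encoding for a pointing gesture. This is my own simplification for illustration, not the study's coding scheme, and the word list is an assumption.

```python
# Assumed set of deictic words for this toy example.
DEICTIC_WORDS = {"that", "that one", "this", "there"}

def relation(speech, gesture_referent):
    """Toy classifier for the relation a word holds to a pointing gesture."""
    if speech == gesture_referent:
        return "reinforcing"       # "bike" + point at bike
    if speech in DEICTIC_WORDS:
        return "disambiguating"    # "that one" + point at bike
    return "supplementary"         # "ride" + point at bike

labels = [relation(s, "bike") for s in ("bike", "that one", "ride")]
```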

  13. Split delivery vehicle routing problem with time windows: a case study

    NASA Astrophysics Data System (ADS)

    Latiffianti, E.; Siswanto, N.; Firmandani, R. A.

    2018-04-01

    This paper aims to implement an extension of the VRP, the split delivery vehicle routing problem (SDVRP) with time windows, in a case study involving pickups and deliveries of workers from several points of origin to several destinations. Each origin represents a bus stop and each destination represents either a site or an office location. An integer linear programming formulation of the SDVRP is presented. The solution was generated in three stages: defining the starting points, assigning buses, and solving the SDVRP with time windows using an exact method. Although the overall computational time was relatively lengthy, the results indicated that the produced solution was better than the existing routing and scheduling that the firm used. The produced solution was also capable of reducing fuel cost by 9%, a saving obtained from the shorter total distance travelled by the shuttle buses.

  14. Human self-control and the density of reinforcement

    PubMed Central

    Flora, Stephen R.; Pavlik, William B.

    1992-01-01

    Choice responding in adult humans on a discrete-trial button-pressing task was examined as a function of amount, delay, and overall density (points per unit time) of reinforcement. Reinforcement consisted of points that were exchangeable for money. In T0 conditions, an impulsive response produced 4 points immediately and a self-control response produced 10 points after a delay of 15 s. In T15 conditions, a constant delay of 15 s was added to both prereinforcer delays. Postreinforcer delays, which consisted of 15 s added to the end of each impulsive trial, equated trial durations regardless of choice and were manipulated in both T0 and T15 conditions. In all conditions, choice was predicted directly from the relative reinforcement densities of the alternatives. Self-control was observed in all conditions except T0 without postreinforcer delays, where the impulsive choices produced the higher reinforcement density. These results support previous studies showing that choice is a direct function of the relative reinforcement densities when conditioned (point) reinforcers are used. In contrast, where responding produces intrinsic (immediately consumable) reinforcers, immediacy of reinforcement appears to account for preference when density does not. PMID:16812652
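    The density argument can be made concrete with the point amounts and delays quoted in the abstract. The 5 s base trial (handling) time below is an assumed value added for illustration; only the 4-point/10-point amounts and 15 s delays come from the abstract.

```python
BASE = 5.0  # assumed handling/intertrial time per trial, seconds (illustrative)

def density(points, pre_delay, post_delay):
    """Reinforcement density: points per second of total trial time."""
    return points / (BASE + pre_delay + post_delay)

# T0 without postreinforcer delays: impulsive trials are much shorter,
# so the small immediate reward wins on density.
imp_no_post = density(4, 0, 0)     # 4 pts over a short trial
sc_no_post = density(10, 15, 0)    # 10 pts over a 15 s-delayed trial

# T0 with postreinforcer delays: 15 s appended to each impulsive trial
# equates trial durations, so the larger reward now wins on density.
imp_post = density(4, 0, 15)
sc_post = density(10, 15, 0)
```

    This reproduces the abstract's pattern: impulsive choice is density-optimal only in T0 without postreinforcer delays.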

  15. Precision pointing and tracking through random media by exploitation of the enhanced backscatter phenomenon.

    PubMed

    Harvey, J E; Reddy, S P; Phillips, R L

    1996-07-20

    The active illumination of a target through a turbulent medium with a monostatic transmitter-receiver results in a naturally occurring conjugate wave caused by reciprocal scattering paths that experience identical phase variations. This reciprocal path-scattering phenomenon produces an enhanced backscatter in the retroverse direction (precisely along the boresight of the pointing telescope). A dual aperture causes this intensity enhancement to take the form of Young's interference fringes. Interference fringes produced by the reciprocal path-scattering phenomenon are temporally stable even in the presence of time-varying turbulence. Choosing the width-to-separation ratio of the dual apertures appropriately and utilizing orthogonal polarizations to suppress the time-varying common-path scattered radiation allow one to achieve interferometric sensitivity in pointing accuracy through a random medium or turbulent atmosphere. Computer simulations are compared with laboratory experimental data. This new precision pointing and tracking technique has potential applications in ground-to-space laser communications, laser power beaming to satellites, and theater missile defense scenarios.

  16. cAMP levels in fast- and slow-twitch skeletal muscle after an acute bout of aerobic exercise

    NASA Technical Reports Server (NTRS)

    Sheldon, A.; Booth, F. W.; Kirby, C. R.

    1993-01-01

    The present study examined whether exercise duration was associated with elevated and/or sustained elevations of postexercise adenosine 3',5'-cyclic monophosphate (cAMP) by measuring cAMP levels in skeletal muscle for up to 4 h after acute exercise bouts of durations that are known to either produce (60 min) or not produce (10 min) mitochondrial proliferation after chronic training. Treadmill-acclimatized, but untrained, rats were run at 22 m/min for 0 (control), 10, or 60 min and were killed at various postexercise (0, 0.5, 1, 2, and 4 h) time points. Fast-twitch white and red (quadriceps) and slow-twitch (soleus) muscles were quickly excised, frozen in liquid nitrogen, and assayed for cAMP with a commercial kit. Unexpectedly, cAMP contents in all three muscles were similar to control (nonexercise) at most (21 of 30) time points after a single 10- or 60-min run. Values at 9 of 30 time points were significantly different from control (P < 0.05); i.e., 3 time points were significantly higher than control and 6 were significantly less than control. These data suggest that the cAMP concentration of untrained skeletal muscle after a single bout of endurance-type exercise is not, by itself, associated with exercise duration.

  17. Phased-array ultrasonic surface contour mapping system and method for solids hoppers and the like

    DOEpatents

    Fasching, George E.; Smith, Jr., Nelson S.

    1994-01-01

    A real time ultrasonic surface contour mapping system is provided, including a digitally controlled phased array of transmitter/receiver (T/R) elements located in a fixed position above the surface to be mapped. The surface is divided into a predetermined number of pixels which are separately scanned by an arrangement of T/R elements by applying phase-delayed signals thereto that produce ultrasonic tone bursts from each T/R that arrive at a point X in phase and at the same time relative to the leading edge of the tone burst pulse, so that the acoustic energies from each T/R combine in a reinforcing manner at point X. The signals produced by the reception of the echo signals reflected from point X back to the T/Rs are also delayed appropriately so that they add in phase at the input of a signal combiner. This combined signal is then processed to determine the range to the point X using density-corrected sound velocity values. An autofocusing signal is developed from the computed average range for a complete scan of the surface pixels. A surface contour map is generated in real time from the range signals on a video monitor.
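    The transmit-delay idea can be sketched numerically: each element is delayed so that all tone bursts reach the focus point X at the same instant. The geometry and the fixed sound speed below are illustrative assumptions, not the patent's values (the patent uses density-corrected sound velocities).

```python
import math

SPEED = 343.0  # m/s, assumed sound speed in air

def firing_delays(elements, focus):
    """Delay each T/R element so every wavefront reaches `focus` together:
    the farthest element fires first (zero delay); nearer ones wait."""
    dists = [math.dist(e, focus) for e in elements]
    far = max(dists)
    return [(far - d) / SPEED for d in dists]

# Three elements on a line, focusing on a point 1 m below the center.
elements = [(-0.2, 0.0, 0.0), (0.0, 0.0, 0.0), (0.2, 0.0, 0.0)]
focus = (0.0, 0.0, 1.0)
delays = firing_delays(elements, focus)   # outer elements: 0; center: small wait
```

    Receive focusing uses the same per-element delays in reverse before the signals are summed in the combiner.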

  18. 40 CFR 430.45 - New source performance standards (NSPS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Dissolving Sulfite... biocides: Subpart D [NSPS for dissolving sulfite pulp facilities where nitration grade pulp is produced... all times. Subpart D [NSPS for dissolving sulfite pulp facilities where viscose grade pulp is produced...

  19. Power in randomized group comparisons: the value of adding a single intermediate time point to a traditional pretest-posttest design.

    PubMed

    Venter, Anre; Maxwell, Scott E; Bolig, Erika

    2002-06-01

    Adding a pretest as a covariate to a randomized posttest-only design increases statistical power, as does the addition of intermediate time points to a randomized pretest-posttest design. Although typically 5 waves of data are required in this instance to produce meaningful gains in power, a 3-wave intensive design allows the evaluation of the straight-line growth model and may reduce the effect of missing data. The authors identify the statistically most powerful method of data analysis in the 3-wave intensive design. If straight-line growth is assumed, the pretest-posttest slope must assume fairly extreme values for the intermediate time point to increase power beyond the standard analysis of covariance on the posttest with the pretest as covariate, ignoring the intermediate time point.

  20. ARIMA model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used in the comparison process: the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the Exchange Rate series. On the contrary, the Exponential Smoothing Method can produce a better forecast for the Exchange Rate, which has a narrow range from one point to another in its time series, while it cannot produce a better prediction for a longer forecasting period.
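    The three accuracy measures named above can be written out directly. The short actual/forecast series below is made up for illustration; only the metric definitions are standard.

```python
def mse(actual, forecast):
    """Mean Squared Error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 / len(actual) * sum(abs((a - f) / a) for a, f in zip(actual, forecast))

def mad(actual, forecast):
    """Mean Absolute Deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [100.0, 102.0, 101.0, 105.0]     # made-up observations
forecast = [99.0, 103.0, 100.0, 107.0]    # made-up model output
```

    Lower values of all three indicate a better-fitting forecast; the study applies them to compare ARIMA against exponential smoothing on each series.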

  1. Ex vivo 12 h bactericidal activity of oral co-amoxiclav (1.125 g) against beta-lactamase-producing Haemophilus influenzae.

    PubMed

    Bronner, S; Pompei, D; Elkhaïli, H; Dhoyen, N; Monteil, H; Jehl, F

    2001-10-01

    The aim of the study was to evaluate the in vitro/ex vivo bactericidal activity of a new co-amoxiclav single-dose sachet formulation (1 g amoxicillin + 0.125 g clavulanic acid) against a beta-lactamase-producing strain of Haemophilus influenzae. The evaluation covered the 12 h period after antibiotic administration. Serum specimens from the 12 healthy volunteers included in the pharmacokinetic study were pooled by time point and in equal volumes. Eight of 12 pharmacokinetic sampling time points were included in the study. At time points 0.5, 0.75, 1, 1.5, 2.5, 5, 8 and 12 h post-dosing, the kinetics of bactericidal activity were determined for each of the serial dilutions. Each specimen was serially diluted from 1:2 to 1:256. The index of surviving bacteria (ISB) was subsequently determined for each pharmacokinetic time point. For all the serum samples, bactericidal activity was fast (3-6 h), marked (3-6 log(10) reduction in the initial inoculum) and sustained over the 12 h between-dosing interval. The results obtained also confirmed that the potency of the amoxicillin plus clavulanic acid combination was time dependent against the species under study and that the time interval over which the concentrations were greater than the MIC (t > MIC) was 100% for the strain under study. The data thus generated constitute an interesting prerequisite with a view to using co-amoxiclav 1.125 g in a bd oral regimen.

  2. Performance summary on a high power dense plasma focus x-ray lithography point source producing 70 nm line features in AlGaAs microcircuits

    NASA Astrophysics Data System (ADS)

    Petr, Rodney; Bykanov, Alexander; Freshman, Jay; Reilly, Dennis; Mangano, Joseph; Roche, Maureen; Dickenson, Jason; Burte, Mitchell; Heaton, John

    2004-08-01

    A high average power dense plasma focus (DPF) x-ray point source has been used to produce ~70 nm line features in AlGaAs-based monolithic millimeter-wave integrated circuits (MMICs). The DPF source has produced up to 12 J per pulse of x-ray energy into 4π steradians at ~1 keV effective wavelength in ~2 Torr neon at pulse repetition rates up to 60 Hz, with an effective x-ray yield efficiency of ~0.8%. Plasma temperature and electron concentration are estimated from the x-ray spectrum to be ~170 eV and ~5×10^19 cm^-3, respectively. The x-ray point source utilizes solid-state pulse power technology to extend the operating lifetime of electrodes and insulators in the DPF discharge. By eliminating current reversals in the DPF head, an anode electrode has demonstrated a lifetime of more than 5 million shots. The x-ray point source has also been operated continuously for 8 h run times at 27 Hz average pulse repetition frequency. Measurements of shock waves produced by the plasma discharge indicate that overpressure pulses must be attenuated before a collimator can be integrated with the DPF point source.
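    As a quick consistency check on the quoted figures, and under the assumption that yield efficiency means x-ray energy out divided by electrical energy in per pulse (the abstract does not define it), the implied per-pulse drive energy and average x-ray power are:

```python
xray_j_per_pulse = 12.0   # J of x-ray energy into 4*pi sr (from the abstract)
efficiency = 0.008        # ~0.8% effective yield efficiency (from the abstract)
rep_rate = 60.0           # Hz, maximum pulse repetition rate (from the abstract)

# Assumed relation: efficiency = x-ray energy / electrical energy per pulse.
electrical_j_per_pulse = xray_j_per_pulse / efficiency   # ~1.5 kJ per discharge
avg_xray_power = xray_j_per_pulse * rep_rate             # W of x rays into 4*pi sr
```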

  3. Mathematical embryology: the fluid mechanics of nodal cilia

    NASA Astrophysics Data System (ADS)

    Smith, D. J.; Smith, A. A.; Blake, J. R.

    2011-07-01

    Left-right symmetry breaking is critical to vertebrate embryonic development; in many species this process begins with cilia-driven flow in a structure termed the 'node'. Primary 'whirling' cilia, tilted towards the posterior, transport morphogen-containing vesicles towards the left, initiating left-right asymmetric development. We review recent theoretical models based on the point-force stokeslet and point-torque rotlet singularities, explaining how rotation and surface-tilt produce directional flow. Analysis of image singularity systems enforcing the no-slip condition shows how tilted rotation produces a far-field 'stresslet' directional flow, and how time-dependent point-force and time-independent point-torque models are in this respect equivalent. Associated slender body theory analysis is reviewed; this approach enables efficient and accurate simulation of three-dimensional time-dependent flow, time-dependence being essential in predicting features of the flow such as chaotic advection, which have subsequently been determined experimentally. A new model for the nodal flow utilising the regularized stokeslet method is developed, to model the effect of the overlying Reichert's membrane. Velocity fields and particle paths within the enclosed domain are computed and compared with the flow profiles predicted by previous 'membrane-less' models. Computations confirm that the presence of the membrane produces flow-reversal in the upper region, but no continuous region of reverse flow close to the epithelium. The stresslet far-field is no longer evident in the membrane model, due to the depth of the cavity being of similar magnitude to the cilium length. Simulations predict that vesicles released within one cilium length of the epithelium are generally transported to the left via a 'loopy drift' motion, sometimes involving highly unpredictable detours around leftward cilia. [truncated]
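    The regularized-stokeslet kernel such models are built on can be evaluated in a few lines. This uses the standard Cortez blob form of the kernel; the force, blob parameter, and evaluation points below are illustrative, not the paper's.

```python
import numpy as np

def reg_stokeslet_velocity(x, x0, f, eps, mu=1.0):
    """Velocity at x due to a regularized point force f at x0
    (Cortez blob form): u = [(r^2 + 2 eps^2) f + (f.r) r] / (8 pi mu (r^2 + eps^2)^{3/2})."""
    x, x0, f = (np.asarray(a, float) for a in (x, x0, f))
    r = x - x0
    r2 = r @ r
    denom = 8.0 * np.pi * mu * (r2 + eps ** 2) ** 1.5
    return ((r2 + 2.0 * eps ** 2) * f + (f @ r) * r) / denom

# Velocity near and far from a unit force along x, blob width eps = 0.05.
u_near = reg_stokeslet_velocity([0.1, 0, 0], [0, 0, 0], [1.0, 0, 0], eps=0.05)
u_far = reg_stokeslet_velocity([10.0, 0, 0], [0, 0, 0], [1.0, 0, 0], eps=0.05)
```

    Far from the blob the kernel recovers the singular stokeslet (on-axis speed ≈ 1/(4πμr)), while near the blob the regularization keeps the velocity finite, which is what makes the method robust inside an enclosed nodal cavity.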

  4. User and technical documentation

    NASA Astrophysics Data System (ADS)

    1988-09-01

    The program LIBRATE calculates velocities for trajectories from low earth orbit (LEO) to four of the five libration points (L2, L3, L4, and L5), and from low lunar orbit (LLO) to libration points L1 and L2. The flight to be analyzed departs from a circular orbit of any altitude and inclination about the Earth or Moon and finishes in a circular orbit about the Earth at the desired libration point within a specified flight time. The program produces a matrix of the delta-V's needed to complete the desired flight. The user specifies the departure orbit and the maximum flight time. A matrix is then developed with 10 inclinations, ranging from 0 to 90 degrees, forming the columns, and 19 possible flight times, ranging from the input flight time down to 36 hours less than the input value in decrements of 2 hours, forming the rows. This matrix is presented in three reports: the total delta-V's and each of the two delta-V components. The input required from the user to define the flight is discussed, the contents of the three output reports are described, and the instructions needed to execute the program are included.
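    The report's matrix layout (10 inclination columns by 19 flight-time rows) can be sketched as follows; `delta_v_fn` is a hypothetical stand-in for the trajectory solver, which the documentation does not reproduce.

```python
import numpy as np

def build_delta_v_matrix(max_flight_time_hr, delta_v_fn):
    """Build the 10 x 19 delta-V matrix described above.

    Columns: 10 inclinations from 0 to 90 degrees.
    Rows: 19 flight times from max_flight_time_hr down to
    max_flight_time_hr - 36 hours, in 2-hour decrements.
    delta_v_fn(inclination_deg, flight_time_hr) is a placeholder
    for the actual trajectory solver.
    """
    inclinations = np.linspace(0.0, 90.0, 10)                # column headers
    flight_times = max_flight_time_hr - 2.0 * np.arange(19)  # row headers
    matrix = np.array([[delta_v_fn(inc, t) for inc in inclinations]
                       for t in flight_times])
    return inclinations, flight_times, matrix
```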

  5. RADIATION WAVE DETECTOR

    DOEpatents

    Wouters, L.F.

    1958-10-28

    The detection of the shape and amplitude of a radiation wave is discussed, particularly an apparatus for automatically indicating at spaced intervals of time the radiation intensity at a fixed point as a measure of a radiation wave passing the point. The apparatus utilizes a number of photomultiplier tubes surrounding a scintillation type detector. For obtaining time spaced signals proportional to radiation at predetermined intervals, the photomultiplier tubes are actuated in sequence by electronic means following detector incidence of a predetermined radiation level. The time spaced signals so produced are then separately amplified and relayed to recording means.

  6. End-point controller design for an experimental two-link flexible manipulator using convex optimization

    NASA Technical Reports Server (NTRS)

    Oakley, Celia M.; Barratt, Craig H.

    1990-01-01

    Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.

  7. Simulating living organisms with populations of point vortices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmieder, R.W.

    1995-07-01

    The author has found that time-averaged images of small populations of point vortices can exhibit motions suggestive of the behavior of individual organisms. As an example, the author shows that collections of point vortices confined in a box and subjected to heating can generate patterns that are broadly similar to interspecies defense in certain sea anemones. It is speculated that other simple dynamical systems can be found to produce similar complex organism-like behavior.

  8. Multi-point laser ignition device

    DOEpatents

    McIntyre, Dustin L.; Woodruff, Steven D.

    2017-01-17

    A multi-point laser device comprising a plurality of optical pumping sources. Each optical pumping source is configured to create pumping excitation energy along a corresponding optical path directed through a high-reflectivity mirror and into substantially different locations within the laser media thereby producing atomic optical emissions at substantially different locations within the laser media and directed along a corresponding optical path of the optical pumping source. An output coupler and one or more output lenses are configured to produce a plurality of lasing events at substantially different times, locations or a combination thereof from the multiple atomic optical emissions produced at substantially different locations within the laser media. The laser media is a single continuous media, preferably grown on a single substrate.

  9. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. 
Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point-within-transect and park-level effect. Our results suggest that this model can provide insight into the detection process during avian surveys and reduce bias in estimates of relative abundance but is best applied to surveys of species with greater availability (e.g., breeding songbirds).

  10. Contact angle of unset elastomeric impression materials.

    PubMed

    Menees, Timothy S; Radhakrishnan, Rashmi; Ramp, Lance C; Burgess, John O; Lawson, Nathaniel C

    2015-10-01

    Some elastomeric impression materials are hydrophobic, and it is often necessary to take definitive impressions of teeth coated with some saliva. New hydrophilic materials have been developed. The purpose of this in vitro study was to compare contact angles of water and saliva on 7 unset elastomeric impression materials at 5 time points from the start of mixing. Two traditional polyvinyl siloxane (PVS) (Aquasil, Take 1), 2 modified PVS (Imprint 4, Panasil), a polyether (Impregum), and 2 hybrid (Identium, EXA'lence) materials were compared. Each material was flattened to 2 mm and a 5 μL drop of distilled water or saliva was dropped on the surface at 25 seconds (t0) after the start of mix. Contact angle measurements were made with a digital microscope at initial contact (t0), t1=2 seconds, t2=5 seconds, t3=50% working time, and t4=95% working time. Data were analyzed with a generalized linear mixed model analysis, and individual 1-way ANOVA and Tukey HSD post hoc tests (α=.05). For water, materials grouped into 3 categories at all time-points: the modified PVS and one hybrid material (Identium) produced the lowest contact angles, the polyether material was intermediate, and the traditional PVS materials and the other hybrid (EXA'lence) produced the highest contact angles. For saliva, Identium, Impregum, and Imprint 4 were in the group with the lowest contact angle at most time points. Modified PVS materials and one of the hybrid materials are more hydrophilic than traditional PVS materials when measured with water. Saliva behaves differently than water in contact angle measurement on unset impression material and produces a lower contact angle on polyether based materials. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
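    For illustration, the contact angle of a small sessile drop is often estimated from its silhouette with the spherical-cap half-angle formula theta = 2*atan(2h/w); this is a generic approximation, not necessarily the measurement method of the study's digital microscope software.

```python
import math

def contact_angle_deg(width, height):
    """Spherical-cap estimate of a sessile-drop contact angle from the
    drop's base width and apex height: theta = 2 * atan(2h / w).
    Valid for small drops where gravity flattening is negligible."""
    return math.degrees(2 * math.atan(2 * height / width))
```

A hemispherical drop (height equal to half the base width) gives 90 degrees, the usual boundary between hydrophilic and hydrophobic behavior.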

  11. The Charge Transfer Efficiency and Calibration of WFPC2

    NASA Technical Reports Server (NTRS)

    Dolphin, Andrew E.

    2000-01-01

    A new determination of WFPC2 photometric corrections is presented, using HSTphot reduction of the WFPC2 Omega Centauri and NGC 2419 observations from January 1994 through March 2000 and a comparison with ground-based photometry. No evidence is seen for any position-independent photometric offsets (the "long-short anomaly"); all systematic errors appear to be corrected with the CTE and zero point solution. The CTE loss time dependence is determined to be very significant in the Y direction, causing time-independent CTE solutions to be valid only for a small range of times. On average, the present solution produces corrections similar to Whitmore, Heyer, & Casertano, although with an improved functional form that produces less scatter in the residuals and determined with roughly a year of additional data. In addition to the CTE loss characterization, zero point corrections are also determined as functions of chip, gain, filter, and temperature. Of interest, there are chip-to-chip differences of order 0.01 - 0.02 magnitudes relative to the Holtzman et al. calibrations, and the present study provides empirical zero point determinations for the non-standard filters such as the frequently-used F450W, F606W, and F702W.

  12. Connections between Transcription Downstream of Genes and cis-SAGe Chimeric RNA.

    PubMed

    Chwalenia, Katarzyna; Qin, Fujun; Singh, Sandeep; Tangtrongstittikul, Panjapon; Li, Hui

    2017-11-22

    cis-Splicing between adjacent genes (cis-SAGe) is being recognized as one way to produce chimeric fusion RNAs. However, its detailed mechanism is not clear. A recent study revealed induction of transcription downstream of genes (DoGs) under osmotic stress. Here, we investigated the influence of osmotic stress on cis-SAGe chimeric RNAs and their connection to DoGs. We found the absence of induction of at least some cis-SAGe fusions and/or their corresponding DoGs at early time point(s). In fact, these DoGs and their cis-SAGe fusions are inversely correlated. This negative correlation changed to positive at a later time point. These results suggest a direct competition between the two categories of transcripts when the total pool of readthrough transcripts is limited at an early time point. At a later time point, DoGs and corresponding cis-SAGe fusions are both induced, indicating that total readthrough transcripts become more abundant. Finally, we observed overall enhancement of cis-SAGe chimeric RNAs in KCl-treated samples by RNA-Seq analysis.

  13. On the Motion of Agents across Terrain with Obstacles

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.

    2018-01-01

    The paper is devoted to finding the time-optimal route of an agent travelling across a region from a given source point to a given target point. At each point of this region, a maximum allowed speed is specified; this speed limit may vary in time. The continuous statement of this problem and the case when the agent travels on a grid with square cells are considered. In the latter case, time is also discrete, and the number of admissible directions of motion at each point in time is eight. The existence of an optimal solution of this problem is proved, and error estimates for the approximate solution obtained on the grid are derived. It is found that decreasing the size of cells below a certain limit does not further improve the approximation. These results can be used to estimate the quasi-optimal trajectory of the agent's motion across rugged terrain produced by an algorithm based on a cellular automaton that was earlier developed by the author.
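    The discrete case above (square cells, eight admissible directions, a per-cell speed limit) can be sketched with a standard shortest-path search; this toy version assumes a time-invariant speed field, whereas the paper also treats time-varying limits.

```python
import heapq
import math

def fastest_route_time(speed, source, target):
    """Dijkstra search over a grid with 8 admissible move directions.

    speed[r][c] is the maximum allowed speed in cell (r, c), taken here
    as time-invariant for simplicity. Step cost = move length / speed
    of the entered cell. Returns the minimal travel time, or inf if the
    target is unreachable (speed 0 marks a blocked cell)."""
    rows, cols = len(speed), len(speed[0])
    moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            return t
        if t > best.get((r, c), math.inf):
            continue  # stale heap entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                nt = t + math.hypot(dr, dc) / speed[nr][nc]
                if nt < best.get((nr, nc), math.inf):
                    best[(nr, nc)] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return math.inf
```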

  14. Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.

    PubMed

    Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R

    2013-01-02

    The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
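    A minimal falling-head sketch of such a model, assuming Darcy flow through the base and side wall of a cylinder-shaped filter (the paper's actual geometries are paraboloid and frustum, and its calibrated parameters are not reproduced here):

```python
import math

def simulate_filter(h0, K, wall_thickness, radius, t_end, dt=1.0):
    """Falling-head simulation for a hypothetical cylindrical filter.

    Darcy flow through the base (head h) plus the wetted side wall
    (average head h/2 over wetted area perimeter * h):
        Q = K / L * (A_base * h + perimeter * h * h / 2)
    Explicit Euler stepping of the water level; returns the final
    water level h and the cumulative filtrate volume V."""
    A_base = math.pi * radius ** 2
    perim = 2 * math.pi * radius
    h, V, t = h0, 0.0, 0.0
    while t < t_end and h > 0:
        Q = K / wall_thickness * (A_base * h + perim * h * (h / 2))
        h = max(0.0, h - Q * dt / A_base)  # level drops as water filters out
        V += Q * dt
        t += dt
    return h, V
```

Refilling more often keeps h (and hence Q) high, which is the mechanism behind the models' prediction that three fills per day yield roughly 45% more water than one.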

  15. Fiber optic sensor employing successively destroyed coupled points or reflectors for detecting shock wave speed and damage location

    DOEpatents

    Weiss, Jonathan D.

    1995-01-01

    A shock velocity and damage location sensor providing a means of measuring shock speed and damage location. The sensor consists of a long series of time-of-arrival "points" constructed with fiber optics. The fiber optic sensor apparatus measures shock velocity as the fiber sensor is progressively crushed while a shock wave proceeds in a direction along the fiber. The light received by a receiving means changes as time-of-arrival points are destroyed when the sensor is disturbed by the shock. The sensor may comprise a transmitting fiber bent into a series of loops and fused to a receiving fiber at various places, time-of-arrival points, along the receiving fiber's length. At the "points" of contact, where a portion of the light leaves the transmitting fiber and enters the receiving fiber, the loops would be required to allow the light to travel backwards through the receiving fiber toward a receiving means. The sensor may also comprise a single optical fiber wherein the time-of-arrival points are comprised of reflection planes distributed along the fiber's length. In this configuration, as the shock front proceeds along the fiber it destroys one reflector after another. The output received by a receiving means from this sensor may be a series of downward steps produced as the shock wave destroys one time-of-arrival point after another, or a nonsequential pattern of steps in the event time-of-arrival points are destroyed at any point along the sensor.

  16. Fiber optic sensor employing successively destroyed coupled points or reflectors for detecting shock wave speed and damage location

    DOEpatents

    Weiss, J.D.

    1995-08-29

    A shock velocity and damage location sensor providing a means of measuring shock speed and damage location is disclosed. The sensor consists of a long series of time-of-arrival "points" constructed with fiber optics. The fiber optic sensor apparatus measures shock velocity as the fiber sensor is progressively crushed while a shock wave proceeds in a direction along the fiber. The light received by a receiving means changes as time-of-arrival points are destroyed when the sensor is disturbed by the shock. The sensor may comprise a transmitting fiber bent into a series of loops and fused to a receiving fiber at various places, time-of-arrival points, along the receiving fiber's length. At the "points" of contact, where a portion of the light leaves the transmitting fiber and enters the receiving fiber, the loops would be required to allow the light to travel backwards through the receiving fiber toward a receiving means. The sensor may also comprise a single optical fiber wherein the time-of-arrival points are comprised of reflection planes distributed along the fiber's length. In this configuration, as the shock front proceeds along the fiber it destroys one reflector after another. The output received by a receiving means from this sensor may be a series of downward steps produced as the shock wave destroys one time-of-arrival point after another, or a nonsequential pattern of steps in the event time-of-arrival points are destroyed at any point along the sensor. 6 figs.

  17. An analytical model for the calculation of the change in transmembrane potential produced by an ultrawideband electromagnetic pulse.

    PubMed

    Hart, Francis X; Easterly, Clay E

    2004-05-01

    The electric field pulse shape and change in transmembrane potential produced at various points within a sphere by an intense, ultrawideband pulse are calculated in a four stage, analytical procedure. Spheres of two sizes are used to represent the head of a human and the head of a rat. In the first stage, the pulse is decomposed into its Fourier components. In the second stage, Mie scattering analysis (MSA) is performed for a particular point in the sphere on each of the Fourier components, and the resulting electric field pulse shape is obtained for that point. In the third stage, the long wavelength approximation (LWA) is used to obtain the change in transmembrane potential in a cell at that point. In the final stage, an energy analysis is performed. These calculations are performed at 45 points within each sphere. Large electric fields and transmembrane potential changes on the order of a millivolt are produced within the brain, but on a time scale on the order of nanoseconds. The pulse shape within the brain differs considerably from that of the incident pulse. Comparison of the results for spheres of different sizes indicates that scaling of such pulses across species is complicated. Published 2004 Wiley-Liss, Inc.
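    Stages one and two of the procedure (Fourier decomposition of the pulse, per-frequency scattering response, resynthesis at a point) can be sketched as below; `transfer(f)` is a placeholder for the Mie scattering coefficients at the chosen interior point, which are not reproduced here.

```python
import numpy as np

def field_at_point(pulse, dt, transfer):
    """Decompose the incident pulse into its Fourier components,
    scale each component by a frequency-dependent transfer factor
    (standing in for the MSA solution at one interior point), and
    resynthesize the local electric field pulse shape."""
    F = np.fft.rfft(pulse)                  # stage 1: Fourier components
    freqs = np.fft.rfftfreq(len(pulse), dt)
    return np.fft.irfft(F * transfer(freqs), n=len(pulse))  # stage 2
```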

  18. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  19. The four fixed points of scale invariant single field cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, BingKan, E-mail: bxue@princeton.edu

    2012-10-01

    We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having simultaneously time varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.

  20. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
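    The speedup rests on nearest-neighbour queries against a kd-tree. A minimal sketch of the data structure follows; note the paper stores the data points in the kd-tree and prunes candidate centers with a filtering strategy, whereas this toy example builds the tree over the cluster centers and answers one query at a time.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a 2-d tree, splitting on alternating axes.
    Node layout: (point, split_axis, left_subtree, right_subtree)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(tree, q, best=None):
    """Branch-and-bound nearest-neighbour search; returns (dist, point)."""
    if tree is None:
        return best
    point, axis, left, right = tree
    d = math.dist(point, q)
    if best is None or d < best[0]:
        best = (d, point)
    diff = q[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, q, best)
    if abs(diff) < best[0]:   # query ball crosses the splitting plane
        best = nearest(far, q, best)
    return best
```

In the ISODATA assignment step, each data point would be assigned to the cluster whose center this query returns, instead of scanning all centers.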

  1. A Fast Implementation of the Isodata Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Le Moigne, Jacqueline; Mount, David M.; Netanyahu, Nathan S.

    2007-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  2. Electronic method for autofluorography of macromolecules on two-D matrices

    DOEpatents

    Davidson, Jackson B.; Case, Arthur L.

    1983-01-01

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.

  3. Proton radiography and proton computed tomography based on time-resolved dose measurements

    NASA Astrophysics Data System (ADS)

    Testa, Mauro; Verburg, Joost M.; Rose, Mark; Min, Chul Hee; Tang, Shikui; Hassane Bentefour, El; Paganetti, Harald; Lu, Hsiao-Ming

    2013-11-01

    We present a proof of principle study of proton radiography and proton computed tomography (pCT) based on time-resolved dose measurements. We used a prototype, two-dimensional, diode-array detector capable of fast dose rate measurements to acquire proton radiographic images expressed directly in water equivalent path length (WEPL). The technique is based on the time dependence of the dose distribution delivered by a proton beam traversing a range modulator wheel in passive scattering proton therapy systems. The dose rate produced in the medium by such a system is periodic and has a unique pattern in time at each point along the beam path, and thus encodes the WEPL. By measuring the time-dose pattern at the point of interest, the WEPL to this point can be decoded. If one measures the time-dose patterns at points on a plane behind the patient for a beam with sufficient energy to penetrate the patient, the obtained 2D distribution of the WEPL forms an image. The technique requires only a 2D dosimeter array, and it uses only the clinical beam for a fraction of a second with negligible dose to the patient. We first evaluated the accuracy of the technique in determining the WEPL for static phantoms, aiming at beam range verification of the brain fields of medulloblastoma patients. Accurate beam ranges for these fields can significantly reduce the dose to the cranial skin of the patient and thus the risk of permanent alopecia. Second, we investigated the potential of the technique for real-time imaging of a moving phantom. Real-time tumor tracking by proton radiography could provide more accurate validation of tumor motion models, due to the more sensitive dependence of proton beams on tissue density compared with x-rays. Our radiographic technique is rapid (~100 ms) and simultaneous over the whole field, so it can image mobile tumors without the interplay effect that is inherently challenging for methods based on pencil beams.
Third, we present the reconstructed pCT images of a cylindrical phantom containing inserts of different materials. As for all conventional pCT systems, the method illustrated in this work produces tomographic images that are potentially more accurate than x-ray CT in providing maps of proton relative stopping power (RSP) in the patient without the need for converting x-ray Hounsfield units to proton RSP. All phantom tests produced reasonable results, given the currently limited spatial and time resolution of the prototype detector. The dose required to produce one radiographic image, with the current settings, is ˜0.7 cGy. Finally, we discuss a series of techniques to improve the resolution and accuracy of radiographic and tomographic images for the future development of a full-scale detector.
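    The WEPL decoding step, matching a measured time-dose pattern against reference patterns, might be sketched as a nearest-pattern lookup by normalized cross-correlation; the paper's actual calibration and pattern model are not reproduced here.

```python
import numpy as np

def decode_wepl(measured, library):
    """Return the WEPL whose reference time-dose pattern best matches
    the measured pattern by normalized cross-correlation.

    library: dict mapping WEPL value -> reference pattern (1-D arrays
    of equal length, one modulator-wheel period). Illustrative only."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))
    return max(library, key=lambda w: ncc(measured, library[w]))
```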

  4. Is Plagiarism Changing over Time? A 10-Year Time-Lag Study with Three Points of Measurement

    ERIC Educational Resources Information Center

    Curtis, Guy J.; Vardanega, Lucia

    2016-01-01

    Are more students cheating on assessment tasks in higher education? Despite ongoing media speculation concerning increased "copying and pasting" and ghostwritten assignments produced by "paper mills", few studies have charted historical trends in rates and types of plagiarism. Additionally, there has been little comment from…

  5. A Two-Step Approach for Producing an Ultrafine-Grain Structure in Cu-30Zn Brass (Postprint)

    DTIC Science & Technology

    2015-08-13

    A two-step approach involving cryogenic rolling and subsequent recrystallization annealing was developed to produce an ultrafine-grain structure in Cu-30Zn brass. The recrystallization anneal was conducted at 400 °C (0.55 Tm, where Tm is the melting point) for times ranging from 1 min to 10 hours, followed by water quenching.

  6. Measuring Diameters Of Large Vessels

    NASA Technical Reports Server (NTRS)

    Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.

    1990-01-01

    Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
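    The best-fit-circle step can be illustrated with the standard linear least-squares (Kasa) circle fit; this is a generic method, not necessarily the apparatus's published algorithm.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit from circumference points.

    Minimizing sum((x-cx)^2 + (y-cy)^2 - r^2)^2 is linear in the
    parameters (cx, cy, c) with c = r^2 - cx^2 - cy^2, since
    x^2 + y^2 = 2*cx*x + 2*cy*y + c. Returns (cx, cy, r)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

Residuals of each measured radius against the fitted r give the positive and negative deviations the apparatus reports around the circumference.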

  7. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
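    The second-pass outlier rejection can be sketched as a sliding-window median test; the 9-frame window and 25% tolerance below are illustrative stand-ins for the paper's 80 msec window and its actual decision rule.

```python
import statistics

def remove_outliers(f0, window=9, tol=0.25):
    """Flag pitch values deviating from the local median by more than
    tol (as a fraction of the median) within a sliding window.
    Flagged values become None, to be filled in by a later
    reconstruction pass from the original waveform."""
    half = window // 2
    out = list(f0)
    for i, v in enumerate(f0):
        lo, hi = max(0, i - half), min(len(f0), i + half + 1)
        med = statistics.median(f0[lo:hi])
        if med > 0 and abs(v - med) > tol * med:
            out[i] = None  # outlier: likely octave error or noise
    return out
```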

  8. Distinct Neurochemical Adaptations Within the Nucleus Accumbens Produced by a History of Self-Administered vs Non-Contingently Administered Intravenous Methamphetamine

    PubMed Central

    Lominac, Kevin D; Sacramento, Arianne D; Szumlinski, Karen K; Kippin, Tod E

    2012-01-01

    Methamphetamine is a highly addictive psychomotor stimulant yet the neurobiological consequences of methamphetamine self-administration remain under-characterized. Thus, we employed microdialysis in rats trained to self-administer intravenous (IV) infusions of methamphetamine (METH-SA) or saline (SAL) and a group of rats receiving non-contingent IV infusions of methamphetamine (METH-NC) at 1 or 21 days withdrawal to determine the dopamine and glutamate responses in the nucleus accumbens (NAC) to a 2 mg/kg methamphetamine intraperitoneal challenge. Furthermore, basal NAC extracellular glutamate content was assessed employing no net-flux procedures in these three groups at both time points. At both 1- and 21-day withdrawal points, methamphetamine elicited a rise in extracellular dopamine in SAL animals and this effect was sensitized in METH-NC rats. However, METH-SA animals showed a much greater sensitized dopamine response to the drug challenge compared with the other groups. Additionally, acute methamphetamine decreased extracellular glutamate in both SAL and METH-NC animals at both time-points. In contrast, METH-SA rats exhibited a modest and delayed rise in glutamate at 1-day withdrawal and this rise was sensitized at 21 days withdrawal. Finally, no net-flux microdialysis revealed elevated basal glutamate and increased extraction fraction at both withdrawal time-points in METH-SA rats. Although METH-NC rats exhibited no change in the glutamate extraction fraction, they exhibited a time-dependent elevation in basal glutamate levels. These data illustrate for the first time that a history of methamphetamine self-administration produces enduring changes in NAC neurotransmission and that non-pharmacological factors have a critical role in the expression of these methamphetamine-induced neurochemical adaptations. PMID:22030712

  9. Robust estimation of pulse wave transit time using group delay.

    PubMed

    Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C

    2014-03-01

    To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point, using the half-maximum of the curves, and TT-wave, using cross-correlation. High temporal resolution flow images were analyzed at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While TT-GD and TT-wave produced comparable results for velocity and flow waveforms, TT-point resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
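
    As a minimal illustration of the group-delay idea described in this record (a sketch, not the authors' TT-GD implementation), a transit time between two waveforms can be read off the slope of the cross-spectrum phase. The Gaussian test pulses, the 1-kHz sampling rate, and the 1% band-selection threshold below are all invented for the example:

```python
import numpy as np

def group_delay_transit_time(x, y, fs, power_frac=0.01):
    """Estimate the delay of y relative to x from the slope of the
    cross-spectrum phase over bands carrying signal energy, i.e. an
    average group delay across those frequency bands."""
    n = len(x)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    S = Y * np.conj(X)                       # cross-spectrum; phase = -2*pi*f*delay
    keep = np.abs(X) > power_frac * np.abs(X).max()  # bands with real signal energy
    phase = np.unwrap(np.angle(S[keep]))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)[keep]
    slope = np.polyfit(freqs, phase, 1)[0]   # radians per Hz
    return -slope / (2.0 * np.pi)            # seconds

fs = 1000.0                                  # 1-kHz sampling of the synthetic curves
t = np.arange(1024) / fs
asc = np.exp(-0.5 * ((t - 0.2) / 0.03) ** 2) # stand-in "ascending aortic" pulse
desc = np.roll(asc, 50)                      # same pulse arriving 50 ms later
print(round(group_delay_transit_time(asc, desc, fs) * 1000.0, 1))  # → 50.0
```

For a pure delay the phase is linear in frequency, so the fitted slope recovers Δt regardless of waveform shape, which is the robustness argument made above.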

  10. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
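
    A toy sketch of the polyphase (weighted overlap-add) front end that replaces multiplicative windowing before the FFT, as described above. The 64-channel, 8-tap geometry and the windowed-sinc prototype filter are illustrative choices, not the JPL design:

```python
import numpy as np

def polyphase_fft(x, h, M):
    """One analysis frame: window M*P input samples with the prototype
    filter h, fold into P segments of length M, sum, then length-M FFT."""
    P = len(h) // M
    seg = (x[:M * P] * h).reshape(P, M)      # apply prototype filter, fold
    return np.fft.fft(seg.sum(axis=0))       # M frequency channels

M, P = 64, 8
n = np.arange(M * P)
# Prototype lowpass filter: windowed sinc with cutoff near 1/M (illustrative).
h = np.sinc((n - M * P / 2) / M) * np.hamming(M * P)
tone = np.cos(2 * np.pi * 5 * n / M)         # tone centered on channel 5
spec = np.abs(polyphase_fft(tone, h, M))
print(int(np.argmax(spec[:M // 2])))         # → 5
```

Because the P filtered segments alias coherently before the FFT, each output channel behaves like a longer FIR filter rather than a short windowed one, which is where the reduced processing loss comes from.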

  11. 14 CFR Appendix G to Part 417 - Natural and Triggered Lightning Flight Commit Criteria

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... time. A cumulus cloud formed locally and a cirrus layer that is physically separated from that cumulus... launch point at the same time. Bright band means an enhancement of radar reflectivity caused by frozen.... Cloud means a visible mass of water droplets or ice crystals produced by condensation of water vapor in...

  12. 14 CFR Appendix G to Part 417 - Natural and Triggered Lightning Flight Commit Criteria

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... time. A cumulus cloud formed locally and a cirrus layer that is physically separated from that cumulus... launch point at the same time. Bright band means an enhancement of radar reflectivity caused by frozen.... Cloud means a visible mass of water droplets or ice crystals produced by condensation of water vapor in...

  13. The interaction between atomic displacement cascades and tilt symmetrical grain boundaries in α-zirconium

    NASA Astrophysics Data System (ADS)

    Kapustin, P.; Svetukhin, V.; Tikhonchev, M.

    2017-06-01

    Atomic displacement cascade simulations near symmetric tilt grain boundaries (GBs) in hexagonal close-packed zirconium were considered in this paper, followed by analysis of the resulting defect structures. Four symmetrical tilt GBs were considered: ∑14?, ∑14? with the rotation axis [0 0 0 1], and ∑32?, ∑32? with the rotation axis ?. The molecular dynamics method was used for the atomic displacement cascade simulation. The point defects produced in a cascade tended to accumulate near the GB plane, which acted as an obstacle to the spread of the cascade. Clustering of the cascade-produced point defects was also analyzed. Clusters of both types consisted mainly of single point defects. At the same time, vacancies formed clusters of a large size (more than 20 vacancies per cluster), while self-interstitial atom clusters were small-sized.

  14. Chronic Ethanol Exposure Produces Time- and Brain Region-Dependent Changes in Gene Coexpression Networks

    PubMed Central

    Osterndorff-Kahanek, Elizabeth A.; Becker, Howard C.; Lopez, Marcelo F.; Farris, Sean P.; Tiwari, Gayatri R.; Nunez, Yury O.; Harris, R. Adron; Mayfield, R. Dayne

    2015-01-01

    Repeated ethanol exposure and withdrawal in mice increases voluntary drinking and represents an animal model of physical dependence. We examined time- and brain region-dependent changes in gene coexpression networks in amygdala (AMY), nucleus accumbens (NAC), prefrontal cortex (PFC), and liver after four weekly cycles of chronic intermittent ethanol (CIE) vapor exposure in C57BL/6J mice. Microarrays were used to compare gene expression profiles at 0, 8, and 120 hours following the last ethanol exposure. Each brain region exhibited a large number of differentially expressed genes (2,000-3,000) at the 0- and 8-hour time points, but fewer changes were detected at the 120-hour time point (400-600). Within each region, there was little gene overlap across time (~20%). All brain regions were significantly enriched with differentially expressed immune-related genes at the 8-hour time point. Weighted gene correlation network analysis identified modules that were highly enriched with differentially expressed genes at the 0- and 8-hour time points with virtually no enrichment at 120 hours. Modules enriched for both ethanol-responsive and cell-specific genes were identified in each brain region. These results indicate that chronic alcohol exposure causes global 'rewiring' of coexpression systems involving glial and immune signaling as well as neuronal genes. PMID:25803291

  15. Application of Time-Frequency Representations To Non-Stationary Radar Cross Section

    DTIC Science & Technology

    2009-03-01

    The three-dimensional plot produced by a TFR allows one to determine which spectral components of a signal vary with time [25... a range bin (of width cT/2) from the stepped frequency waveform. 2. Cancel the clutter (stationary components) by zeroing out points associated with ...generating an infinite number of bilinear Time Frequency distributions based on a generalized equation and a changeable

  16. Gas release and conductivity modification studies

    NASA Technical Reports Server (NTRS)

    Linson, L. M.; Baxter, D. C.

    1979-01-01

    The behavior of gas clouds produced by releases from orbital velocity, in either a point release or venting mode, is described by modified snowplow equations valid in an intermediate altitude regime. Quantitative estimates are produced for the time dependence of the cloud radius, the average internal energy, the translational velocity, and the distance traveled. The dependence of these quantities on the assumed density profile, the internal energy of the gas, and the ratio of specific heats is examined. The new feature is the inclusion of the effect of the large orbital velocity. The resulting gas cloud models are used to calculate the characteristics of the field-line-integrated Pedersen conductivity enhancements that would be produced by the release of barium thermite at orbital velocity in either the point release or venting mode, as a function of release altitude and chemical payload weight.

  17. Subprimal purchasing and merchandising decisions for pork: relationship to retail value.

    PubMed

    Lorenzen, C L; Walter, J P; Dockerty, T R; Griffin, D B; Johnson, H K; Savell, J W

    1996-01-01

    To assess retail value and profitability, cutting test data were obtained in a simulated retail cutting room for boxed pork subprimals, bone-in loins (n = 180), boneless loins (n = 94), Boston butts (n = 148), fresh hams (n = 28), and boneless hams (n = 23). Processing times (seconds) and retail weights (kilograms) were used to determine relative value. Cutting style affected (P < .05) value differential (US$/subprimal) for bone-in and boneless loins. When cutting styles within subprimals were pooled, value differential was affected (P < .05) by purchasing specification for bone-in loins, boneless loins, Boston butts, and inside fresh hams. Processing bone-in loins to a boneless end point produced a greater (P < .05) value differential and percentage of gross margin than a bone-in retail end point. Bone-in loins fabricated to a boneless retail end point produced a greater (P < .05) value differential and percentage of gross margin than boneless loins fabricated to the same end point. The increase in retail value can be attributed to the increased number and weight of retail cuts produced from bone-in loins. The thick, boneless loin cutting style produced a greater (P < .05) value differential and percentage of gross margin as a result of a lower (P < .05) cost of fabrication and increased value of retail cuts than the thin, boneless cutting style. In general, boneless pork cutting methods were more profitable than bone-in cutting methods regardless of subprimal.

  18. AN OPTIMIZED 64X64 POINT TWO-DIMENSIONAL FAST FOURIER TRANSFORM

    NASA Technical Reports Server (NTRS)

    Miko, J.

    1994-01-01

    Scientists at Goddard have developed an efficient and powerful program-- An Optimized 64x64 Point Two-Dimensional Fast Fourier Transform-- which combines the performance of real and complex valued one-dimensional Fast Fourier Transforms (FFT's) to execute a two-dimensional FFT and its power spectrum coefficients. These coefficients can be used in many applications, including spectrum analysis, convolution, digital filtering, image processing, and data compression. The program's efficiency results from its technique of expanding all arithmetic operations within one 64-point FFT; its high processing rate results from its operation on a high-speed digital signal processor. For non-real-time analysis, the program requires as input an ASCII data file of 64x64 (4096) real valued data points. As output, this analysis produces an ASCII data file of 64x64 power spectrum coefficients. To generate these coefficients, the program employs a row-column decomposition technique. First, it performs a radix-4 one-dimensional FFT on each row of input, producing complex valued results. Then, it performs a one-dimensional FFT on each column of these results to produce complex valued two-dimensional FFT results. Finally, the program sums the squares of the real and imaginary values to generate the power spectrum coefficients. The program requires a Banshee accelerator board with 128K bytes of memory from Atlanta Signal Processors (404/892-7265) installed on an IBM PC/AT compatible computer (DOS ver. 3.0 or higher) with at least one 16-bit expansion slot. For real-time operation, an ASPI daughter board is also needed. The real-time configuration reads 16-bit integer input data directly into the accelerator board, operating on 64x64 point frames of data. The program's memory management also allows accumulation of the coefficient results. The real-time processing rate to calculate and accumulate the 64x64 power spectrum output coefficients is less than 17.0 mSec. 
Documentation is included in the price of the program. Source code is written in C, 8086 Assembly, and Texas Instruments TMS320C30 Assembly Languages. This program is available on a 5.25 inch 360K MS-DOS format diskette. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
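
    The row-column decomposition described above can be sketched in a few lines of numpy (a host-side illustration, not the DSP-board implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.standard_normal((64, 64))        # 64x64 real-valued input frame

# Row-column decomposition: 1-D FFT of each row, then of each column,
# then sum of squared real and imaginary parts for the power spectrum.
rows = np.fft.fft(frame, axis=1)             # complex row transforms
full = np.fft.fft(rows, axis=0)              # complex 2-D FFT
power = full.real ** 2 + full.imag ** 2      # 64x64 power spectrum coefficients

# Matches the library's direct 2-D transform.
print(np.allclose(power, np.abs(np.fft.fft2(frame)) ** 2))  # → True
```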

  19. Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application

    DOEpatents

    Davidson, J.B.; Case, A.L.

    1981-12-30

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.

  20. Seismicity of the Bering Glacier Region: Inferences from Relocations Using Data from STEEP

    NASA Astrophysics Data System (ADS)

    Panessa, A. L.; Pavlis, G. L.; Hansen, R. A.; Ruppert, N.

    2008-12-01

    We relocated earthquakes recorded from 1990 to 2007 in the area of the Bering Glacier in southeastern Alaska to test a hypothesis that faults in this area are linked to glaciers. We used waveform correlation to improve arrival time measurements for data from all broadband channels including all the data from the STEEP experiment. We used a novel form of correlation based on interactive array processing of common receiver gathers linked to a three-dimensional grid of control points. This procedure produced 8556 gathers that we processed interactively to produce improved arrival time estimates. The interactive procedure allowed us to select which events in each gather were sufficiently similar to warrant correlation. Redundancy in the result was resolved in a secondary correlation that aligned event stacks of the same station-event pair associated with multiple control points. This procedure yielded only 2240 correlated waveforms and modified a total of 524 arrivals in a database of 12263 arrivals. The correlation procedure changed arrival times on 145 of 509 events in this database. Events with arrivals constrained by correlation were not clustered but were randomly distributed throughout the study area. We used a version of the Progressive Multiple Event Location (PMEL) method that analyzed data at each control point to invert for relative locations and a set of path anomalies for each control point. We applied the PMEL procedure with different velocity models and constraints and compared the results to a HypoDD solution produced from the original arrival time data. The relocations are all significant improvements over the standard single-event, catalog locations. The relocations suggest the seismicity in this region is mostly linked to fold and thrust deformation in the Yakutat block.
There is a suggestion of a north-dipping trend to much of the seismicity, but the dominant trend is a fairly diffuse cloud of events largely confined to the Yakutat block south of the Bagley Icefield. This is consistent with the recently published tectonic model by Berger et al. (2008).
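
    The core of the waveform-correlation step, refining a relative arrival time from the peak of a cross-correlation, can be illustrated on synthetic traces; the wavelet, sampling rate, and shift below are invented for the example:

```python
import numpy as np

def correlation_lag(ref, trace, fs):
    """Lag (in seconds) of `trace` relative to `ref`, taken from the
    peak of the full cross-correlation."""
    c = np.correlate(trace, ref, mode="full")
    return (np.argmax(c) - (len(ref) - 1)) / fs

fs = 100.0                                   # samples per second
t = np.arange(500) / fs
# A synthetic seismic wavelet and a copy arriving 12 samples later.
ref = np.exp(-((t - 2.0) ** 2) / 0.01) * np.sin(2 * np.pi * 8.0 * t)
shifted = np.roll(ref, 12)
print(correlation_lag(ref, shifted, fs))     # → 0.12
```

Subsample refinement (e.g., interpolating around the correlation peak) is usually added in practice; only the integer-sample case is shown here.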

  1. 56Fe particle exposure results in a long-lasting increase in a cellular index of genomic instability and transiently suppresses adult hippocampal neurogenesis in vivo

    NASA Astrophysics Data System (ADS)

    DeCarolis, Nathan A.; Rivera, Phillip D.; Ahn, Francisca; Amaral, Wellington Z.; LeBlanc, Junie A.; Malhotra, Shveta; Shih, Hung-Ying; Petrik, David; Melvin, Neal R.; Chen, Benjamin P. C.; Eisch, Amelia J.

    2014-07-01

    The high-LET HZE particles from galactic cosmic radiation pose tremendous health risks to astronauts, as they may incur sub-threshold brain injury or maladaptations that may lead to cognitive impairment. The health effects of HZE particles are difficult to predict and unfeasible to prevent. This underscores the importance of estimating radiation risks to the central nervous system as a whole as well as to specific brain regions like the hippocampus, which is central to learning and memory. Given that neurogenesis in the hippocampus has been linked to learning and memory, we investigated the response and recovery of neurogenesis and neural stem cells in the adult mouse hippocampal dentate gyrus after HZE particle exposure using two nestin transgenic reporter mouse lines to label and track radial glia stem cells (Nestin-GFP and Nestin-CreERT2/R26R:YFP mice, respectively). Mice were subjected to 56Fe particle exposure (0 or 1 Gy, at either 300 or 1000 MeV/n) and brains were harvested at early (24 h), intermediate (7 d), and/or long time points (2-3 mo) post-irradiation. 56Fe particle exposure resulted in a robust increase in 53BP1+ foci at both the intermediate and long time points post-irradiation, suggesting long-term genomic instability in the brain. However, 56Fe particle exposure only produced a transient decrease in immature neuron number at the intermediate time point, with no significant decrease at the long time point post-irradiation. 56Fe particle exposure similarly produced a transient decrease in dividing progenitors, with fewer progenitors labeled at the early time point but equal number labeled at the intermediate time point, suggesting a recovery of neurogenesis. Notably, 56Fe particle exposure did not change the total number of nestin-expressing neural stem cells. These results highlight that despite the persistence of an index of genomic instability, 56Fe particle-induced deficits in adult hippocampal neurogenesis may be transient. 
These data support the regenerative capacity of the adult SGZ after HZE particle exposure and encourage additional inquiry into the relationship between radial glia stem cells and cognitive function after HZE particle exposure.

  2. Simulation and analysis of chemical release in the ionosphere

    NASA Astrophysics Data System (ADS)

    Gao, Jing-Fan; Guo, Li-Xin; Xu, Zheng-Wen; Zhao, Hai-Sheng; Feng, Jie

    2018-05-01

    Ionospheric plasma inhomogeneities produced by a single-point chemical release have a simple space-time structure and cannot affect radio waves at frequencies above the Very High Frequency (VHF) band. To produce a more complicated ionospheric plasma perturbation structure and trigger instability phenomena, a multiple-point chemical release scheme is presented in this paper. The effects of chemical release on low-latitude ionospheric plasma are estimated from the linear instability growth rate, where a high growth rate corresponds to strong irregularities, a high probability of ionospheric scintillation occurrence, and high scintillation intensity for the duration of the scintillation. The amplitude and phase scintillations at 150 MHz, 400 MHz, and 1000 MHz, for propagation through the disturbed area, are calculated based on multiple phase screen (MPS) theory.

  3. Development of a short version of the modified Yale Preoperative Anxiety Scale.

    PubMed

    Jenkins, Brooke N; Fortier, Michelle A; Kaplan, Sherrie H; Mayes, Linda C; Kain, Zeev N

    2014-09-01

    The modified Yale Preoperative Anxiety Scale (mYPAS) is the current "criterion standard" for assessing child anxiety during induction of anesthesia and has been used in >100 studies. This observational instrument covers 5 items and is typically administered at 4 perioperative time points. Application of this complex instrument in busy operating room (OR) settings, however, presents a challenge. In this investigation, we examined whether the instrument could be modified and made easier to use in OR settings. This study used qualitative methods, principal component analyses, Cronbach αs, and effect sizes to create the mYPAS-Short Form (mYPAS-SF) and reduce the number of assessment time points. Data were obtained from multiple patients (N = 3798; mean age = 5.63 years) who were recruited in previous investigations using the mYPAS over the past 15 years. After qualitative analysis, the "use of parent" item was eliminated due to content overlap with other items. The reduced item set accounted for 82% or more of the variance in child anxiety and produced a Cronbach α of at least 0.92. To reduce the number of assessment time points, a minimum Cohen d effect size criterion of 0.48 change in mYPAS score across time points was used. This led to eliminating the walk to the OR and entrance to the OR time points. Reducing the mYPAS to 4 items, creating the mYPAS-SF that can be administered at 2 time points, retained the accuracy of the measure while allowing the instrument to be more easily used in clinical research settings.
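
    A minimal sketch of the Cohen d screening rule used above to decide whether a time point carries enough change to keep; the scores below are simulated, not mYPAS data:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d between two samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(b) - np.mean(a)) / pooled

# Hypothetical anxiety scores at two consecutive time points (shift of ~1 SD).
rng = np.random.default_rng(1)
before = rng.normal(30.0, 10.0, 200)
after = rng.normal(40.0, 10.0, 200)
print(cohens_d(before, after) > 0.48)        # change large enough to keep the time point
```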

  4. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
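
    The voxel-based flag map idea, quantizing incoming points to a 3-D grid and keeping only the first point per occupied cell to drop redundant points, can be sketched as follows; the 0.1-unit voxel size and synthetic cloud are arbitrary choices, not the paper's parameters:

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Flag-map style reduction: quantize each point to a 3-D grid cell
    and keep only the first point that lands in each occupied cell."""
    keys = np.floor(points / voxel).astype(np.int64)   # voxel index per point
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]                      # one survivor per voxel

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, (10000, 3))    # dense synthetic scan
sparse = voxel_downsample(cloud, voxel=0.1)  # one point per occupied 0.1-unit voxel
print(len(sparse) <= 1000)                   # → True (at most 10^3 cells in the unit cube)
```

In an incremental setting the flag map would persist across frames, so already-occupied voxels reject new points in O(1) per point; the batch form above shows only the deduplication step.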

  5. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  6. Simple reaction time to the onset of time-varying sounds.

    PubMed

    Schlittenlacher, Josef; Ellermeier, Wolfgang

    2015-10-01

    Although auditory simple reaction time (RT) is usually defined as the time elapsing between the onset of a stimulus and a recorded reaction, a sound cannot be specified by a single point in time. Therefore, the present work investigates how the period of time immediately after onset affects RT. By varying the stimulus duration between 10 and 500 msec, this critical duration was determined to fall between 32 and 40 msec for a 1-kHz pure tone at 70 dB SPL. In a second experiment, the role of the buildup was further investigated by varying the rise time and its shape. The increment in RT for extending the rise time by a factor of ten was about 7 to 8 msec. There was no statistically significant difference in RT between a Gaussian and a linear rise shape. A third experiment varied the modulation frequency and onset point of amplitude-modulated tones, producing onsets at different initial levels with differently rapid increases or decreases immediately afterwards. The results of all three experiments were explained very well by a straightforward extension of the parallel grains model (Miller and Ulrich, Cogn. Psychol. 46, 101-151, 2003), a probabilistic race model employing many parallel channels. The extension of the model to time-varying sounds made the activation of a grain depend on intensity as a function of time rather than on a constant level. A second approach based on mechanisms known from loudness modeling produced less accurate predictions.
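
    A toy simulation in the spirit of the extended parallel grains model described above (all parameters here, grain count, gain, response criterion, and motor time, are invented, not the fitted model), showing that a slower rise time lengthens the simulated RT:

```python
import numpy as np

def simulate_rt(envelope, dt=0.001, n_grains=100, k=20, gain=50.0,
                base_rt=0.08, rng=None):
    """Race across many parallel 'grains': each grain's instantaneous
    firing rate follows the stimulus intensity envelope; a response is
    triggered once k grains have fired, plus a fixed motor time."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = 1.0 - np.exp(-gain * envelope * dt)          # per-step firing probability
    fired = rng.random((n_grains, len(envelope))) < p
    first = np.argmax(fired, axis=1).astype(float)   # first firing step per grain
    first[~fired.any(axis=1)] = np.inf               # grains that never fire
    kth = np.sort(first)[k - 1]                      # step at which the k-th grain fires
    return base_rt + kth * dt

t = np.arange(0, 0.5, 0.001)
fast_rise = np.clip(t / 0.003, 0.0, 1.0)             # 3-ms linear rise to full level
slow_rise = np.clip(t / 0.030, 0.0, 1.0)             # 30-ms linear rise
rng = np.random.default_rng(0)
rts_fast = [simulate_rt(fast_rise, rng=rng) for _ in range(200)]
rts_slow = [simulate_rt(slow_rise, rng=rng) for _ in range(200)]
print(np.mean(rts_slow) > np.mean(rts_fast))         # slower rise -> longer mean RT
```

The only change from a constant-level race model is that the per-step firing probability tracks the envelope, which is exactly the extension to time-varying sounds that the abstract describes.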

  7. MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods

    NASA Astrophysics Data System (ADS)

    Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.

    2012-03-01

    Various MRI techniques were considered with respect to imaging of aqueous flow fields in low-grade copper ore. Spin echo frequency encoded techniques were shown to produce unacceptable image distortions, which led to pure phase encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to produce distortion-free images of the ore packings at 2 T. The signal to noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.

  8. Short-term physiological responses of wild and hatchery-produced red drum during angling

    USGS Publications Warehouse

    Gallman, E.A.; Isely, J.J.; Tomasso, J.R.; Smith, T.I.J.

    1999-01-01

    Serum cortisol concentrations, plasma glucose concentrations, plasma lactate concentrations, and plasma osmolalities increased in red drum Sciaenops ocellatus (26.0-65.5 cm total length) during angling in estuarine waters (17-33 g/L salinity, 21-31??C). Angling time varied from as fast as possible (10 s) to the point when fish ceased resisting (up to 350 s). The increases in the physiological characteristics were similar in wild and hatchery-produced fish. This study indicates that hatchery-produced red drum may be used in catch-and-release studies to simulate the responses of wild fish.

  9. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  10. Method of producing nano-scaled graphene and inorganic platelets and their nanocomposites

    DOEpatents

    Jang, Bor Z [Centerville, OH; Zhamu, Aruna [Centerville, OH

    2011-02-22

    Disclosed is a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm, and often between 0.34 nm and 1.02 nm. The method comprises: (a) subjecting the layered material in a powder form to a halogen vapor at a first temperature above the melting point or sublimation point of the halogen at a sufficient vapor pressure and for a duration of time sufficient to cause the halogen molecules to penetrate an interlayer space of the layered material, forming a stable halogen-intercalated compound; and (b) heating the halogen-intercalated compound at a second temperature above the boiling point of the halogen, allowing halogen atoms or molecules residing in the interlayer space to exfoliate the layered material to produce the platelets. Alternatively, rather than heating, step (a) is followed by a step of dispersing the halogen-intercalated compound in a liquid medium which is subjected to ultrasonication for exfoliating the halogen-intercalated compound to produce the platelets, which are dispersed in the liquid medium. The halogen can be readily captured and re-used, thereby significantly reducing the impact of halogen to the environment. The method can further include a step of dispersing the platelets in a polymer or monomer solution or suspension as a precursor step to nanocomposite fabrication.

  11. Method of producing nano-scaled graphene and inorganic platelets and their nanocomposites

    DOEpatents

    Jang, Bor Z [Centerville, OH; Zhamu, Aruna [Centerville, OH

    2012-02-14

    Disclosed is a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm, and often between 0.34 nm and 1.02 nm. The method comprises: (a) subjecting the layered material in a powder form to a halogen vapor at a first temperature above the melting point or sublimation point of the halogen at a sufficient vapor pressure and for a duration of time sufficient to cause the halogen molecules to penetrate an interlayer space of the layered material, forming a stable halogen-intercalated compound; and (b) heating the halogen-intercalated compound at a second temperature above the boiling point of the halogen, allowing halogen atoms or molecules residing in the interlayer space to exfoliate the layered material to produce the platelets. Alternatively, rather than heating, step (a) is followed by a step of dispersing the halogen-intercalated compound in a liquid medium which is subjected to ultrasonication for exfoliating the halogen-intercalated compound to produce the platelets, which are dispersed in the liquid medium. The halogen can be readily captured and re-used, thereby significantly reducing the impact of halogen to the environment. The method can further include a step of dispersing the platelets in a polymer or monomer solution or suspension as a precursor step to nanocomposite fabrication.

  12. Comparison of polyacrylamide and agarose gel thin-layer isoelectric focusing for the characterization of beta-lactamases.

    PubMed

    Vecoli, C; Prevost, F E; Ververis, J J; Medeiros, A A; O'Leary, G P

    1983-08-01

    Plasmid-mediated beta-lactamases from strains of Escherichia coli and Pseudomonas aeruginosa were separated by isoelectric focusing on a 0.8-mm thin-layer agarose gel with a pH gradient of 3.5 to 9.5. Their banding patterns and isoelectric points were compared with those obtained with a 2.0-mm polyacrylamide gel as the support medium. The agarose method produced banding patterns and isoelectric points which corresponded to the polyacrylamide gel data for most samples. Differences were observed for HMS-1 and PSE-1 beta-lactamases. The HMS-1 sample produced two highly resolvable enzyme bands in agarose gels rather than the single faint enzyme band observed on polyacrylamide gels. The PSE-1 sample showed an isoelectric point shift of 0.2 pH unit between polyacrylamide and agarose gel (pI 5.7 and 5.5, respectively). The short focusing time, lack of toxic hazard, and ease of formulation make agarose a practical medium for the characterization of beta-lactamases.

  13. Comparison of polyacrylamide and agarose gel thin-layer isoelectric focusing for the characterization of beta-lactamases.

    PubMed Central

    Vecoli, C; Prevost, F E; Ververis, J J; Medeiros, A A; O'Leary, G P

    1983-01-01

    Plasmid-mediated beta-lactamases from strains of Escherichia coli and Pseudomonas aeruginosa were separated by isoelectric focusing on a 0.8-mm thin-layer agarose gel with a pH gradient of 3.5 to 9.5. Their banding patterns and isoelectric points were compared with those obtained with a 2.0-mm polyacrylamide gel as the support medium. The agarose method produced banding patterns and isoelectric points which corresponded to the polyacrylamide gel data for most samples. Differences were observed for HMS-1 and PSE-1 beta-lactamases. The HMS-1 sample produced two highly resolvable enzyme bands in agarose gels rather than the single faint enzyme band observed on polyacrylamide gels. The PSE-1 sample showed an isoelectric point shift of 0.2 pH unit between polyacrylamide and agarose gel (pI 5.7 and 5.5, respectively). The short focusing time, lack of toxic hazard, and ease of formulation make agarose a practical medium for the characterization of beta-lactamases. PMID:6605714

  14. Closed Loop solar array-ion thruster system with power control circuitry

    NASA Technical Reports Server (NTRS)

    Gruber, R. P. (Inventor)

    1979-01-01

    A power control circuit connected between a solar array and an ion thruster receives voltage and current signals from the solar array. The control circuit multiplies the voltage and current signals together to produce a power signal which is differentiated with respect to time. The differentiator output is detected by a zero crossing detector and, after suitable shaping, the detector output is phase compared with a clock in a phase demodulator. An integrator receives no output from the phase demodulator when the operating point is at the maximum power but is driven toward the maximum power point for non-optimum operation. A ramp generator provides minor variations in the beam current reference signal produced by the integrator in order to obtain the first derivative of power.
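    The control loop above tracks the array's maximum power point in analog hardware (multiplier, differentiator, zero-crossing detector, phase demodulator). A purely illustrative digital analogue, not the patent's circuit, is a perturb-and-observe tracker; the array model below is a hypothetical toy I-V curve:

```python
def array_power(v, v_oc=100.0, i_sc=5.0):
    """Hypothetical solar-array model: current falls off toward open-circuit
    voltage v_oc. Toy curve for illustration only, not from the patent."""
    i = i_sc * (1.0 - (v / v_oc) ** 3)
    return v * i

def track_mpp(v=10.0, step=0.5, iters=500):
    """Perturb-and-observe: climb the power curve; when a step decreases
    power (dP changes sign), reverse direction, analogous to the phase
    comparison driving the integrator in the patent's circuit."""
    p = array_power(v)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = array_power(v_new)
        if p_new < p:          # overshot the peak: reverse
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = track_mpp()
```

The operating point ends up oscillating within one step of the true maximum, mirroring the minor variations the ramp generator introduces to keep estimating the power derivative.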

  15. Genetic algorithms applied to the scheduling of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.

    1989-01-01

    A prototype system employing a genetic algorithm (GA) has been developed to support the scheduling of the Hubble Space Telescope. A non-standard knowledge structure is used and appropriate genetic operators have been created. Several different crossover styles (random point selection, evolving points, and smart point selection) are tested and the best GA is compared with a neural network (NN) based optimizer. The smart crossover operator produces the best results and the GA system is able to evolve complete schedules using it. The GA is not as time-efficient as the NN system and the NN solutions tend to be better.
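    Of the crossover styles compared above, random point selection is the simplest to state. A minimal sketch of one-point crossover on generic list chromosomes (the prototype's non-standard knowledge structure and "smart" operator are more elaborate):

```python
import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Random-point crossover: children swap tails at a randomly chosen cut."""
    assert len(parent_a) == len(parent_b)
    cut = rng.randrange(1, len(parent_a))   # cut strictly inside the string
    child1 = parent_a[:cut] + parent_b[cut:]
    child2 = parent_b[:cut] + parent_a[cut:]
    return child1, child2

random.seed(0)
c1, c2 = one_point_crossover([0] * 8, [1] * 8)
```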

  16. Liquid crystal devices especially for use in liquid crystal point diffraction interferometer systems

    NASA Technical Reports Server (NTRS)

    Marshall, Kenneth L. (Inventor)

    2009-01-01

    Liquid crystal point diffraction interferometer (LCPDI) systems can provide real-time, phase-shifting interferograms useful for characterizing static optical properties (wavefront aberrations, lensing, or wedge) of optical elements, as well as dynamic, time-resolved events (temperature fluctuations and gradients, motion) in physical systems. These systems use improved LCPDI cells that employ a "structured" substrate or substrates, in which the structural features are produced by thin-film deposition or photoresist processing to provide a diffractive element that is an integral part of the cell substrate(s). The LC material used in the device may be doped with a "contrast-compensated" mixture of positive and negative dichroic dyes.

  17. Liquid crystal devices especially for use in liquid crystal point diffraction interferometer systems

    DOEpatents

    Marshall, Kenneth L [Rochester, NY

    2009-02-17

    Liquid crystal point diffraction interferometer (LCPDI) systems can provide real-time, phase-shifting interferograms useful for characterizing static optical properties (wavefront aberrations, lensing, or wedge) of optical elements, as well as dynamic, time-resolved events (temperature fluctuations and gradients, motion) in physical systems. These systems use improved LCPDI cells that employ a "structured" substrate or substrates, in which the structural features are produced by thin-film deposition or photoresist processing to provide a diffractive element that is an integral part of the cell substrate(s). The LC material used in the device may be doped with a "contrast-compensated" mixture of positive and negative dichroic dyes.

  18. Study of Laser Reflectivity on Skin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oidor-Garcia, J. J. J.; Trevino-Palacios, C. G.

    2008-08-11

    The skin's response to light can manifest as a temperature increase or the creation of biochemical byproducts; further studies are required to assess the light's effect. Because this response changes over time, it can produce discrepancies between similar studies. In this work we present a Low Level Laser Therapy (LLLT) study with feedback. We study the time response of the reflectivity of a 980 nm, 25 mW laser diode modulated at frequencies close to 40 kHz and detect the reflected light on a silicon photodiode, finding no direct correlation between different test points or individuals, while finding reproducible responses within the same individual and test point.

  19. Selected Papers on Low-Energy Antiprotons and Possible Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Robert

    1998-09-19

    The only realistic means by which to create a facility at Fermilab to produce large amounts of low energy antiprotons is to use resources which already exist. There is simply too little money and manpower at this point in time to generate new accelerators on a time scale before the turn of the century. Therefore, innovation is required to modify existing equipment to provide the services required by experimenters.

  20. Finding Regions of Interest on Toroidal Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Sinha, Rishi R; Jones, Chad

    2011-02-09

    Fusion promises to provide clean and safe energy, and a considerable amount of research effort is underway to turn this aspiration into reality. This work focuses on a building block for analyzing data produced from the simulation of microturbulence in magnetic confinement fusion devices: the task of efficiently extracting regions of interest. As in many other simulations that produce large amounts of data, careful study of the "interesting" parts of the data is critical to gaining understanding. In this paper, we present an efficient approach for finding these regions of interest. Our approach takes full advantage of the underlying mesh structure in magnetic coordinates to produce a compact representation of the mesh points inside the regions, together with an efficient connected component labeling algorithm for constructing regions from points. This approach scales linearly with the surface area of the regions of interest instead of the volume, as shown by both computational complexity analysis and experimental measurements. Furthermore, the new approach is hundreds of times faster than a recently published method based on Cartesian coordinates.
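    A generic sketch of the region-construction step, connected component labeling of marked mesh points, assuming a simple toroidal grid with periodic wraparound in both directions (the paper's mesh structure in magnetic coordinates is more elaborate):

```python
from collections import deque

def label_regions(marked, nx, ny):
    """Group marked (i, j) mesh points into connected regions on an
    nx-by-ny toroidal grid: neighbors wrap around both grid directions."""
    marked = set(marked)
    regions, seen = [], set()
    for seed in marked:
        if seed in seen:
            continue
        comp, queue = [], deque([seed])
        seen.add(seed)
        while queue:                       # breadth-first flood fill
            i, j = queue.popleft()
            comp.append((i, j))
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = ((i + di) % nx, (j + dj) % ny)   # periodic wraparound
                if nb in marked and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        regions.append(comp)
    return regions

# Points at opposite grid edges belong to one region on a torus:
regs = label_regions([(0, 0), (4, 0), (2, 2)], nx=5, ny=5)
```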

  1. Controlled chemical stabilization of polyvinyl precursor fiber, and high strength carbon fiber produced therefrom

    DOEpatents

    Naskar, Amit K.

    2016-12-27

    Method for the preparation of carbon fiber, which comprises: (i) immersing functionalized polyvinyl precursor fiber into a liquid solution having a boiling point of at least 60 °C; (ii) heating the liquid solution to a first temperature of at least 25 °C at which the functionalized precursor fiber engages in an elimination-addition equilibrium while a tension of at least 0.1 MPa is applied to the fiber; (iii) gradually raising the first temperature to a final temperature that is at least 20 °C above the first temperature and up to the boiling point of the liquid solution for sufficient time to convert the functionalized precursor fiber to a pre-carbonized fiber; and (iv) subjecting the pre-carbonized fiber produced according to step (iii) to high temperature carbonization conditions to produce the final carbon fiber. Articles and devices containing the fibers, including woven and non-woven mats or paper forms of the fibers, are also described.

  2. Evaluating lidar point densities for effective estimation of aboveground biomass

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m2, corresponding to the point density range of 3DEP for providing national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than those from even the lowest lidar point density of 0.5 point/m2; therefore, Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than those from Landsat observations alone.
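    The density-reduction step, randomly subsampling a high-density cloud down to a target pulse density, can be sketched as follows; the tile area and densities here are illustrative, not the study's:

```python
import random

def thin_to_density(points, area_m2, target_density, rng=random):
    """Randomly subsample a point cloud to approximately `target_density`
    points per square metre over a tile of `area_m2`."""
    n_keep = min(len(points), int(round(target_density * area_m2)))
    return rng.sample(points, n_keep)

random.seed(42)
# Synthetic cloud of 2000 points over a 10 m x 10 m tile (20 points/m^2)
cloud = [(random.random() * 10, random.random() * 10, 0.0) for _ in range(2000)]
# Thinning to 8 points/m^2 keeps 800 points
thinned = thin_to_density(cloud, area_m2=100.0, target_density=8.0)
```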

  3. [Combination of acupuncture, cupping and medicine for treatment of fibromyalgia syndrome: a multi-central randomized controlled trial].

    PubMed

    Jang, Zhen-Ya; Li, Chang-Du; Qiu, Ling; Guo, Jun-Hua; He, Ling-Na; Yue, Yang; Li, Fang-Ze; Qin, Wen-Yi

    2010-04-01

    To evaluate the clinical effect of the combination of acupuncture, cupping and medicine for treatment of fibromyalgia syndrome. Using a multi-central randomized controlled method, 186 cases were randomly divided into an acupuncture combined with cupping and western medicine group (group A), an acupuncture combined with cupping group (group B) and a western medicine group (group C) and treated continuously for 4 weeks. The acupuncture-combined-with-cupping treatment consisted of acupuncture at five mental points and moving cupping on the Hechelu of the back, once every other day, thrice each week, and the western medicine therapy consisted of oral administration of Amitriptyline, once each day. The scores of the McGill Pain Questionnaire (MPQ), the number of tenderness points and the time to onset of effect were compared, and the therapeutic effects were assessed with the Hamilton Depression Scale (HAMD). The cured and markedly effective rate was 65.0% (39/60) in group A, which was superior to 15.9% (10/63) in group B and 16.1% (9/56) in group C (both P < 0.001). After treatment, the scores of MPQ and HAMD and the number of tenderness points all decreased in the three groups, with group A significantly better than groups B and C, and the effect appeared earlier in group A than in groups B and C. The therapeutic effect of the combination of acupuncture, cupping and medicine on fibromyalgia syndrome is superior to that of acupuncture combined with cupping alone or medicine alone.

  4. Development of Grammatical Accuracy in English-Speaking Children With Cochlear Implants: A Longitudinal Study

    PubMed Central

    Spencer, Linda J.

    2017-01-01

    Purpose We sought to evaluate the development of grammatical accuracy in English-speaking children with cochlear implants (CIs) over a 3-year span. Method Ten children who received CIs before age 30 months participated in this study at 3, 4, and 5 years postimplantation. For the purpose of comparison, 10 children each at ages 3, 4, and 5 years with typical hearing were included as well. All children participated in a story-retell task. We computed percent grammatical communication units (PGCU) in the task. Results Children with CIs showed significant improvement in PGCU over the 3-year span. However, they produced lower PGCU than children with typical hearing who had matched hearing age at 4 and 5 years postimplantation. At the individual level, some children with CIs were able to produce PGCU comparable to children with typical hearing as early as 3 years after implantation. Better speech-perception skills at earlier time points were associated with higher PGCU at later time points. Moreover, children with and without CIs showed similar rankings in the types of grammatical errors. Conclusion Despite having auditory-perceptual and information-processing constraints, children who received CIs before age 30 months were able to produce grammatical sentences, albeit with a delayed pattern. PMID:28384729

  5. Ocular stability and set-point adaptation

    PubMed Central

    Jareonsettasin, P.; Leigh, R. J.

    2017-01-01

    A fundamental challenge to the brain is how to prevent intrusive movements when quiet is needed. Unwanted limb movements such as tremor impair fine motor control and unwanted eye drifts such as nystagmus impair vision. A stable platform is also necessary to launch accurate movements. Accordingly, nature has designed control systems with agonist (excitation) and antagonist (inhibition) muscle pairs functioning in push–pull, around a steady level of balanced tonic activity, the set-point. Sensory information can be organized similarly, as in the vestibulo-ocular reflex, which generates eye movements that compensate for head movements. The semicircular canals, working in coplanar pairs, one in each labyrinth, are reciprocally excited and inhibited as they transduce head rotations. The relative change in activity is relayed to the vestibular nuclei, which operate around a set-point of stable balanced activity. When a pathological imbalance occurs, producing unwanted nystagmus without head movement, an adaptive mechanism restores the proper set-point and eliminates the nystagmus. Here we used 90 min of continuous 7 T magnetic field labyrinthine stimulation (MVS) in normal humans to produce sustained nystagmus simulating vestibular imbalance. We identified multiple time-scale processes towards a new zero set-point showing that MVS is an excellent paradigm to investigate the neurobiology of set-point adaptation. This article is part of the themed issue ‘Movement suppression: brain mechanisms for stopping and stillness’. PMID:28242733
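    Adaptation with multiple time scales, as identified above, is often modeled as a sum of decaying exponentials. A toy sketch of a two-time-scale decay of nystagmus slow-phase velocity; the amplitudes and time constants are illustrative assumptions, not values fitted to the MVS data:

```python
import math

def nystagmus_velocity(t, fast=(8.0, 60.0), slow=(4.0, 1200.0)):
    """Toy two-time-scale adaptation model: slow-phase eye velocity (deg/s)
    at time t (seconds) decays as a sum of a fast and a slow exponential.
    Each component is an (amplitude, time-constant) pair; values are
    illustrative only."""
    return sum(a * math.exp(-t / tau) for a, tau in (fast, slow))

v0 = nystagmus_velocity(0.0)        # initial imbalance: 8 + 4 = 12 deg/s
v_end = nystagmus_velocity(5400.0)  # after 90 min of sustained stimulation
```

In such a model the fast component accounts for the initial rapid drop while the slow component governs the long tail toward the new zero set-point.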

  6. Impact of US and Canadian precursor regulation on methamphetamine purity in the United States.

    PubMed

    Cunningham, James K; Liu, Lon-Mu; Callaghan, Russell

    2009-03-01

    Reducing drug purity is a major, but largely unstudied, goal of drug suppression. This study examines whether US methamphetamine purity was impacted by the suppression policy of US and Canadian precursor chemical regulation. Design: Autoregressive integrated moving average (ARIMA)-intervention time-series analysis. Setting: Continental United States and Hawaii (1985 to May 2005). Interventions: US federal regulations targeting the precursors ephedrine and pseudoephedrine, in forms used by large-scale producers, were implemented in November 1989, August 1995 and October 1997. US regulations targeting precursors in forms used by small-scale producers (e.g. over-the-counter medications) were implemented in October 1996 and October 2001. Canada implemented federal precursor regulations in January 2003 and July 2003 and an essential chemical (e.g. acetone) regulation in January 2004. Measurements: Monthly median methamphetamine purity series. Findings: US regulations targeting large-scale producers were associated with purity declines of 16-67 points; those targeting small-scale producers had little or no impact. Canada's precursor regulations were associated with purity increases of 13-15 points, while its essential chemical regulation was associated with a 13-point decrease. Hawaii's purity was consistently high, and appeared to vary little with the 1990s/2000s regulations. Conclusions: US precursor regulations targeting large-scale producers were associated with substantial decreases in continental US methamphetamine purity, while regulations targeting over-the-counter medications had little or no impact. Canada's essential chemical regulation was also associated with a decrease in continental US purity. However, Canada's precursor regulations were associated with purity increases: these regulations may have impacted primarily producers of lower-quality methamphetamine, leaving higher-purity methamphetamine on the market by default.
Hawaii's well-known preference for 'ice' (high-purity methamphetamine) may have helped to constrain purity there to a high, attenuated range, possibly limiting its sensitivity to precursor regulation.

  7. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  8. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.
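    A much-simplified stand-in for the scarp-comparison step: deriving a horizontal displacement rate from scarp centroids at two epochs. The paper's automated geometric measuring is considerably more involved; this sketch only illustrates the idea of comparing extracted features across time periods.

```python
import math

def centroid(points):
    """Mean (x, y) position of a set of 2D scarp points."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def displacement_rate(scarp_t0, scarp_t1, dt_days):
    """Horizontal displacement rate (m/day) between scarp centroids from
    two epochs -- an illustrative simplification of scarp comparison."""
    (x0, y0), (x1, y1) = centroid(scarp_t0), centroid(scarp_t1)
    return math.hypot(x1 - x0, y1 - y0) / dt_days

# Scarp shifted 3 m east and 4 m north over 10 days -> 0.5 m/day
rate = displacement_rate([(0, 0), (2, 0), (1, 1)],
                         [(3, 4), (5, 4), (4, 5)], dt_days=10.0)
```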

  9. Estimating snow depth in real time using unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Mizinski, Bartlomiej; Witek, Matylda; Spallek, Waldemar; Szymanowski, Mariusz

    2016-04-01

    Within the framework of project no. LIDER/012/223/L-5/13/NCBR/2014, financed by the National Centre for Research and Development of Poland, we developed a fully automated approach for estimating snow depth in real time in the field. The procedure uses oblique aerial photographs taken by an unmanned aerial vehicle (UAV). The geotagged images of snow-covered terrain are processed by the Structure-from-Motion (SfM) method, which is used to produce a non-georeferenced dense point cloud. The workflow includes the enhanced RunSFM procedure (keypoint detection using the scale-invariant feature transform known as SIFT, image matching, bundling using the Bundler, executing the multi-view stereo PMVS and CMVS2 software), which is preceded by multicore image resizing. The dense point cloud is subsequently automatically georeferenced using the GRASS software, and the ground control points are borrowed from positions of image centres acquired from the UAV-mounted GPS receiver. Finally, the digital surface model (DSM) is produced which, to improve the accuracy of georeferencing, is shifted using a vector obtained through precise geodetic GPS observation of a single ground control point (GCP) placed on the Laboratory for Unmanned Observations of Earth (mobile lab established at the University of Wroclaw, Poland). The DSM includes snow cover, and its difference with the corresponding snow-free DSM or digital terrain model (DTM), following the concept of the digital elevation model of differences (DOD), produces a map of snow depth. Since the final result depends on the snow-free model, two experiments are carried out. Firstly, we show the performance of the entire procedure when the snow-free model reveals a very high resolution (3 cm/px) and is produced using the UAV-taken photographs and the precise GCPs measured by the geodetic GPS receiver. 
Secondly, we perform a similar exercise but the 1-metre resolution light detection and ranging (LIDAR) DSM or DTM serves as the snow-free model. Thus, the main objective of the paper is to present the performance of the new procedure for estimating snow depth and to compare the two experiments.
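    The differencing step, subtracting the snow-free model from the snow-covered DSM cell by cell, can be sketched on a toy grid. The elevation values, the clamping of negative differences, and the nodata handling below are illustrative assumptions, not details from the paper:

```python
def snow_depth_map(dsm_snow, dsm_bare, nodata=None):
    """DEM of differences: per-cell snow depth = snow-covered DSM minus
    snow-free DSM/DTM. Negative differences (noise, registration error)
    are clamped to zero; `nodata` cells propagate to the output."""
    depth = []
    for row_s, row_b in zip(dsm_snow, dsm_bare):
        out = []
        for zs, zb in zip(row_s, row_b):
            if zs is nodata or zb is nodata:
                out.append(nodata)
            else:
                out.append(max(0.0, zs - zb))
        depth.append(out)
    return depth

# 2x2 toy grids: elevations in metres; one cell lacks snow-on data
depth = snow_depth_map([[101.2, 100.8], [100.5, None]],
                       [[100.0, 100.0], [100.7, 100.0]])
```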

  10. The timing of control signals underlying fast point-to-point arm movements.

    PubMed

    Ghafouri, M; Feldman, A G

    2001-04-01

    It is known that proprioceptive feedback induces muscle activation when the facilitation of appropriate motoneurons exceeds their threshold. In the suprathreshold range, the muscle-reflex system produces torques depending on the position and velocity of the joint segment(s) that the muscle spans. The static component of the torque-position relationship is referred to as the invariant characteristic (IC). According to the equilibrium-point (EP) hypothesis, control systems produce movements by changing the activation thresholds and thus shifting the IC of the appropriate muscles in joint space. This control process upsets the balance between muscle and external torques at the initial limb configuration and, to regain the balance, the limb is forced to establish a new configuration or, if the movement is prevented, a new level of static torques. Taken together, the joint angles and the muscle torques generated at an equilibrium configuration define a single variable called the EP. Thus by shifting the IC, control systems reset the EP. Muscle activation and movement emerge following the EP resetting because of the natural physical tendency of the system to reach equilibrium. Empirical and simulation studies support the notion that the control IC shifts and the resulting EP shifts underlying fast point-to-point arm movements are gradual rather than step-like. However, controversies exist about the duration of these shifts. Some studies suggest that the IC shifts cease with the movement offset. Other studies propose that the IC shifts end early in comparison to the movement duration (approximately, at peak velocity). The purpose of this study was to evaluate the duration of the IC shifts underlying fast point-to-point arm movements. Subjects made fast (hand peak velocity about 1.3 m/s) planar arm movements toward different targets while grasping a handle. 
Hand forces applied to the handle and shoulder/elbow torques were, respectively, measured from a force sensor placed on the handle, or computed with equations of motion. In some trials, an electromagnetic brake prevented movements. In such movements, the hand force and joint torques reached a steady state after a time that was much smaller than the movement duration in unobstructed movements and was approximately equal to the time to peak velocity (mean difference < 80 ms). In an additional experiment, subjects were instructed to rapidly initiate corrections of the pushing force in response to movement arrest. They were able to initiate such corrections only when the joint torques and the pushing force had practically reached a steady state. The latency of correction onset was, however, smaller than the duration of unobstructed movements. We concluded that during the time at which the steady state torques were reached, the control pattern of IC shifts remained the same despite the movement block. Thereby the duration of these shifts did not exceed the time of reaching the steady state torques. Our findings are consistent with the hypothesis that, in unobstructed movements, the IC shifts and resulting shifts in the EP end approximately at peak velocity. In other words, during the latter part of the movement, the control signals responsible for the equilibrium shift remained constant, and the movement was driven by the arm inertial, viscous and elastic forces produced by the muscle-reflex system. Fast movements may thus be completed without continuous control guidance. As a consequence, central corrections and sequential commands may be issued rapidly, without waiting for the end of kinematic responses to each command, which may be important for many motor behaviours including typing, piano playing and speech. 
Our study also illustrates that the timing of the control signals may be substantially different from that of the resulting motor output and that the same control pattern may produce different motor outputs depending on external conditions.

  11. Time-dependent observables in heavy ion collisions. Part II. In search of pressure isotropization in the φ⁴ theory

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.; Wu, Bin

    2018-03-01

    To understand the dynamics of thermalization in heavy ion collisions in the perturbative framework it is essential to first find corrections to the free-streaming classical gluon fields of the McLerran-Venugopalan model. The corrections that lead to deviations from free streaming (and that dominate at late proper time) would provide evidence for the onset of isotropization (and, possibly, thermalization) of the produced medium. To find such corrections we calculate the late-time two-point Green function and the energy-momentum tensor due to a single 2 → 2 scattering process involving two classical fields. To make the calculation tractable we employ the scalar φ⁴ theory instead of QCD. We compare our exact diagrammatic results for these quantities to those in kinetic theory and find disagreement between the two. The disagreement is in the dependence on the proper time τ and, for the case of the two-point function, is also in the dependence on the space-time rapidity η: the exact diagrammatic calculation is, in fact, consistent with the free-streaming scenario. Kinetic theory predicts a build-up of longitudinal pressure, which, however, is not observed in the exact calculation. We conclude that we find no evidence for the beginning of the transition from the free-streaming classical fields to the kinetic theory description of the produced matter after a single 2 → 2 rescattering.

  12. Rapid production of Candida albicans chlamydospores in liquid media under various incubation conditions.

    PubMed

    Alicia, Zavalza-Stiker; Blanca, Ortiz-Saldivar; Mariana, García-Hernández; Magdalena, Castillo-Casanova; Alexandro, Bonifaz

    2006-01-01

    The production of chlamydospores is a diagnostic tool used to identify Candida albicans; these structures also represent a model for morphogenetic research. The time required to produce them with standard methods is 48-72 hours in rice meal agar with tensoactive agents. This time can be shortened using liquid media such as cornmeal broth (CMB) and dairy supplements. Five media were tested: CMB plus 1% Tween-80, CMB plus 5% milk, CMB plus 5% milk serum, milk serum, and milk serum plus 1% Tween-80, under different incubation conditions: at 28 degrees C and 37 degrees C in a metabolic bath stirring at 150 rpm, and at 28 degrees C in a culture stove. The reading time points were established at 8 and 16 hours. The best results were obtained at 16 hours with CMB plus 5% milk under incubation at 28 degrees C and stirring at 150 rpm. The next most efficient methods were CMB plus 5% milk serum and CMB plus 1% Tween-80, under the same incubation conditions. The other media were ineffective in producing chlamydospores. The absence of stirring at 28 degrees C prevented the formation of chlamydospores within the set time points, and incubation at 37 degrees C decreased their production. This paper reports that the time to form C. albicans chlamydospores can be reduced.

  13. QUANTUM MECHANICS. Quantum squeezing of motion in a mechanical resonator.

    PubMed

    Wollman, E E; Lei, C U; Weinstein, A J; Suh, J; Kronwald, A; Marquardt, F; Clerk, A A; Schwab, K C

    2015-08-28

    According to quantum mechanics, a harmonic oscillator can never be completely at rest. Even in the ground state, its position will always have fluctuations, called the zero-point motion. Although the zero-point fluctuations are unavoidable, they can be manipulated. Using microwave frequency radiation pressure, we have manipulated the thermal fluctuations of a micrometer-scale mechanical resonator to produce a stationary quadrature-squeezed state with a minimum variance of 0.80 times that of the ground state. We also performed phase-sensitive, back-action evading measurements of a thermal state squeezed to 1.09 times the zero-point level. Our results are relevant to the quantum engineering of states of matter at large length scales, the study of decoherence of large quantum systems, and for the realization of ultrasensitive sensing of force and motion. Copyright © 2015, American Association for the Advancement of Science.

  14. The Mean Curvature of the Influence Surface of Wave Equation With Sources on a Moving Surface

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Farris, Mark

    1999-01-01

    The mean curvature of the influence surface of the space-time point (x, t) appears in linear supersonic propeller noise theory and in the Kirchhoff formula for a supersonic surface. Both of these problems are governed by the linear wave equation with sources on a moving surface. The influence surface is also called the Sigma-surface in the aeroacoustic literature. This surface is the locus, in a frame fixed to the quiescent medium, of all the points of a radiating surface f(x, t) = 0 whose acoustic signals arrive simultaneously at an observer at position x and at time t. Mathematically, the Sigma-surface is produced by the intersection of the characteristic conoid of the space-time point (x, t) and the moving surface. In this paper, we derive the expression for the local mean curvature of the Sigma-surface of the space-time point for a moving rigid or deformable surface f(x, t) = 0. This expression is a complicated function of the geometric and kinematic parameters of the surface f(x, t) = 0. Using the results of this paper, the solution of the governing wave equation of high-speed propeller noise radiation, as well as the Kirchhoff formula for a supersonic surface, can be written as a very compact analytic expression.

  15. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
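
    The analytic point location enabled by the tetrahedral decomposition can be illustrated with barycentric coordinates: a point lies inside a tetrahedron exactly when all four barycentric weights are non-negative, and the weights come from one small linear solve rather than a Newton-Raphson iteration. A minimal sketch of that idea (not the authors' code; a standard Cramer's-rule formulation):

```python
def barycentric(p, a, b, c, d):
    # Barycentric weights of point p in tetrahedron (a, b, c, d):
    # solve the 3x3 system [b-a | c-a | d-a] w = p-a by Cramer's rule.
    def sub(u, v):
        return [u[i] - v[i] for i in range(3)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    cols = [sub(b, a), sub(c, a), sub(d, a)]
    M = [[cols[j][i] for j in range(3)] for i in range(3)]
    rhs = sub(p, a)
    det_m = det3(M)
    w = []
    for j in range(3):
        Mj = [row[:] for row in M]          # replace column j with the RHS
        for i in range(3):
            Mj[i][j] = rhs[i]
        w.append(det3(Mj) / det_m)
    return [1.0 - sum(w)] + w               # weight of vertex a, then b, c, d

def inside(p, a, b, c, d, eps=1e-12):
    # A point lies in the tetrahedron iff all four weights are non-negative.
    return all(wi >= -eps for wi in barycentric(p, a, b, c, d))

tet = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
print(inside((0.25, 0.25, 0.25), *tet))  # True: centroid of the unit tetrahedron
print(inside((1.0, 1.0, 1.0), *tet))     # False
```

    The same weights double as linear interpolation coefficients for the velocity at the four vertices, which is why the decomposition pays off twice.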

  16. Three-dimensional Simulations of Pure Deflagration Models for Thermonuclear Supernovae

    NASA Astrophysics Data System (ADS)

    Long, Min; Jordan, George C., IV; van Rossum, Daniel R.; Diemer, Benedikt; Graziani, Carlo; Kessler, Richard; Meyer, Bradley; Rich, Paul; Lamb, Don Q.

    2014-07-01

    We present a systematic study of the pure deflagration model of Type Ia supernovae (SNe Ia) using three-dimensional, high-resolution, full-star hydrodynamical simulations, nucleosynthetic yields calculated using Lagrangian tracer particles, and light curves calculated using radiation transport. We evaluate the simulations by comparing their predicted light curves with many observed SNe Ia using the SALT2 data-driven model and find that the simulations may correspond to under-luminous SNe Iax. We explore the effects of the initial conditions on our results by varying the number of randomly selected ignition points from 63 to 3500, and the radius of the centered sphere they are confined in from 128 to 384 km. We find that the rate of nuclear burning depends on the number of ignition points at early times, the density of ignition points at intermediate times, and the radius of the confining sphere at late times. The results depend primarily on the number of ignition points, but we do not expect this to be the case in general. The simulations with few ignition points release more nuclear energy E nuc, have larger kinetic energies E K, and produce more 56Ni than those with many ignition points, and differ in the distribution of 56Ni, Si, and C/O in the ejecta. For these reasons, the simulations with few ignition points exhibit higher peak B-band absolute magnitudes M B and light curves that rise and decline more quickly; their M B and light curves resemble those of under-luminous SNe Iax, while those of the simulations with many ignition points do not.

  17. First-Order Interfacial Transformations with a Critical Point: Breaking the Symmetry at a Symmetric Tilt Grain Boundary

    NASA Astrophysics Data System (ADS)

    Yang, Shengfeng; Zhou, Naixie; Zheng, Hui; Ong, Shyue Ping; Luo, Jian

    2018-02-01

    First-order interfacial phaselike transformations that break the mirror symmetry of the symmetric ∑5 (210 ) tilt grain boundary (GB) are discovered by combining a modified genetic algorithm with hybrid Monte Carlo and molecular dynamics simulations. Density functional theory calculations confirm this prediction. This first-order coupled structural and adsorption transformation, which produces two variants of asymmetric bilayers, vanishes at an interfacial critical point. A GB complexion (phase) diagram is constructed via semigrand canonical ensemble atomistic simulations for the first time.

  18. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring two-dimensional array sensors, which can be used for either imaging or point source detection.

  19. The Technique of Special-Effects Cinematography.

    ERIC Educational Resources Information Center

    Fielding, Raymond

    The author describes the many techniques used to produce cinematic effects that would be too costly, too difficult, too time-consuming, too dangerous, or simply impossible to achieve with conventional photographic techniques. He points out that these techniques are available not only for 35 millimeter work but also to the 16 mm. photographer who…

  20. Gearing up for Fast Grading and Reporting

    ERIC Educational Resources Information Center

    O'Connor, Ken; Jung, Lee Ann; Reeves, Douglas

    2018-01-01

    The authors posit that the traditional grading system involving points and percentages is not the best way to prepare students to be the self-directed, independent learners they need to be. A better system would produce grades that are FAST (fair, accurate, specific, timely). To bring about these changes, school leaders need not create full…

  1. 7 CFR 46.43 - Terms construed.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... be loaded and shipped on a boat scheduled to leave before midnight of the date specified. When used... to determine if the produce shipped complied with the terms of the contract at time of shipment... f.o.b. shipping point, the buyer shall be deemed to have assumed only the lowest all-rail freight...

  2. Using Monte Carlo simulation to examine the economic cost and impact of HLB

    USDA-ARS?s Scientific Manuscript database

    Crop budgets are a useful and integral tool for producers in making sound business decisions, although they are not without shortcomings. Typically, crop and enterprise budgets are static and examine prices at one point in time. In order to assess changing prices, for inputs and/or production, it is typi...
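
    The stochastic extension the abstract points toward can be sketched with a tiny Monte Carlo over price and yield draws instead of fixed budget values. Every number below (price, yield, and cost distributions) is an illustrative placeholder, not a value from the manuscript:

```python
import random
import statistics

def net_return_samples(n=10_000, seed=1):
    # Monte Carlo crop budget: draw a price and a yield per iteration,
    # compute the resulting net return. All parameters are illustrative.
    rng = random.Random(seed)
    cost_per_acre = 1800.0                 # fixed production cost, $/acre
    samples = []
    for _ in range(n):
        price = rng.gauss(9.0, 1.5)        # output price, $/box
        yld = rng.gauss(300.0, 50.0)       # yield, boxes/acre
        samples.append(price * yld - cost_per_acre)
    return samples

s = net_return_samples()
print(round(statistics.mean(s)))           # near 9.0 * 300 - 1800 = 900
print(round(statistics.stdev(s)))          # spread driven by price and yield risk
```

    Unlike a static budget, the output is a distribution of net returns, so downside risk (e.g. the fraction of draws below zero) can be read off directly.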

  3. Building Intercultural Empathy through Writing: Reflections on Teaching Alternatives to Argumentation

    ERIC Educational Resources Information Center

    Peirce, Karen P.

    2007-01-01

    Writing assignments that focus on nonargumentative discourse can take many forms. Such assignments can prompt students to produce individually constructed writing, or they can be more collaborative in nature. They can focus on traditional formats, following MLA citation guidelines, using Times New Roman 12-point font, maintaining one-inch margins,…

  4. Do Adjusting-Amount and Adjusting-Delay Procedures Produce Equivalent Estimates of Subjective Value in Pigeons?

    ERIC Educational Resources Information Center

    Green, Leonard; Myerson, Joel; Shah, Anuj K.; Estle, Sara J.; Holt, Daniel D.

    2007-01-01

    The current experiment examined whether adjusting-amount and adjusting-delay procedures provide equivalent measures of discounting. Pigeons' discounting on the two procedures was compared using a within-subject yoking technique in which the indifference point (number of pellets or time until reinforcement) obtained with one procedure determined…

  5. Auditory steady-state evoked potentials vs. compound action potentials for the measurement of suppression tuning curves in the sedated dog puppy.

    PubMed

    Markessis, Emily; Poncelet, Luc; Colin, Cécile; Hoonhorst, Ingrid; Collet, Grégory; Deltenre, Paul; Moore, Brian C J

    2010-06-01

    Auditory steady-state evoked potential (ASSEP) tuning curves were compared to compound action potential (CAP) tuning curves, both measured at 2 Hz, using sedated beagle puppies. The effect of two types of masker (narrowband noise and sinusoidal) on the tuning curve parameters was assessed. Whatever the masker type, the CAP tuning curve parameters were qualitatively and quantitatively similar to the ASSEP ones, with similar inter-subject variability but a greater incidence of upward tip displacement. Whatever the procedure, sinusoidal maskers produced sharper tuning curves than narrowband maskers. Although these differences are not likely to have significant implications for clinical work, from a fundamental point of view their origin requires further investigation. The same amount of time was needed to record a CAP and an ASSEP 13-point tuning curve. The data further validate the ASSEP technique, which has the advantage of a smaller tendency to produce upward tip shifts than the CAP technique. Moreover, being non-invasive, ASSEP tuning curves can easily be repeated over time in the same subject for clinical and research purposes.

  6. Arrests for child pornography production: data at two time points from a national sample of U.S. law enforcement agencies.

    PubMed

    Wolak, Janis; Finkelhor, David; Mitchell, Kimberly J; Jones, Lisa M

    2011-08-01

    This study collected information on arrests for child pornography (CP) production at two points (2000-2001 and 2006) from a national sample of more than 2,500 law enforcement agencies. In addition to providing descriptive data about an understudied crime, the authors examined whether trends in arrests suggested increasing CP production, shifts in victim populations, and challenges to law enforcement. Arrests for CP production more than doubled from an estimated 402 in 2000-2001 to an estimated 859 in 2006. Findings suggest the increase was related to increased law enforcement activity rather than to growth in the population of CP producers. Adolescent victims increased, but there was no increase in the proportion of arrest cases involving very young victims or violent images. Producers distributed images in 23% of arrest cases, a proportion that did not change over time. This suggests that much CP production may be primarily for private use. Proactive law enforcement operations increased, as did other features consistent with a robust law enforcement response.

  7. Identification of innovative potential quality markers in rocket and melon fresh-cut produce.

    PubMed

    Cavaiuolo, Marina; Cocetta, Giacomo; Bulgari, Roberta; Spinardi, Anna; Ferrante, Antonio

    2015-12-01

    Ready-to-eat fresh-cut produce is exposed to pre- and postharvest abiotic stresses during the production chain. Our work aimed to identify stress-responsive genes as new molecular markers of quality that can be widely applied to leaves and fruits and easily determined at any stage of the production chain. Stress-responsive genes associated with quality losses were isolated in rocket and melon fresh-cut produce, and their expression levels were analyzed by quantitative real-time PCR (qRT-PCR) at different time points after harvest at 20 °C and 4 °C. The qRT-PCR results were supported by correlation analysis with physiological and biochemical determinations evaluated under the same conditions, such as chlorophyll a fluorescence indices, total and reducing sugars, sucrose, ethylene, ascorbic acid, lipid peroxidation, and reactive oxygen species. In both species the putative molecular markers increased their expression soon after harvest, suggesting a possible use as novel and objective quality markers of fresh-cut produce. Copyright © 2015 Elsevier Ltd. All rights reserved.
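
    The abstract does not state how the qRT-PCR expression levels were quantified; one widely used convention is the 2^(-ΔΔCt) relative quantification (Livak method), sketched here only to illustrate how such marker expression ratios are typically derived, not as this paper's actual analysis:

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    # 2^(-ΔΔCt) relative quantification: expression of a target gene
    # relative to a reference gene, normalized to a calibrator sample
    # (e.g. the time point immediately after harvest).
    delta_ct_sample = ct_target - ct_reference
    delta_ct_cal = ct_target_cal - ct_reference_cal
    return 2.0 ** -(delta_ct_sample - delta_ct_cal)

# A marker whose Ct drops by 2 cycles relative to the reference gene
# corresponds to a ~4-fold increase in expression.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0
```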

  8. A vectorized algorithm for 3D dynamics of a tethered satellite

    NASA Technical Reports Server (NTRS)

    Wilson, Howard B.

    1989-01-01

    Equations of motion characterizing the three-dimensional motion of a tethered satellite during the retrieval phase are studied. The mathematical model involves an arbitrary number of point masses connected by weightless cords. Motion occurs in a gravity-gradient field. The formulation presented accounts for general functions describing support point motion, the rate of tether retrieval, and arbitrary forces applied to the point masses. The matrix-oriented programming language MATLAB is used to produce an efficient vectorized formulation for computing natural frequencies and mode shapes for small oscillations about the static equilibrium configuration, and for integrating the nonlinear differential equations governing large-amplitude motions. An example of time response pertaining to the skip-rope effect is investigated.

  9. Experimental characterization of an ultra-fast Thomson scattering x-ray source with three-dimensional time and frequency-domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuba, J; Slaughter, D R; Fittinghoff, D N

    We present a detailed comparison of the measured characteristics of Thomson backscattered x-rays produced at the PLEIADES (Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures) facility at Lawrence Livermore National Laboratory to predicted results from a newly developed, fully three-dimensional time and frequency-domain code. Based on the relativistic differential cross section, this code has the capability to calculate time and space dependent spectra of the x-ray photons produced from linear Thomson scattering for both bandwidth-limited and chirped incident laser pulses. Spectral broadening of the scattered x-ray pulse resulting from the incident laser bandwidth, perpendicular wave vector components in the laser focus, and the transverse and longitudinal phase space of the electron beam are included. Electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the corresponding predicted x-ray characteristics are determined. In addition, time-integrated measurements of the x-rays produced from the interaction are presented, and shown to agree well with the simulations.

  10. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
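
    The "consistent projection transformation" selection can be pictured as reprojecting candidate model points and keeping the image/model pairs with small reprojection error. A minimal sketch under simplifying assumptions (a bare pinhole projection matrix and an illustrative pixel tolerance, neither taken from the patent):

```python
def project(P, X):
    # Apply a 3x4 projection matrix P to a homogeneous 3D point X = (x, y, z, 1).
    row = lambda i: sum(P[i][j] * X[j] for j in range(4))
    return row(0) / row(2), row(1) / row(2)

def consistent_pairs(P, matches, tol=2.0):
    # Keep the (image point, model point) pairs whose reprojection error
    # falls within tol pixels under the candidate transformation P.
    kept = []
    for (u, v), X in matches:
        pu, pv = project(P, X)
        if (pu - u) ** 2 + (pv - v) ** 2 <= tol ** 2:
            kept.append(((u, v), X))
    return kept

# Identity-like pinhole camera (unit focal length, no principal-point offset).
P = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
matches = [((1.0, 1.0), (2.0, 2.0, 2.0, 1.0)),   # projects to (1, 1): consistent
           ((5.0, 5.0), (1.0, 1.0, 1.0, 1.0))]   # projects to (1, 1): inconsistent
print(len(consistent_pairs(P, matches)))  # 1
```

    In the claimed method this test is what resolves ambiguous matches: among several candidate model points for one image feature, the one compatible with the overall transformation survives.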

  11. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volumes of spatiotemporal data. The storage, retrieval, and analysis of such data pose great challenges to database system design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytical function, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, and thus require less time than the brute-force approach, where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances left to be processed decreases exponentially with the number of tree levels visited. This leads to a proof of time complexity lower than the quadratic time needed for the brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
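
    The quadratic brute-force baseline against which the tree-based algorithms are measured is easy to state: bin every pairwise distance into fixed-width buckets. A sketch of that baseline (2D points for brevity; the analyzed dual-tree algorithms avoid most of these individual distance computations by resolving whole pairs of tree nodes into a single bucket at once):

```python
import itertools
import math

def sdh_brute_force(points, bucket_width, num_buckets):
    # O(n^2) SDH: compute every pairwise distance and bin it into
    # fixed-width buckets; the last bucket absorbs any overflow.
    hist = [0] * num_buckets
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        d = math.hypot(x1 - x2, y1 - y2)
        hist[min(int(d // bucket_width), num_buckets - 1)] += 1
    return hist

pts = [(0, 0), (3, 4), (6, 8)]
print(sdh_brute_force(pts, 5.0, 3))  # [0, 2, 1]: two distances of 5, one of 10
```

    The paper's geometric argument quantifies exactly how many pairwise distances escape the batch resolution at each tree level and must fall back to this per-pair computation.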

  12. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery, producing high-density 3D point clouds. The same techniques can be applied to thermal imagery to produce point clouds in 3D space that provide surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset was processed to produce a dense point cloud for 3D evaluation.

  13. A Control Model: Interpretation of Fitts' Law

    NASA Technical Reports Server (NTRS)

    Connelly, E. M.

    1984-01-01

    The analytical results for several models are given: a first-order model, where it is assumed that the hand velocity can be directly controlled, and a second-order model, where it is assumed that the hand acceleration can be directly controlled. Two different types of control laws are investigated. One is a linear function of the hand error and error rate; the other is the time-optimal control law. Results show that the first- and second-order models with the linear control law produce a movement time (MT) function with the exact form of Fitts' Law. The control-law interpretation implies that the effect of target width on MT must be a result of the vertical motion which elevates the hand from the starting point and drops it on the target at the target edge. The time-optimal control law did not produce a movement-time formula similar to Fitts' Law.
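
    The first-order result can be checked directly. With hand velocity proportional to the remaining error, dx/dt = -k x, the error decays from the initial distance D to the target half-width W/2 in time (1/k) ln(2D/W), which has the Fitts' Law form MT = a + b log2(2D/W) with a = 0 and b = (ln 2)/k. A sketch comparing the closed form with a simulated trajectory (the k, D, W values are illustrative):

```python
import math

def mt_closed_form(D, W, k=1.0):
    # First-order model dx/dt = -k x gives x(t) = D * exp(-k t);
    # movement ends when the error reaches the target half-width W/2.
    return math.log(2.0 * D / W) / k

def mt_simulated(D, W, k=1.0, dt=1e-5):
    # Euler integration of the same control law, as a cross-check.
    x, t = float(D), 0.0
    while x > W / 2.0:
        x -= k * x * dt
        t += dt
    return t

D, W = 200.0, 10.0
print(round(mt_closed_form(D, W), 3))  # ln(40) ~ 3.689
print(round(mt_simulated(D, W), 3))    # matches the closed form to ~1e-3
```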

  14. The method ADAMONT v1.0 for statistical adjustment of climate projections applicable to energy balance land surface models

    NASA Astrophysics Data System (ADS)

    Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu

    2017-11-01

    We introduce the method ADAMONT v1.0 to adjust and disaggregate daily climate projections from a regional climate model (RCM) using an observational dataset at hourly time resolution. The method uses a refined quantile mapping approach for statistical adjustment and an analogous method for sub-daily disaggregation. The method ultimately produces adjusted hourly time series of temperature, precipitation, wind speed, humidity, and short- and longwave radiation, which can in turn be used to force any energy balance land surface model. While the method is generic and can be employed for any appropriate observation time series, here we focus on the description and evaluation of the method in the French mountainous regions. The observational dataset used here is the SAFRAN meteorological reanalysis, which covers the entire French Alps split into 23 massifs, within which meteorological conditions are provided for several 300 m elevation bands. In order to evaluate the skills of the method itself, it is applied to the ALADIN-Climate v5 RCM using the ERA-Interim reanalysis as boundary conditions, for the time period from 1980 to 2010. Results of the ADAMONT method are compared to the SAFRAN reanalysis itself. Various evaluation criteria are used for temperature and precipitation but also snow depth, which is computed by the SURFEX/ISBA-Crocus model using the meteorological driving data from either the adjusted RCM data or the SAFRAN reanalysis itself. The evaluation addresses in particular the time transferability of the method (using various learning/application time periods), the impact of the RCM grid point selection procedure for each massif/altitude band configuration, and the intervariable consistency of the adjusted meteorological data generated by the method. Results show that the performance of the method is satisfactory, with similar or even better evaluation metrics than alternative methods. 
However, results for air temperature are generally better than for precipitation. Results in terms of snow depth are satisfactory, which can be viewed as indicating a reasonably good intervariable consistency of the meteorological data produced by the method. In terms of temporal transferability (evaluated over time periods of 15 years only), results depend on the learning period. In terms of RCM grid point selection, a more complex technique that takes into account altitudinal as well as horizontal proximity to the SAFRAN massif centre point/altitude couples generally degrades evaluation metrics at high altitudes compared with a simpler selection method based on horizontal distance alone.
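
    The quantile-mapping core of the adjustment can be illustrated with a plain empirical version: a model value is mapped to the observed value at the same empirical quantile. This is a deliberately simplified stand-in for ADAMONT's refined approach (which operates per variable and configuration), shown only to make the transform concrete:

```python
import bisect

def quantile_map(model_sample, obs_sample, value):
    # Empirical quantile mapping: find the quantile of `value` in the
    # model distribution, return the observed value at that quantile.
    m = sorted(model_sample)
    o = sorted(obs_sample)
    q = bisect.bisect_left(m, value) / max(len(m) - 1, 1)
    idx = min(round(q * (len(o) - 1)), len(o) - 1)
    return o[idx]

# A model that runs 2 degrees too warm is pulled back onto the
# observed distribution.
model = [2.0, 3.0, 4.0, 5.0, 6.0]
obs = [0.0, 1.0, 2.0, 3.0, 4.0]
print(quantile_map(model, obs, 4.0))  # 2.0
```

    Because the correction is defined quantile by quantile, it removes distributional biases (not just the mean bias) before the disaggregation step.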

  15. Comparison of morphine and carprofen administered alone or in combination for analgesia in dogs undergoing ovariohysterectomy.

    PubMed

    Dzikiti, T B; Joubert, K E; Venter, L J; Dzikiti, L N

    2006-09-01

    In this study the analgesic efficacy of the pure agonist opioid morphine and the cyclo-oxygenase type-2-selective carprofen were compared, since there is no previous specific comparative study of these two common analgesics. Forty-five bitches undergoing elective ovariohysterectomy were randomly assigned to one of three groups: one receiving morphine 0.4 mg/kg bodyweight pre-operatively and 0.2 mg/kg every 4-6 hours thereafter (Morphine group), one receiving a once-off carprofen 4 mg/kg injection (Carprofen group), and one receiving both morphine and carprofen (MorphCarp group). The dogs were premedicated with acepromazine 0.01 mg/kg and induced with either thiopentone 5-10 mg/kg or propofol 4-6 mg/kg. General anaesthesia was maintained with halothane in oxygen. The degree of pain was assessed over a 24-hour period under blinded conditions using a pain scale modified from the University of Melbourne pain scale and the Glasgow composite pain tool. Physiological parameters such as respiratory rate, pulse rate and body temperature were also assessed over the same time period. There was no significant difference in pain scores, and thus in the analgesia offered by the three protocols, at any assessment point across the three groups, but there were differences within groups across time points. Baseline total pain scores were lower than scores at all post-operative points within all three groups. Both morphine and carprofen provided good analgesia without any obvious adverse effects. This study indicates that at the dosages indicated above, carprofen administered on its own produces analgesia equal to that produced by morphine, and that the two drugs administered together do not produce better analgesia than either drug administered on its own.

  16. Ice-binding proteins that accumulate on different ice crystal planes produce distinct thermal hysteresis dynamics

    PubMed Central

    Drori, Ran; Celik, Yeliz; Davies, Peter L.; Braslavsky, Ido

    2014-01-01

    Ice-binding proteins that aid the survival of freeze-avoiding, cold-adapted organisms by inhibiting the growth of endogenous ice crystals are called antifreeze proteins (AFPs). The binding of AFPs to ice causes a separation between the melting point and the freezing point of the ice crystal (thermal hysteresis, TH). TH produced by hyperactive AFPs is an order of magnitude higher than that produced by a typical fish AFP. The basis for this difference in activity remains unclear. Here, we have compared the time dependence of TH activity for both hyperactive and moderately active AFPs using a custom-made nanolitre osmometer and a novel microfluidics system. We found that the TH activities of hyperactive AFPs were time-dependent, and that the TH activity of a moderate AFP was almost insensitive to time. Fluorescence microscopy measurement revealed that despite their higher TH activity, hyperactive AFPs from two insects (moth and beetle) took far longer to accumulate on the ice surface than did a moderately active fish AFP. An ice-binding protein from a bacterium that functions as an ice adhesin rather than as an antifreeze had intermediate TH properties. Nevertheless, the accumulation of this ice adhesion protein and the two hyperactive AFPs on the basal plane of ice is distinct and extensive, but not detectable for moderately active AFPs. Basal ice plane binding is the distinguishing feature of antifreeze hyperactivity, which is not strictly needed in fish that require only approximately 1°C of TH. Here, we found a correlation between the accumulation kinetics of the hyperactive AFP at the basal plane and the time sensitivity of the measured TH. PMID:25008081

  17. Sequencing Operations: The Critical Path of Operational Art,

    DTIC Science & Technology

    1987-05-01

    ...while at the same time covering the withdrawal of the Army Group A forces from the Caucasus.55 In effect, Manstein had to balance the desired...the utility of the operational pause is as a method for balancing ends and means in a controlled relationship to one's culminating point.79 This again...effects and ordering them in time and space to produce conditions that contribute to operational success. This study approaches this investigation from

  18. Applications of Generalized Derivatives to Viscoelasticity.

    DTIC Science & Technology

    1979-11-01

    ...of the response at discrete frequencies. The inverse transform of the response is evaluated numerically to produce the time history. The major drawback of this method is the arduous task of calculating the inverse transform for every point in time at which the value of the response is required.

  19. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
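
    The volumetric fusion step ("a method akin to KinectFusion") reduces, per voxel, to a weighted running average of truncated signed distance observations. A hedged sketch of just that update rule (the real pipeline also handles projection into depth maps, truncation bands, and GPU memory layout):

```python
def fuse(voxel, new_sdf, new_weight=1.0, max_weight=64.0):
    # Weighted running average of truncated signed distance values,
    # the core per-voxel update in KinectFusion-style fusion.
    sdf, w = voxel
    w_new = min(w + new_weight, max_weight)
    sdf_new = (sdf * w + new_sdf * new_weight) / (w + new_weight)
    return (sdf_new, w_new)

# Noisy per-frame observations of a surface near sdf = 0 average out.
voxel = (0.0, 0.0)
for obs in [0.02, -0.01, 0.03, -0.02, 0.0]:
    voxel = fuse(voxel, obs)
print(round(voxel[0], 3), voxel[1])  # 0.004 5.0
```

    Capping the weight keeps the model responsive to change while still suppressing per-frame depth noise, which is why fused surfaces look smoother than any single depth map.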

  20. Instantaneous electron beam emittance measurement system based on the optical transition radiation principle

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao-Guo; Wang, Yuan; Zhang, Kai-Zhi; Yang, Guo-Jun; Shi, Jin-Shui; Deng, Jian-Jun; Li, Jin

    2014-01-01

    One kind of instantaneous electron beam emittance measurement system based on the optical transition radiation principle and a double imaging optical method has been set up. It is mainly used to test the intense electron beam produced by a linear induction accelerator. The system features two characteristics. The first is the system synchronization signal, triggered by the trailing edge of the main output waveform from a Blumlein switch. A synchronization precision of about 1 ns between the electron beam and the image capture time can be reached in this way, so that the electron beam emittance at the desired time point can be obtained. The other advantage of the system is the ability to obtain the beam spot and beam divergence in one measurement, so that the calculated result is the true beam emittance at that time, which characterizes the electron beam condition. It proves to be a powerful beam diagnostic method for the 2.5 kA, 18.5 MeV, 90 ns (FWHM) electron beam pulse produced by Dragon I. The temporal resolution of the instantaneous measurement is about 3 ns, and it can measure the beam emittance at any time point during one beam pulse. A series of beam emittances have been obtained for Dragon I. The typical beam spot is 9.0 mm (FWHM) in diameter and the corresponding beam divergence is about 10.5 mrad.

  1. Dew point fast measurement in organic vapor mixtures using quartz resonant sensor

    NASA Astrophysics Data System (ADS)

    Nie, Jing; Liu, Jia; Meng, Xiaofeng

    2017-01-01

    A fast dew point sensor for organic vapor mixtures has been developed using a quartz crystal with sensitive circuits. The sensor consists of the quartz crystal and a cooler device. A proactive approach is taken to produce condensation on the surface of the quartz crystal, which changes the electrical characteristics of the crystal. Because dew condensation causes the oscillation to cease, this cessation can be measured and used to detect the dew point. The method exploits the high sensitivity of the quartz crystal without requiring frequency measurement, retains the stability of the resonant circuit, and is strongly resistant to interference. Its performance was evaluated with acetone-methanol mixtures under different pressures, and the results were compared with dew points predicted from the universal quasi-chemical (UNIQUAC) equation. The maximum deviations of the sensor are less than 1.1 °C, and it has a fast response, with a recovery time of less than 10 s, providing excellent dehumidifying performance.

  2. Dew point fast measurement in organic vapor mixtures using quartz resonant sensor.

    PubMed

    Nie, Jing; Liu, Jia; Meng, Xiaofeng

    2017-01-01

    A fast dew point sensor for organic vapor mixtures has been developed using a quartz crystal with sensitive circuits. The sensor consists of the quartz crystal and a cooler device. A proactive approach is taken to produce condensation on the surface of the quartz crystal, which changes the electrical characteristics of the crystal. Because dew condensation causes the oscillation to cease, this cessation can be measured and used to detect the dew point. The method exploits the high sensitivity of the quartz crystal without requiring frequency measurement, retains the stability of the resonant circuit, and is strongly resistant to interference. Its performance was evaluated with acetone-methanol mixtures under different pressures, and the results were compared with dew points predicted from the universal quasi-chemical (UNIQUAC) equation. The maximum deviations of the sensor are less than 1.1 °C, and it has a fast response, with a recovery time of less than 10 s, providing excellent dehumidifying performance.

  3. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  4. Coalescent Inference Using Serially Sampled, High-Throughput Sequencing Data from Intrahost HIV Infection

    PubMed Central

    Dialdestoro, Kevin; Sibbesen, Jonas Andreas; Maretty, Lasse; Raghwani, Jayna; Gall, Astrid; Kellam, Paul; Pybus, Oliver G.; Hein, Jotun; Jenkins, Paul A.

    2016-01-01

    Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available. PMID:26857628

  5. Time's arrow: A numerical experiment

    NASA Astrophysics Data System (ADS)

    Fowles, G. Richard

    1994-04-01

    The dependence of time's arrow on initial conditions is illustrated by a numerical example in which plane waves produced by an initial pressure pulse are followed as they are multiply reflected at internal interfaces of a layered medium. Wave interactions at interfaces are shown to be analogous to the retarded and advanced waves of point sources. The model is linear and the calculation is exact and demonstrably time reversible; nevertheless the results show most of the features expected of a macroscopically irreversible system, including the approach to the Maxwell-Boltzmann distribution, ergodicity, and concomitant entropy increase.

  6. RADIATION WAVE DETECTION

    DOEpatents

    Wouters, L.F.

    1960-08-30

    Radiation waves can be detected by simultaneously measuring radiation-wave intensities at a plurality of space-distributed points and producing therefrom a plot of the wave intensity as a function of time. To this end, a detector system is provided which includes a plurality of nuclear radiation intensity detectors spaced at equal radial increments of distance from a source of nuclear radiation. Means are provided to simultaneously sensitize the detectors at the instant a wave of radiation traverses their positions, the detectors producing electrical pulses indicative of wave intensity. The system further includes means for delaying the pulses from the detectors by amounts proportional to the distance of the detectors from the source to provide an indication of radiation-wave intensity as a function of time.
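    The delay scheme in the claim above can be sketched in a few lines, as a minimal illustration with hypothetical numbers rather than the patented circuitry: each detector's simultaneously captured sample is delayed in proportion to its radius, so the samples fall onto a single output channel in radial order, tracing wave intensity versus time.

    ```python
    def sequence_samples(intensities, radii, gain):
        """Delay each detector's simultaneously captured sample by an amount
        proportional to its distance from the source, so the samples appear
        on one output channel in radial order: intensity as a function of time."""
        return sorted((gain * r, i) for r, i in zip(radii, intensities))

    # Hypothetical snapshot: intensities sampled at the same instant by
    # detectors at r = 2, 4, 6 (arbitrary units), delay gain 0.5 s per unit.
    trace = sequence_samples([9.5, 7.1, 5.2], [2.0, 4.0, 6.0], gain=0.5)
    # trace == [(1.0, 9.5), (2.0, 7.1), (3.0, 5.2)]
    ```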

  7. Modal control of an unstable periodic orbit

    NASA Astrophysics Data System (ADS)

    Wiesel, W.; Shelton, W.

    1983-03-01

    Floquet theory is applied to the problem of designing a control system for a satellite in an unstable periodic orbit. Expansion about a periodic orbit produces a time-periodic linear system, which is augmented by a time-periodic control term. It is shown that this can be done such that (1) the application of control produces only inertial accelerations, (2) positive real Poincaré exponents are shifted into the left half-plane, and (3) the shift of the exponent is linear with control gain. These developments are applied to an unstable orbit near the earth-moon L(3) point perturbed by the sun. Finally, it is shown that the control theory can be extended to include first order perturbations about the periodic orbit without increase in control cost.
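    The Floquet machinery the abstract relies on, integrating the time-periodic linear system over one period to get the monodromy matrix whose eigenvalue logarithms are the exponents, can be sketched numerically. This is a generic illustration, not the authors' control design; the constant test matrix is a hypothetical case (trivially periodic) whose exponents are known exactly.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def floquet_exponents(A, T):
        """Monodromy matrix of x' = A(t) x over one period T, built column by
        column from the identity; Floquet exponents are log(multiplier) / T."""
        n = A(0.0).shape[0]
        M = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = 1.0
            sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, T), e,
                            rtol=1e-10, atol=1e-12)
            M[:, j] = sol.y[:, -1]
        multipliers = np.linalg.eigvals(M).astype(complex)
        return np.log(multipliers) / T

    # Hypothetical check: a constant (hence T-periodic) system whose exponents
    # are exactly the eigenvalues of A; the positive one flags instability.
    A = lambda t: np.array([[-1.0, 0.0], [0.0, 0.5]])
    exps = sorted(floquet_exponents(A, T=1.0).real)
    # exps is approximately [-1.0, 0.5]
    ```

    A control term shifting the positive exponent into the left half-plane, as in the abstract, would be verified by recomputing the exponents of the closed-loop system.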

  8. Modal control of an unstable periodic orbit

    NASA Technical Reports Server (NTRS)

    Wiesel, W.; Shelton, W.

    1983-01-01

    Floquet theory is applied to the problem of designing a control system for a satellite in an unstable periodic orbit. Expansion about a periodic orbit produces a time-periodic linear system, which is augmented by a time-periodic control term. It is shown that this can be done such that (1) the application of control produces only inertial accelerations, (2) positive real Poincaré exponents are shifted into the left half-plane, and (3) the shift of the exponent is linear with control gain. These developments are applied to an unstable orbit near the earth-moon L(3) point perturbed by the sun. Finally, it is shown that the control theory can be extended to include first order perturbations about the periodic orbit without increase in control cost.

  9. Total Quality Management and Media Services: The Deming Method.

    ERIC Educational Resources Information Center

    Richie, Mark L.

    1992-01-01

    W. Edwards Deming built a 40-year record of quality management in Japan known as Total Quality Management (TQM). His 14 points require a change in the belief system of managers and media directors, but their implementation in government agencies and schools will produce increased time for better services, better communications, and new programs.…

  10. X-ray shearing interferometer

    DOEpatents

    Koch, Jeffrey A [Livermore, CA

    2003-07-08

    An x-ray interferometer for analyzing high density plasmas and optically opaque materials includes a point-like x-ray source for providing a broadband x-ray source. The x-rays are directed through a target material and then are reflected by a high-quality ellipsoidally-bent imaging crystal to a diffraction grating disposed at 1× magnification. A spherically-bent imaging crystal is employed when the x-rays that are incident on the crystal surface are normal to that surface. The diffraction grating produces multiple beams which interfere with one another to produce an interference pattern which contains information about the target. A detector is disposed at the position of the image of the target produced by the interfering beams.

  11. DEMONSTRATION AND CHARACTERIZATION OF TWO DISTINCT HUMAN LEUKOCYTIC PYROGENS

    PubMed Central

    Dinarello, Charles A.; Goldin, Nathan P.; Wolff, Sheldon M.

    1974-01-01

    Human monocytes and neutrophils were separated from buffy coats of blood obtained from normal donors. Following incubation with heat-killed staphylococci, monocyte preparations contained 20 times more pyrogenic activity in the supernatant media than did supernates from an equal number of neutrophils. During purification of these pyrogens it was discovered that these cell preparations each produced a distinct and different pyrogen. The pyrogen obtained from neutrophils had a mol wt of 15,000 following Sephadex G-75 gel filtration, an isoelectric point of 6.9, and could be precipitated and recovered from 50% ethanol at –10°C. In contrast, the pyrogen derived from monocyte preparations had a mol wt of 38,000, an isoelectric point of 5.1, and was destroyed in cold ethanol. Both molecules were unaffected by viral neuraminidase but biologically destroyed at 80°C for 20 min and with trypsin at pH 8.0. The febrile peak produced by partially purified neutrophil pyrogen occurred at 40 min while that from monocytes was at 60 min. In addition, monocyte pyrogen produced more sustained fevers for the same peak elevation as neutrophil pyrogen. These studies demonstrate for the first time two chemically and biologically distinctive pyrogens derived from circulating human white blood cells and have important implications for our understanding of the pathogenesis of fever in man. PMID:4829934

  12. Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping.

    NASA Astrophysics Data System (ADS)

    Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia

    2017-04-01

    The Latin Hypercube Sampling (LHS) approach to assist with Digital Soil Mapping has been in development for some time; the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and variability in the range of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced by a digital terrain analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and Slope, which are continuous data. The number of points created by LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of the covariates used in the work ranged from 5 to 30 m. The iterations within the LHS sampling were run at an optimal level so that the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external surficial geology data, were also included in the LHS approach. Initial results of the work include the use of 1000 iterations within the LHS model.
    In practice, 1000 iterations consistently proved a reasonable value for producing sampling points that gave a good spatial representation of the environmental attributes. When working at the same spatial resolution for covariates but modifying only the desired number of sampling points produced, the point locations showed a strong geospatial relationship when using continuous data. Access to agricultural fields and adjacent land uses is often cited as the greatest deterrent to performing soil sampling, for both soil survey and soil attribute validation work. The lack of access can be a result of poor road access and/or geographical conditions that are difficult for field crews to navigate. This remains a simple yet persistent issue for the scientific community, and soils professionals in particular, to overcome. Assisting with ease of access to sampling points will be a future contribution to the LHS approach: by removing inaccessible locations from the DEM at the outset, the LHS model can be restricted to locations with access from an adjacent road or trail. To further the approach, a road network geospatial dataset can be included within Geographic Information Systems (GIS) applications to reach already-produced points using a shortest-distance network method.
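    A plain (unconditioned) Latin Hypercube Sampling sketch illustrates the stratification idea underlying the approach above; the watershed workflow additionally conditions the points on the covariate values over many iterations, which this minimal version omits. Variable names and counts are illustrative.

    ```python
    import numpy as np

    def latin_hypercube(n_points, n_vars, seed=None):
        """Draw one sample from each of n_points equal-probability strata per
        variable, assigning strata to points in a random order per variable."""
        rng = np.random.default_rng(seed)
        samples = np.empty((n_points, n_vars))
        for j in range(n_vars):
            strata = rng.permutation(n_points)                 # one stratum per point
            samples[:, j] = (strata + rng.random(n_points)) / n_points
        return samples  # values in [0, 1); rescale to each covariate's range

    # Hypothetical use: 50 sampling points over three terrain covariates
    # (e.g. TWI, LS factor, slope), each stratified into 50 intervals.
    pts = latin_hypercube(50, 3, seed=0)
    ```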

  13. Method for producing high surface area chromia materials for catalysis

    DOEpatents

    Gash, Alexander E [Brentwood, CA; Satcher, Joe [Patterson, CA; Tillotson, Thomas [Tracy, CA; Hrubesh, Lawrence [Pleasanton, CA; Simpson, Randall [Livermore, CA

    2007-05-01

    Nanostructured chromium(III)-oxide-based materials made by sol-gel processing, and a synthetic route for producing such materials, are disclosed herein. Monolithic aerogels and xerogels having surface areas between 150 m²/g and 520 m²/g have been produced. The synthetic method employs stable and inexpensive hydrated chromium(III) inorganic salts and common solvents such as water, ethanol, methanol, 1-propanol, t-butanol, 2-ethoxyethanol, ethylene glycol, DMSO, and dimethylformamide. The synthesis involves the dissolution of the metal salt in a solvent followed by the addition of a proton scavenger, such as an epoxide, which induces gel formation in a timely manner. Both critical point (supercritical extraction) and atmospheric (low temperature evaporation) drying may be employed to produce monolithic aerogels and xerogels, respectively.

  14. Rapid mapping of ultrafine fault zone topography with structure from motion

    USGS Publications Warehouse

    Johnson, Kendra; Nissen, Edwin; Saripalli, Srikanth; Arrowsmith, J. Ramón; McGarey, Patrick; Scharer, Katherine M.; Williams, Patrick; Blisniuk, Kimberly

    2014-01-01

    Structure from Motion (SfM) generates high-resolution topography and coregistered texture (color) from an unstructured set of overlapping photographs taken from varying viewpoints, overcoming many of the cost, time, and logistical limitations of Light Detection and Ranging (LiDAR) and other topographic surveying methods. This paper provides the first investigation of SfM as a tool for mapping fault zone topography in areas of sparse or low-lying vegetation. First, we present a simple, affordable SfM workflow, based on an unmanned helium balloon or motorized glider, an inexpensive camera, and semiautomated software. Second, we illustrate the system at two sites on southern California faults covered by existing airborne or terrestrial LiDAR, enabling a comparative assessment of SfM topography resolution and precision. At the first site, an ∼0.1 km2 alluvial fan on the San Andreas fault, a colored point cloud of density mostly >700 points/m2 and a 3 cm digital elevation model (DEM) and orthophoto were produced from 233 photos collected ∼50 m above ground level. When a few global positioning system ground control points are incorporated, closest point vertical distances to the much sparser (∼4 points/m2) airborne LiDAR point cloud are mostly <3 cm. At the second site, a point cloud of density mostly >530 points/m2 and a 2 cm DEM and orthophoto were produced from 450 photos taken from ∼60 m above ground level. Closest point vertical distances to existing terrestrial LiDAR data of comparable density are mostly <6 cm. Each SfM survey took ∼2 h to complete and several hours to generate the scene topography and texture. SfM greatly facilitates the imaging of subtle geomorphic offsets related to past earthquakes as well as rapid response mapping or long-term monitoring of faulted landscapes.
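    The closest-point vertical distance comparison used above to check SfM clouds against LiDAR can be sketched as follows. This is a simplified nearest-neighbor-in-plan variant on synthetic data, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def vertical_distances(cloud_a, cloud_b):
        """For each point in cloud_a, find the horizontally (x, y) nearest
        point in cloud_b and return the vertical (z) offset: one simple form
        of the closest-point comparison between two topographic point clouds."""
        tree = cKDTree(cloud_b[:, :2])
        _, idx = tree.query(cloud_a[:, :2])
        return cloud_a[:, 2] - cloud_b[idx, 2]

    # Hypothetical synthetic check: a dense flat cloud versus a 10x sparser
    # copy shifted up by 2 cm; every vertical distance should be -0.02 m.
    rng = np.random.default_rng(0)
    xy = rng.random((1000, 2)) * 10.0
    dense = np.column_stack([xy, np.full(1000, 0.5)])
    sparse = dense[::10].copy()
    sparse[:, 2] += 0.02
    d = vertical_distances(dense, sparse)
    ```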

  15. Crossfit analysis: a novel method to characterize the dynamics of induced plant responses.

    PubMed

    Jansen, Jeroen J; van Dam, Nicole M; Hoefsloot, Huub C J; Smilde, Age K

    2009-12-16

    Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex and may therefore not reveal all the properties of the response in any single model. This study therefore aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach, based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time-point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time-point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time-points to each other. Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples.
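    The global/local modeling idea above can be sketched with plain PCA; the "crossfit" link shown is one simple assumed variant (projecting one time point's data onto the next time point's axes), not necessarily the paper's exact construction, and the data are random placeholders.

    ```python
    import numpy as np

    def pca_model(X, k):
        """Center X and return its top-k principal axes (rows of Vt)."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:k]

    # Hypothetical data: 30 plants x 8 glucosinolate levels at 3 time points.
    rng = np.random.default_rng(1)
    data = {t: rng.random((30, 8)) for t in range(3)}

    # 'Global model': one PCA over all time points stacked together.
    global_axes = pca_model(np.vstack([data[t] for t in range(3)]), k=2)

    # 'Local models': a separate PCA per time point.
    local_axes = {t: pca_model(data[t], k=2) for t in range(3)}

    # One assumed crossfit-style link: scores of time point 0 expressed on
    # the axes of time point 1, to follow how the response rotates in time.
    scores_0_on_1 = (data[0] - data[1].mean(axis=0)) @ local_axes[1].T
    ```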

  16. Crossfit analysis: a novel method to characterize the dynamics of induced plant responses

    PubMed Central

    2009-01-01

    Background Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex and may therefore not reveal all the properties of the response in any single model. Results This study therefore aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach, based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time-point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time-point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time-points to each other. Conclusions Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples. PMID:20015363

  17. Observations of the variability of coronal bright points by the Soft X-ray Telescope on Yohkoh

    NASA Technical Reports Server (NTRS)

    Strong, Keith T.; Harvey, Karen; Hirayama, Tadashi; Nitta, Nariaki; Shimizu, Toshifumi; Tsuneta, Saku

    1992-01-01

    We present the initial results of a study of X-ray bright points (XBPs) made with data from the Yohkoh Soft X-ray Telescope. High temporal and spatial resolution observations of several XBPs illustrate their intensity variability over a wide variety of time scales from a few minutes to hours, as well as rapid changes in their morphology. Several XBPs produced flares during their lifetime. These XBP flares often involve magnetic loops, which are considerably larger than the XBP itself, and which brighten along their lengths at speeds of up to 1100 km/s.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrennikov, Andrei; Volovich, Yaroslav

    We analyze dynamical consequences of a conjecture that there exists a fundamental (indivisible) quantum of time. In particular, we study the problem of discrete energy levels of the hydrogen atom. We are able to reconstruct the potential which in the discrete time formalism leads to the energy levels of the unperturbed hydrogen atom. We also consider the linear energy levels of the quantum harmonic oscillator and show how they are produced in the discrete time formalism. More generally, we show that in the discrete time formalism finite motion in a central potential leads to a discrete energy spectrum, a property which is characteristic of quantum mechanical theory. Thus deterministic (but discrete time!) dynamics is compatible with discrete energy levels.

  19. Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis, which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscously damped constitutive model. When subjected to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post-yielding shear stress versus shear strain curve. This curve is input as tabular data points. When subjected to a sine wave motion, this constitutive model produces a hysteresis loop that is similar in shape to the input tabular data points on the sides, with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results change when a significant structural mass is added to the top of the soil column.
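    The pointed-ended hysteresis loop described above can be reproduced with the simplest possible nonlinear stand-in, an elastic-perfectly-plastic model (an assumption for illustration, not the paper's tabular backbone curve): under a sine strain input the stress caps at the yield value along the sides of the loop and reverses sharply at the strain peaks.

    ```python
    import numpy as np

    def elastoplastic_stress(strains, k=30.0, sy=1.0):
        """Elastic-perfectly-plastic shear response: elastic slope k, yield
        stress sy. A sine strain input traces a closed hysteresis loop with
        flat yielded sides and pointed ends at the strain reversals."""
        eps_p = 0.0            # accumulated plastic strain
        out = []
        for eps in strains:
            s = k * (eps - eps_p)
            if abs(s) > sy:    # yield: slide the plastic strain, cap the stress
                eps_p = eps - np.sign(s) * sy / k
                s = np.sign(s) * sy
            out.append(s)
        return np.array(out)

    # Hypothetical input: two cycles of sine strain well beyond the elastic
    # limit sy / k, so the response yields on every half cycle.
    t = np.linspace(0.0, 4.0 * np.pi, 2000)
    stress = elastoplastic_stress(0.1 * np.sin(t))
    # peak |stress| is capped at the yield stress sy = 1.0
    ```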

  20. Architectural Heritage Documentation by Using Low Cost Uav with Fisheye Lens: Otag-I Humayun in Istanbul as a Case Study

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Özerdem, Ö. Z.

    2017-11-01

    The digital documentation of architectural heritage is important for monitoring, preserving, and managing, as well as for 3D BIM modelling and time-space VR (virtual reality) applications. Unmanned aerial vehicles (UAVs) have been widely used in these applications thanks to rapid developments in technology, which enable images with resolutions at the millimeter level. Moreover, it has become possible to produce highly accurate 3D point clouds with structure from motion (SfM) and multi-view stereo (MVS), and to obtain a surface reconstruction of a realistic 3D architectural heritage model, by using high-overlap images and 3D modeling software such as ContextCapture, Pix4Dmapper, and PhotoScan. This study aims at the digital documentation of Otag-i Humayun (the Ottoman Empire Sultan's summer palace), located in Davutpaşa, Istanbul, Turkey, using a low cost UAV. The data were collected with a low cost 3DR Solo UAV carrying a GoPro Hero 4 camera with a fisheye lens. The data processing was accomplished using the commercial Pix4D software. Dense point clouds, a true orthophoto, and a 3D solid model of Otag-i Humayun were produced as results, and a quality check of the produced point clouds was performed. The results obtained from Otag-i Humayun in Istanbul proved that a low cost UAV with a fisheye lens can be successfully used for architectural heritage documentation.

  1. Active and Passive Sensing from Geosynchronous and Libration Orbits

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark; Raymond, Carol; Hildebrand, Peter

    2003-01-01

    The development of the LEO (EOS) missions has led the way to new technologies and new science discoveries. However, LEO measurements alone cannot cost-effectively produce the high time resolution measurements needed to move the science to the next level. Both GEO and the Lagrange points L1 and L2 provide vantage points that will allow higher time resolution measurements. GEO is currently being exploited by weather satellites, but the sensors currently operating at GEO do not provide the spatial or spectral resolution needed for atmospheric trace gas, ocean, or land surface measurements. It may also be possible to place active sensors in geostationary orbit. It seems clear that the next era in earth observation and discovery will be opened by sensor systems operating beyond near earth orbit.

  2. Models of torsades de pointes: effects of FPL64176, DPI201106, dofetilide, and chromanol 293B in isolated rabbit and guinea pig hearts.

    PubMed

    Cheng, Hsien C; Incardona, Josephine

    2009-01-01

    For studying the torsades de pointes (TdP) liability of a compound, most high and medium throughput methods use surrogate markers such as HERG inhibition and QT prolongation. In this study, we have tested whether isolated hearts may be modified to allow TdP to be the direct readout. Isolated spontaneously beating rabbit and guinea pig hearts were perfused according to the Langendorff method in hypokalemic (2.1 mM) solution. The in vitro lead II ECG equivalent and the incidence of TdP were monitored for 1 h. In addition, heart rate, QTc, Tp-Te, short-term variability (STV), time to arrhythmia, and time to TdP were also analyzed. FPL64176, a calcium channel activator; and DPI201106, a sodium channel inactivation inhibitor, produced TdP in isolated rabbit and guinea pig hearts in a concentration dependent manner; guinea pig hearts were 3- to 5-fold more sensitive than rabbit hearts. Both compounds also increased QTc and STV. In contrast, dofetilide, an IKr inhibitor, produced no (or a low incidence of) TdP in both species, in spite of prolongation of QTc intervals. Chromanol 293B, an IKs inhibitor, did not produce TdP in rabbit hearts but elicited TdP concentration dependently in guinea pig hearts even though the compound had no effect on QTc intervals. IKs inhibition appears to be more likely to produce TdP in isolated guinea pig hearts than IKr inhibition. Chromanol 293B did not produce TdP in rabbit hearts presumably due to a low level of IKs channels in the heart. TdP produced in this study was consistent with the notion that its production was a consequence of reduced repolarization reserve, thereby causing rhythmic abnormalities. This isolated, perfused, and spontaneously beating rabbit and guinea pig heart preparation in hypokalemic medium may be useful as a preclinical test model for studying proarrhythmic liability of compounds in new drug development.

  3. In vivo burn diagnosis by camera-phone diffuse reflectance laser speckle detection.

    PubMed

    Ragol, S; Remer, I; Shoham, Y; Hazan, S; Willenz, U; Sinelnikov, I; Dronov, V; Rosenberg, L; Bilenca, A

    2016-01-01

    Burn diagnosis using laser speckle light typically employs widefield illumination of the burn region to produce two-dimensional speckle patterns from light backscattered from the entire irradiated tissue volume. Analysis of speckle contrast in these time-integrated patterns can then provide information on burn severity. Here, by contrast, we use point illumination to generate diffuse reflectance laser speckle patterns of the burn. By examining spatiotemporal fluctuations in these time-integrated patterns along the radial direction from the incident point beam, we show the ability to distinguish partial-thickness burns in a porcine model in vivo within the first 24 hours post-burn. Furthermore, our findings suggest that time-integrated diffuse reflectance laser speckle can be useful for monitoring burn healing over time post-burn. Unlike conventional diffuse reflectance laser speckle detection systems that utilize scientific or industrial-grade cameras, our system is designed with a camera-phone, demonstrating the potential for burn diagnosis with a simple imager.

  4. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition between two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.

  5. In vivo burn diagnosis by camera-phone diffuse reflectance laser speckle detection

    PubMed Central

    Ragol, S.; Remer, I.; Shoham, Y.; Hazan, S.; Willenz, U.; Sinelnikov, I.; Dronov, V.; Rosenberg, L.; Bilenca, A.

    2015-01-01

    Burn diagnosis using laser speckle light typically employs widefield illumination of the burn region to produce two-dimensional speckle patterns from light backscattered from the entire irradiated tissue volume. Analysis of speckle contrast in these time-integrated patterns can then provide information on burn severity. Here, by contrast, we use point illumination to generate diffuse reflectance laser speckle patterns of the burn. By examining spatiotemporal fluctuations in these time-integrated patterns along the radial direction from the incident point beam, we show the ability to distinguish partial-thickness burns in a porcine model in vivo within the first 24 hours post-burn. Furthermore, our findings suggest that time-integrated diffuse reflectance laser speckle can be useful for monitoring burn healing over time post-burn. Unlike conventional diffuse reflectance laser speckle detection systems that utilize scientific or industrial-grade cameras, our system is designed with a camera-phone, demonstrating the potential for burn diagnosis with a simple imager. PMID:26819831

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biasotto, G.; Simoes, A.Z., E-mail: alezipo@yahoo.com; Foschini, C.R.

Highlights: • BiFeO3 (BFO) nanoparticles were grown by the hydrothermal microwave method (HTMW). • Soaking time is effective in improving phase formation. • Rietveld refinement reveals an orthorhombic structure. • The observed magnetism of the BFO crystallites is a consequence of particle size. • HTMW is a genuine technique for low temperatures and short times of synthesis. -- Abstract: The hydrothermal microwave method (HTMW) was used to synthesize crystalline bismuth ferrite (BiFeO3) nanoparticles (BFO) at 180 °C with times ranging from 5 min to 1 h. BFO nanoparticles were characterized by means of X-ray analyses, FT-IR, Raman spectroscopy, TG-DTA and FE-SEM. X-ray diffraction results indicated that a longer soaking time helped to suppress the formation of impurity phases and to grow BFO crystallites into almost single-phase perovskites. Typical FT-IR spectra for BFO nanoparticles presented well-defined bands, indicating substantial short-range order in the system. TG-DTA analyses confirmed the presence of lattice OH- groups, commonly found in materials obtained by the HTMW process. Compared with the conventional solid-state reaction process, submicron BFO crystallites with better homogeneity could be produced at a temperature as low as 180 °C. These results show that the HTMW synthesis route is rapid and cost-effective, and could be used as an alternative to obtain BFO nanoparticles at 180 °C for 1 h.

  7. Trackline and point detection probabilities for acoustic surveys of Cuvier's and Blainville's beaked whales.

    PubMed

    Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha

    2013-09-01

    Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.

  8. Time-Series Analysis: Assessing the Effects of Multiple Educational Interventions in a Small-Enrollment Course

    NASA Astrophysics Data System (ADS)

    Warren, Aaron R.

    2009-11-01

    Time-series designs are an alternative to pretest-posttest methods that are able to identify and measure the impacts of multiple educational interventions, even for small student populations. Here, we use an instrument employing standard multiple-choice conceptual questions to collect data from students at regular intervals. The questions are modified by asking students to distribute 100 Confidence Points among the options in order to indicate the perceived likelihood of each answer option being the correct one. Tracking the class-averaged ratings for each option produces a set of time-series. ARIMA (autoregressive integrated moving average) analysis is then used to test for, and measure, changes in each series. In particular, it is possible to discern which educational interventions produce significant changes in class performance. Cluster analysis can also identify groups of students whose ratings evolve in similar ways. A brief overview of our methods and an example are presented.
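The intervention test sketched above can be illustrated, in simplified form, as a first-order autoregressive model with a step ("intervention") regressor fit by ordinary least squares. This is a stand-in for full ARIMA intervention analysis; the series length, intervention time, and effect size below are simulated for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
T, t0 = 60, 30                              # series length, intervention time
step = (np.arange(T) >= t0).astype(float)   # 0 before intervention, 1 after

# Simulated class-averaged confidence ratings: AR(1) noise plus a jump
# injected by the intervention (true step coefficient = 7.5).
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 7.5 * step[t] + rng.normal()

# Regress y[t] on [1, y[t-1], step[t]]; the step coefficient measures the
# intervention's impact while accounting for serial correlation.
X = np.column_stack([np.ones(T - 1), y[:-1], step[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
intervention_effect = coef[2]
```

A full ARIMA fit additionally models moving-average terms and differencing; the step-regressor idea carries over unchanged.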

  9. Rice farming in Bali: organic production and marketing challenges.

    PubMed

    MacRae, Graeme

    2011-01-01

    All is not well with agriculture in Southeast Asia. The productivity gains of the Green Revolution have slowed and even reversed and environmental problems and shortages of water and land are evident. At the same time changing world markets are shifting the dynamics of national agricultural economies. But from the point of view of farmers themselves, it is their season-to-season economic survival that is at stake. Bali is in some ways typical of other agricultural areas in the region, but it is also a special case because of its distinctive economic and cultural environment dominated by tourism. In this environment, farmers are doubly marginalized. At the same time the island offers them unique market opportunities for premium and organic produce. This article examines the ways in which these opportunities have been approached and describes their varying degrees of success. It focuses especially on one project that has been successful in reducing production costs by conversion to organic production, but less so in marketing its produce. It argues finally for the need for integrated studies of the entire rice production/marketing complex, especially from the bottom-up point of view of farmers.

  10. Evaluation of Fiber Bragg Grating and Distributed Optical Fiber Temperature Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCary, Kelly Marie

Fiber optic temperature sensors were evaluated in the High Temperature Test Lab (HTTL) to determine the accuracy of the measurements at various temperatures. A distributed temperature sensor was evaluated up to 550 °C and a fiber Bragg grating sensor was evaluated up to 750 °C. HTTL measurements indicate that there is a drift in the fiber Bragg sensor over time of approximately -10 °C, with higher accuracy at temperatures above 300 °C. The distributed sensor produced some bad data points at and above 500 °C but produced measurements with less than 2% error at temperatures up to 400 °C.

  11. Unsteady steady-states: Central causes of unintentional force drift

    PubMed Central

    Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely, the fingertip referent coordinate (RFT) and its apparent stiffness (CFT). The system's state is defined by a point in the {RFT; CFT} space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to a drop in the output fingertip force, and (2) has even greater motion along directions that leave the force unchanged. Subjects produced a prescribed fingertip force using visual feedback, and attempted to maintain this force for 15 s after the feedback was removed. We used the “inverse piano” apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of RFT and CFT showed that the force drop was mostly due to a drift in RFT towards the actual fingertip position. Three analysis techniques, namely, hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong co-variation in RFT and CFT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {RFT; CFT} relative to their average trends also displayed covariation. On the whole the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system towards a low-energy state, and (b) a faster synergic motion of RFT and CFT that tends to stabilize the output fingertip force about the slowly-drifting equilibrium point. PMID:27540726

  12. Unsteady steady-states: central causes of unintentional force drift.

    PubMed

    Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M; Latash, Mark L

    2016-12-01

We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely the fingertip referent coordinate (RFT) and its apparent stiffness (CFT). The system's state is defined by a point in the {RFT; CFT} space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to a drop in the output fingertip force, and (2) has even greater motion along directions that leave the force unchanged. Subjects produced a prescribed fingertip force using visual feedback and attempted to maintain this force for 15 s after the feedback was removed. We used the "inverse piano" apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of RFT and CFT showed that the force drop was mostly due to a drift in RFT toward the actual fingertip position. Three analysis techniques, namely hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong covariation in RFT and CFT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {RFT; CFT} relative to their average trends also displayed covariation. On the whole, the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system toward a low-energy state and (b) a faster synergic motion of RFT and CFT that tends to stabilize the output fingertip force about the slowly drifting equilibrium point.

13. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    PubMed

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of smartphones or with a photoplethysmograph. An optimization process for the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for the smartphones and the photoplethysmograph, and assess whether the error of the smartphones can reasonably be explained by variations in pulse transit time. The results have revealed that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
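The two best-performing fiducial points reported (the first-derivative peak and the second-derivative minimum) can be located numerically as below. The sketch uses a synthetic Gaussian-shaped pulse at an assumed 100 Hz sampling rate rather than real photoplethysmographic data:

```python
import numpy as np

fs = 100.0                                         # assumed sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
pulse = np.exp(-0.5 * ((t - 0.40) / 0.08) ** 2)    # synthetic pulse wave

d1 = np.gradient(pulse, 1.0 / fs)                  # first derivative
d2 = np.gradient(d1, 1.0 / fs)                     # second derivative

fiducial_d1 = t[np.argmax(d1)]   # steepest upslope (first-derivative peak)
fiducial_d2 = t[np.argmin(d2)]   # second-derivative minimum
```

For this Gaussian pulse the first-derivative peak falls at the inflection point (0.32 s) and the second-derivative minimum at the pulse apex (0.40 s); on real pulses both points track the systolic rise, which is why they are robust arrival markers.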

  14. Point-to-point connectivity prediction in porous media using percolation theory

    NASA Astrophysics Data System (ADS)

    Tavagh-Mohammadi, Behnam; Masihi, Mohsen; Ganjeh-Ghazvini, Mostafa

    2016-10-01

The connectivity between two points in porous media is important for evaluating hydrocarbon recovery in underground reservoirs or toxic migration in waste disposal. For example, the connectivity between a producer and an injector in a hydrocarbon reservoir impacts the fluid dispersion throughout the system. The conventional approach, flow simulation, is computationally very expensive and time consuming. An alternative method employs percolation theory. The classical percolation approach investigates the connectivity between two lines (representing the wells) in 2D cross-sectional models, whereas we look for the connectivity between two points (representing the wells) in 2D areal models. In this study, site percolation is used to determine the fraction of permeable regions connected between two cells at various occupancy probabilities and system sizes. Master curves of mean connectivity and its uncertainty are then generated by finite-size scaling. The results help to predict well-to-well connectivity without the need for any further simulation.
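A minimal Monte Carlo sketch of the site-percolation idea follows; the grid size, occupancy probabilities, and well positions are illustrative, and a simple breadth-first search stands in for a production cluster-labeling algorithm:

```python
from collections import deque

import numpy as np

def connected(grid, a, b):
    """True if cells a and b lie in the same 4-connected occupied cluster."""
    n = grid.shape[0]
    if not (grid[a] and grid[b]):
        return False
    seen, queue = {a}, deque([a])
    while queue:
        i, j = queue.popleft()
        if (i, j) == b:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (i + di, j + dj)
            if (0 <= nxt[0] < n and 0 <= nxt[1] < n
                    and grid[nxt] and nxt not in seen):
                seen.add(nxt)
                queue.append(nxt)
    return False

def connectivity(p, n=16, a=(1, 1), b=(14, 14), trials=200, seed=0):
    """Fraction of random occupancy maps in which cells a and b connect."""
    rng = np.random.default_rng(seed)
    return sum(connected(rng.random((n, n)) < p, a, b)
               for _ in range(trials)) / trials

# Point-to-point connectivity rises sharply around the site-percolation
# threshold (about 0.593 on the square lattice).
low, high = connectivity(0.45), connectivity(0.80)
```

Repeating this over a range of system sizes gives the data from which the master curves are collapsed by finite-size scaling.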

  15. Relationship between team assists and win-loss record in The National Basketball Association.

    PubMed

    Melnick, M J

    2001-04-01

    Using research methodology for analysis of secondary data, statistical data for five National Basketball Association (NBA) seasons (1993-1994 to 1997-1998) were examined to test for a relationship between team assists (a behavioral measure of teamwork) and win-loss record. Rank-difference correlation indicated a significant relationship between the two variables, the coefficients ranging from .42 to .71. Team assist totals produced higher correlations with win-loss record than assist totals for the five players receiving the most playing time ("the starters"). A comparison of "assisted team points" and "unassisted team points" in relationship to win-loss record favored the former and strongly suggested that how a basketball team scores points is more important than the number of points it scores. These findings provide circumstantial support for the popular dictum in competitive team sports that "Teamwork Means Success-Work Together, Win Together."

  16. Progressive Classification Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified.
The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
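The two-pass scheme can be sketched with scikit-learn (assumed here purely for illustration; the kernels, synthetic data, and margin-based confidence measure are stand-ins for the onboard implementation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, random_state=0)
X_train, y_train, X_new = X[:300], y[:300], X[300:]

fast = SVC(kernel="linear").fit(X_train, y_train)       # coarse model
slow = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)  # accurate model

# Pass 1: baseline approximate classification from the fast SVM.
labels = fast.predict(X_new)

# Confidence index: distance from the fast model's decision boundary;
# points near the boundary are the most likely to be misclassified.
confidence = np.abs(fast.decision_function(X_new))

# Pass 2: progressively reclassify, least-confident points first. This
# loop can be halted at any time, leaving the best labels computed so far.
for i in np.argsort(confidence):
    labels[i] = slow.predict(X_new[i : i + 1])[0]
```

Halting the loop early yields exactly the progressive behavior described: early iterations correct the points most likely to be wrong, so accuracy improves fastest at the start.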

  17. Non invasive transcostal focusing based on the decomposition of the time reversal operator: in vitro validation

    NASA Astrophysics Data System (ADS)

    Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias

    2010-03-01

Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results to treat hepatocarcinoma and other liver tumors. However, skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply to the transducer array an excitation weight vector that is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold, as demonstrated by the measured specific absorption rates.

  18. Time efficient Gabor fused master slave optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Cernat, Ramona; Bradu, Adrian; Rivet, Sylvain; Podoleanu, Adrian

    2018-02-01

In this paper we present the benefits, in terms of operation time, that the Master/Slave (MS) implementation of optical coherence tomography (OCT) can bring in comparison to Gabor-fused (GF) OCT employing conventional fast-Fourier-transform-based processing. The Gabor Fusion/Master Slave Optical Coherence Tomography architecture proposed here does not need any data stitching. Instead, a subset of en-face images is produced for each focus position inside the sample to be imaged, using a reduced number of theoretically inferred Master masks. These en-face images are then assembled into a final volume. When the channelled spectra are digitized into 1024 sampling points and more than 4 focus positions are required to produce the final volume, the Master Slave implementation of the instrument is faster than the conventional fast-Fourier-transform-based procedure.

  19. A quantitative dynamic systems model of health-related quality of life among older adults

    PubMed Central

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722

  20. Resistance spot welding of ultra-fine grained steel sheets produced by constrained groove pressing: Optimization and characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khodabakhshi, F.; Kazeminezhad, M., E-mail: mkazemi@sharif.edu; Kokabi, A.H.

    2012-07-15

Constrained groove pressing, a severe plastic deformation method, is utilized to produce ultra-fine grained low carbon steel sheets. The ultra-fine grained sheets are joined via the resistance spot welding process and the characteristics of the spot welds are investigated. The resistance spot welding process is optimized for welding of sheets with different degrees of severe deformation, and the results are compared with those of as-received samples. The effects of failure mode and expulsion on the performance of ultra-fine grained sheet spot welds are investigated in the present paper, and the welding current and welding time of the resistance spot welding process are optimized accordingly. Failure mode and failure load obtained in the tensile-shear test, microhardness, X-ray diffraction, and transmission and scanning electron microscope images have been used to describe the performance of the spot welds. The region between the interfacial-to-pullout mode transition and the expulsion limit is defined as the optimum welding condition. The results show that the optimum welding parameters (welding current and welding time) for ultra-fine grained sheets are shifted to lower values with respect to those for as-received specimens. In ultra-fine grained sheets, a new region named the recrystallized zone is formed in addition to the fusion zone, heat affected zone, and base metal. Microstructures of the different zones in ultra-fine grained sheets are shown to be finer than those of as-received sheets. - Highlights: • Resistance spot welding process is optimized for joining of UFG steel sheets. • Optimum welding current and time are decreased with increasing the CGP pass number. • Microhardness at BM, HAZ, FZ and recrystallized zone is enhanced due to CGP.

  1. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naive data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
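For context, a generic pseudoprofile-style bootstrap of the AUC ratio can be sketched as below. This is not the paper's 2-phase algorithm, whose details are not given here; the design and concentration data are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # sampling times (h)

# Simulated sparse design: 6 subjects per time point, one paired
# (plasma, tissue) sample each; the true tissue-to-plasma ratio is 2.
plasma = 10.0 * np.exp(-0.4 * times) + rng.normal(0.0, 0.2, (6, 5))
tissue = 2.0 * plasma + rng.normal(0.0, 0.2, (6, 5))

def auc(profile):
    """Trapezoidal area under a concentration-time profile."""
    return np.sum(0.5 * (profile[1:] + profile[:-1]) * np.diff(times))

# Bootstrap: assemble pseudoprofiles by drawing one subject per time
# point (pairing preserved), then form the AUC ratio for each replicate.
ratios = []
for _ in range(500):
    rows = rng.integers(0, 6, size=5)
    cols = np.arange(5)
    ratios.append(auc(tissue[rows, cols]) / auc(plasma[rows, cols]))

ratio_est = np.median(ratios)                  # robust point estimate
ci = np.percentile(ratios, [2.5, 97.5])        # bootstrap uncertainty
```

Unlike naive data averaging, the replicate spread directly supplies the measure of uncertainty that the abstract notes is otherwise missing.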

  2. Blood clot detection using magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Khurshid, Hafsa; Friedman, Bruce; Berwin, Brent; Shi, Yipeng; Ness, Dylan B.; Weaver, John B.

    2017-05-01

    Deep vein thrombosis, the development of blood clots in the peripheral veins, is a very serious, life threatening condition that is prevalent in the elderly. To deliver proper treatment that enhances the survival rate, it is very important to detect thrombi early and at the point of care. We explored the ability of magnetic particle spectroscopy (MSB) to detect thrombus via specific binding of aptamer functionalized magnetic nanoparticles with the blood clot. MSB uses the harmonics produced by nanoparticles in an alternating magnetic field to measure the rotational freedom and, therefore, the bound state of the nanoparticles. The nanoparticles' relaxation time for Brownian rotation increases when bound [A.M. Rauwerdink and J. B. Weaver, Appl. Phys. Lett. 96, 1 (2010)]. The relaxation time can therefore be used to characterize the nanoparticle binding to thrombin in the blood clot. For longer relaxation times, the approach to saturation is more gradual reducing the higher harmonics and the harmonic ratio. The harmonic ratios of nanoparticles conjugated with anti-thrombin aptamers (ATP) decrease significantly over time with blood clot present in the sample medium, compared with nanoparticles without ATP. Moreover, the blood clot removed from the sample medium produced a significant MSB signal, indicating the nanoparticles are immobilized on the clot. Our results show that MSB could be a very useful non-invasive, quick tool to detect blood clots at the point of care so proper treatment can be used to reduce the risks inherent in deep vein thrombosis.

  3. Using ToxCast data to reconstruct dynamic cell state trajectories and estimate toxicological points of departure.

    EPA Pesticide Factsheets

Background: High-content imaging (HCI) allows simultaneous measurement of multiple cellular phenotypic changes and is an important tool for evaluating the biological activity of chemicals. Objectives: Our goal was to analyze dynamic cellular changes using HCI to identify the "tipping point" at which the cells did not show recovery towards a normal phenotypic state. Methods: HCI was used to evaluate the effects of 967 chemicals (in concentrations ranging from 0.4 to 200 μM) on HepG2 cells over a 72-hr exposure period. The HCI end points included p53, c-Jun, histone H2A.x, α-tubulin, histone H3, mitochondrial membrane potential, mitochondrial mass, cell cycle arrest, nuclear size, and cell number. A computational model was developed to interpret HCI responses as cell-state trajectories. Results: Analysis of cell-state trajectories showed that 336 chemicals produced tipping points and that HepG2 cells were resilient to the effects of 334 chemicals up to the highest concentration (200 μM) and duration (72 hr) tested. Tipping points were identified as concentration-dependent transitions in system recovery, and the corresponding critical concentrations were generally between 5 and 15 times (25th and 75th percentiles, respectively) lower than the concentration that produced any significant effect on HepG2 cells. The remaining 297 chemicals require more data before they can be placed in either of these categories. Conclusions: These findings show t

  4. Reverse engineering gene regulatory networks from measurement with missing values.

    PubMed

    Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong

    2016-12-01

    Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or all expression values at a single time point or a set of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule, and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfactory results on synthetic networks, in silico networks released as part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces a better inference of the model parameters and hence a more accurate prediction of the underlying GRN than the conventional Gaussian approximation (GA) filters, which ignore the missing data points.
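The core idea of filtering with missing measurements can be sketched in its simplest linear-Gaussian special case. The snippet below is illustrative only: the paper's PBGA filters handle nonlinear dynamics via quadrature rules, whereas here a 1-D Kalman-style filter simply skips the measurement update when a point is missing (marked `None`), so the posterior variance grows across the gap.

```python
def filter_with_missing(ys, q=0.1, r=0.5):
    """Minimal 1-D Gaussian filter for a random-walk state x_t = x_{t-1} + w_t
    observed as y_t = x_t + v_t. A measurement of None is treated as missing:
    the predict step still runs, but the update is skipped. (An illustrative
    stand-in for folding missing points into the sequential algorithm.)"""
    m, p = 0.0, 1.0                 # prior mean and variance
    trajectory = []
    for y in ys:
        p = p + q                   # predict: inflate variance by process noise
        if y is not None:           # update only when the point was observed
            k = p / (p + r)         # Kalman gain
            m = m + k * (y - m)
            p = (1.0 - k) * p
        trajectory.append((m, p))
    return trajectory

est = filter_with_missing([1.0, None, 1.2, None, None, 0.9])
```

Note how the posterior variance rises over missing points and shrinks again once a measurement arrives, which is exactly the behaviour a GA filter that ignores missingness fails to represent.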

  5. Spectroscopic properties of triangular silver nanoplates immobilized on polyelectrolyte multilayer-modified glass substrates

    NASA Astrophysics Data System (ADS)

    Rabor, Janice B.; Kawamura, Koki; Muko, Daiki; Kurawaki, Junichi; Niidome, Yasuro

    2017-07-01

    Fabrication of surface-immobilized silver nanostructures with reproducible plasmonic properties by dip-coating technique is difficult due to shape alteration. To address this challenge, we used a polyelectrolyte multilayer to promote immobilization of as-received triangular silver nanoplates (TSNP) on a glass substrate through electrostatic interaction. The substrate-immobilized TSNP were characterized by absorption spectrophotometry and scanning electron microscopy. The bandwidth and peak position of localized surface plasmon resonance (LSPR) bands can be tuned by simply varying the concentration of the colloidal solution and immersion time. TSNP immobilized from a higher concentration of colloidal solution with longer immersion time produced broadened LSPR bands in the near-IR region, while a lower concentration with shorter immersion time produced narrower bands in the visible region. The shape of the nanoplates was retained even at long immersion time. Analysis of peak positions and bandwidths also revealed the point at which the main species of the immobilization had been changed from isolates to aggregates.

  6. Imprecise results: Utilizing partial computations in real-time systems

    NASA Technical Reports Server (NTRS)

    Lin, Kwei-Jay; Natarajan, Swaminathan; Liu, Jane W.-S.

    1987-01-01

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result except the approximate results produced by the computations up to that point will be available. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically, and if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code which can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project is described which supports imprecise computations using these techniques. Also presented is a general model of imprecise computations using these techniques, as well as one which takes into account the influence of the environment, showing where the latter approach fits into this model.
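The milestone approach can be sketched in a few lines. The example below is illustrative only (the Concord project is not Python): a Newton iteration for a square root records a milestone after each refinement step, so that if the deadline expires before convergence, the last recorded, imprecise result is still available.

```python
import time

def sqrt_with_milestones(x, deadline_s):
    """Newton iteration for sqrt(x) that records a milestone after every
    refinement step. If the deadline arrives before convergence, the last
    recorded milestone serves as the imprecise result."""
    milestones = [max(x, 1.0)]           # initial (very imprecise) result
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        est = milestones[-1]
        est = 0.5 * (est + x / est)      # one refinement step
        milestones.append(est)           # milestone: persist the partial result
        if abs(est * est - x) < 1e-12:   # converged before the deadline
            break
    return milestones[-1], milestones

result, trail = sqrt_with_milestones(2.0, deadline_s=0.05)
```

The sieve approach would instead wrap optional refinement sections in a time check and skip them entirely when the remaining budget is insufficient.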

  7. Lidars for smoke and dust cloud diagnostics

    NASA Astrophysics Data System (ADS)

    Fujimura, S. F.; Warren, R. E.; Lutomirski, R. F.

    1980-11-01

    An algorithm that integrates a time-resolved lidar signature for use in estimating transmittance, extinction coefficient, mass concentration, and CL values generated under battlefield conditions is applied to lidar signatures measured during the DIRT-I tests. Estimates are given for the dependence of the inferred transmittance and extinction coefficient on uncertainties in parameters such as the obscurant backscatter-to-extinction ratio. The enhanced reliability in estimating transmittance through use of a target behind the obscurant cloud is discussed. It is found that the inversion algorithm can produce reliable estimates of smoke or dust transmittance and extinction from all points within the cloud for which a resolvable signal can be detected, and that a single point calibration measurement can convert the extinction values to mass concentration for each resolvable signal point.

  8. IR thermography for the assessment of the thermal conductivity of aluminum alloys

    NASA Astrophysics Data System (ADS)

    Nazarov, S.; Rossi, S.; Bison, P.; Calliari, I.

    2017-05-01

    Aluminium alloys are here considered as a structural material for aerospace applications, guaranteeing lightness and strength at the same time. As aluminium alone does not perform particularly well mechanically, in this experimental solution it is alloyed with lithium added at 6 % by weight. To further increase the strength of the material, two new alloys are produced by adding 0.5 % by weight of the rare-earth elements neodymium (Nd) and yttrium (Y). The improvement of the mechanical properties is measured by means of hardness tests. At the same time, the thermophysical properties are measured as well, at various temperatures from 80 °C to 500 °C. Thermal diffusivity is measured by laser flash equipment in vacuum. One possible drawback of the Al-Li alloy produced at such a high percentage of Li (6 %) is a pronounced anisotropy, which is evaluated by IR thermography thanks to its imaging capability, allowing simultaneous measurement of both the in-plane and through-depth thermal diffusivity.

  9. State-space modeling of population sizes and trends in Nihoa Finch and Millerbird

    USGS Publications Warehouse

    Gorresen, P. Marcos; Brinck, Kevin W.; Camp, Richard J.; Farmer, Chris; Plentovich, Sheldon M.; Banko, Paul C.

    2016-01-01

    Both of the 2 passerines endemic to Nihoa Island, Hawai‘i, USA—the Nihoa Millerbird (Acrocephalus familiaris kingi) and Nihoa Finch (Telespiza ultima)—are listed as endangered by federal and state agencies. Their abundances have been estimated by irregularly implemented fixed-width strip-transect sampling from 1967 to 2012, from which area-based extrapolation of the raw counts produced highly variable abundance estimates for both species. To evaluate an alternative survey method and improve abundance estimates, we conducted variable-distance point-transect sampling between 2010 and 2014. We compared our results to those obtained from strip-transect samples. In addition, we applied state-space models to derive improved estimates of population size and trends from the legacy time series of strip-transect counts. Both species were fairly evenly distributed across Nihoa and occurred in all or nearly all available habitat. Population trends for Nihoa Millerbird were inconclusive because of high within-year variance. Trends for Nihoa Finch were positive, particularly since the early 1990s. Distance-based analysis of point-transect counts produced mean estimates of abundance similar to those from strip-transects but was generally more precise. However, both survey methods produced biologically unrealistic variability between years. State-space modeling of the long-term time series of abundances obtained from strip-transect counts effectively reduced uncertainty in both within- and between-year estimates of population size, and allowed short-term changes in abundance trajectories to be smoothed into a long-term trend.

  10. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    NASA Technical Reports Server (NTRS)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  11. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    PubMed

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
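The "horizontal" construction of a CPP can be sketched directly: a reconstructed tree on n extant species is generated by drawing n - 1 i.i.d. node depths, one per vertical edge added. The snippet below is a minimal illustration; the exponential depth distribution is an assumed example, not one derived from the paper's model classes.

```python
import random

def simulate_cpp_tree(n_tips, draw_depth):
    """Coalescent point process: a reconstructed tree on n_tips extant
    species is encoded by n_tips - 1 i.i.d. node depths, where the i-th
    draw is the coalescence time (before the present) between consecutive
    tips i and i+1. Each new vertical edge starts at a random speciation
    time and ends at the present."""
    return [draw_depth() for _ in range(n_tips - 1)]

random.seed(7)
# Illustrative choice: exponentially distributed depths with rate 1.
depths = simulate_cpp_tree(6, lambda: random.expovariate(1.0))
root_depth = max(depths)  # depth of the root of the reconstructed tree
```

The independence of the draws is what makes likelihood computation and simulation of reconstructed trees so fast under the CPP description.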

  12. Intraocular Pressure Changes With Positioning During Laparoscopy

    PubMed Central

    Onakpoya, Oluwatoyin H.; Adenekan, Anthony T.; Awe, Oluwaseun. O.

    2016-01-01

    Background and Objectives: Pneumoperitoneum during laparoscopy can produce changes in intraocular pressure (IOP) that may be influenced by several factors. In this study, we investigated changes in IOP during laparoscopy with different positioning. Methods: We recruited adult patients without eye disease scheduled to undergo laparoscopic operation requiring a reverse Trendelenburg tilt (rTr; group A; n = 20) or Trendelenburg tilt (Tr; Group B; n = 20). IOP was measured at 7 time points (T1–T7). All procedures were performed with standardized anaesthetic protocol. Mean arterial pressure (MAP), heart rate (HR), peak and plateau airway pressure, and end-tidal carbon dioxide (ETCO2) measurements were taken at each time point. Results: Both groups were similar in age, sex, mean body mass index (BMI), duration of surgery, and preoperative IOP. A decrease in IOP was observed in both groups after induction of anaesthesia (T2), whereas induction of pneumoperitoneum produced a mild increase in IOP (T3) in both groups. The Trendelenburg tilt produced IOP elevations in 80% of patients compared to 45% after the reverse Trendelenburg tilt (P = .012). A significant IOP increase of 5 mm Hg or more was recorded in 3 (15%) patients in the Trendelenburg tilt group and in none in the reverse Trendelenburg group. At T7, IOP had returned to preoperative levels in all but 3 (15%) in the Trendelenburg and 1 (5%) in the reverse Trendelenburg group. Reversible changes were observed in the MAP, HR, ETCO2, and airway pressures in both groups. Conclusions: IOP changes induced by laparoscopy are realigned after evacuation of pneumoperitoneum. A Trendelenburg tilt however produced significant changes that may require careful patient monitoring during laparoscopic procedures. PMID:28028381

  13. Effects of methanol-to-oil ratio, catalyst amount and reaction time on the FAME yield by in situ transesterification of rubber seeds (Hevea brasiliensis)

    NASA Astrophysics Data System (ADS)

    Abdulkadir, Bashir Abubakar; Uemura, Yoshimitsu; Ramli, Anita; Osman, Noridah B.; Kusakabe, Katsuki; Kai, Takami

    2014-10-01

    In this research, biodiesel is produced by the in situ (direct) transesterification method from rubber seeds using KOH as a catalyst. The influence of methanol-to-seeds mass ratio, reaction duration, and catalyst loading was investigated. The results show that the best conditions are a seeds-to-methanol ratio of 1:6 (10 g seeds with 60 g methanol), a 120-minute reaction time, and a catalyst loading of 3.0 g. The maximum FAME yield obtained was 70 %. These findings support FAME production from rubber tree seeds by direct transesterification as an alternative to diesel fuel. In addition, significant properties of the biodiesel, such as cloud point, density, pour point, specific gravity, and viscosity, were investigated.

  14. Diversionary device history and revolutionary advancements.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, Paul W.; Grubelich, Mark Charles

    Diversionary devices, also known as flash bangs or stun grenades, were first employed about three decades ago. These devices produce a loud bang accompanied by a brilliant flash of light and are employed to temporarily distract or disorient an adversary by overwhelming their visual and auditory senses in order to gain a tactical advantage. Early devices had numerous shortcomings. Over time, many of these deficiencies were identified and corrected. This evolutionary process led to today's modern diversionary devices. These present-day conventional diversionary devices have undergone evolutionary changes but operate in the same manner as their predecessors. In order to produce the loud bang and brilliant flash of light, a flash powder mixture, usually a combination of potassium perchlorate and aluminum powder, is ignited to produce an explosion. In essence, these diversionary devices are small pyrotechnic bombs that produce a high point-source pressure in order to achieve the desired far-field effect. This high point-source pressure can make these devices a hazard to the operator, adversaries, and hostages even though they are intended for 'less than lethal' roles. A revolutionary diversionary device has been developed that eliminates this high point-source pressure problem and eliminates the need for the hazardous pyrotechnic flash powder composition. This new diversionary device employs a fuel charge that is expelled and ignited in the atmosphere. This process is similar to a fuel-air or thermobaric explosion, except that it is a deflagration, not a detonation, thereby reducing the overpressure hazard. This technology reduces the hazard associated with diversionary devices for all involved with their manufacture, transport, and use. An overview of the history of diversionary device development and developments at Sandia National Laboratories will be presented.

  15. Delivery and application of precise timing for a traveling wave powerline fault locator system

    NASA Technical Reports Server (NTRS)

    Street, Michael A.

    1990-01-01

    The Bonneville Power Administration (BPA) has successfully operated an in-house developed powerline fault locator system since 1986. The BPA fault locator system consists of remotes installed at cardinal power transmission line system nodes and a central master which polls the remotes for traveling wave time-of-arrival data. A power line fault produces a fast rise-time traveling wave which emanates from the fault point and propagates throughout the power grid. The remotes time-tag the traveling wave leading edge as it passes through the power system cardinal substation nodes. A synchronizing pulse transmitted via the BPA analog microwave system on a wideband channel synchronizes the time-tagging counters in the remote units to a differential accuracy of better than one microsecond. The remote units correct the raw time tags for synchronizing pulse propagation delay and return these corrected values to the fault locator master. The master then locates the power system disturbance source using the collected time tags. The system design objective is a fault location accuracy of 300 meters. BPA's fault locator system operation, error-producing phenomena, and method of distributing precise timing are described.

  16. Modeling an enhanced ridesharing system with meet points and time windows

    PubMed Central

    Li, Xin; Hu, Sangen; Deng, Kai

    2018-01-01

    With the rise of e-hailing services in urban areas, ride sharing is becoming a common mode of transportation. This paper presents a mathematical model to design an enhanced ridesharing system with meet points and users' preferred time windows. The introduction of meet points allows ridesharing operators to trade off the benefit of saving en-route delays against the cost of additional walking for passengers who are collectively picked up or dropped off. This extension of the traditional door-to-door ridesharing problem brings more operational flexibility in urban areas (where potential requests may be densely distributed in a neighborhood), and thus can achieve better system performance in terms of reducing total travel time and increasing the number of served passengers. We design and implement a Tabu-based meta-heuristic algorithm to solve the proposed mixed integer linear program (MILP). To evaluate the validity and effectiveness of the proposed model and solution algorithm, several scenarios are designed and also solved to optimality by CPLEX. Results demonstrate that (i) a detailed route plan with passenger assignment to meet points can be obtained with en-route delay savings; (ii) compared to CPLEX, the meta-heuristic algorithm has the advantage of higher computational efficiency and produces good-quality solutions within 8% to 15% of the global optima; and (iii) introducing meet points to a ridesharing system reduces total travel time by 2.7% to 3.8% for small-scale ridesharing systems. More benefits are expected for ridesharing systems with large fleets. This study provides a new tool to efficiently operate a ridesharing system, particularly when ridesharing vehicles are in short supply during peak hours. Traffic congestion mitigation can also be expected. PMID:29715302

  17. An automated microfluidic DNA microarray platform for genetic variant detection in inherited arrhythmic diseases.

    PubMed

    Huang, Shu-Hong; Chang, Yu-Shin; Juang, Jyh-Ming Jimmy; Chang, Kai-Wei; Tsai, Mong-Hsun; Lu, Tzu-Pin; Lai, Liang-Chuan; Chuang, Eric Y; Huang, Nien-Tsu

    2018-03-12

    In this study, we developed an automated microfluidic DNA microarray (AMDM) platform for point mutation detection of genetic variants in inherited arrhythmic diseases. The platform allows for automated and programmable reagent sequencing under precise conditions of hybridization flow and temperature control. It is composed of a commercial microfluidic control system, a microfluidic microarray device, and a temperature control unit. The automated and rapid hybridization process can be performed in the AMDM platform using Cy3 labeled oligonucleotide exons of SCN5A genetic DNA, which produces proteins associated with sodium channels abundant in the heart (cardiac) muscle cells. We then introduce a graphene oxide (GO)-assisted DNA microarray hybridization protocol to enable point mutation detection. In this protocol, a GO solution is added after the staining step to quench dyes bound to single-stranded DNA or non-perfectly matched DNA, which can improve point mutation specificity. As proof-of-concept we extracted the wild-type and mutant of exon 12 and exon 17 of SCN5A genetic DNA from patients with long QT syndrome or Brugada syndrome by touchdown PCR and performed a successful point mutation discrimination in the AMDM platform. Overall, the AMDM platform can greatly reduce laborious and time-consuming hybridization steps and prevent potential contamination. Furthermore, by introducing the reciprocating flow into the microchannel during the hybridization process, the total assay time can be reduced to 3 hours, which is 6 times faster than the conventional DNA microarray. Given the automatic assay operation, shorter assay time, and high point mutation discrimination, we believe that the AMDM platform has potential for low-cost, rapid and sensitive genetic testing in a simple and user-friendly manner, which may benefit gene screening in medical practice.

  18. Is there a trade-off between longevity and quality of life in Grossman's pure investment model?

    PubMed

    Eisenring, C

    2000-12-01

    The question is posed whether an individual maximizes lifetime or trades off longevity for quality of life in Grossman's pure investment (PI)-model. It is shown that the answer critically hinges on the assumed production function for healthy time. If the production function for healthy time produces a trade-off between life-span and quality of life, one has to solve a sequence of fixed time problems. The one offering maximal intertemporal utility determines optimal longevity. Comparative static results of optimal longevity for a simplified version of the PI-model are derived. The obtained results predict that higher initial endowments of wealth and health, a rise in the wage rate, or improvements in the technology of producing healthy time, all increase the optimal length of life. On the other hand, optimal longevity is decreasing in the depreciation and interest rate. From a technical point of view, the paper illustrates that a discrete time equivalent to the transversality condition for optimal longevity employed in continuous optimal control models does not exist. Copyright 2000 John Wiley & Sons, Ltd.

  19. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  20. The Role of Interface Shape on the Impact Characteristics and Cranial Fracture Patterns Using the Immature Porcine Head Model.

    PubMed

    Vaughan, Patrick E; Vogelsberg, Caitlin C M; Vollner, Jennifer M; Fenton, Todd W; Haut, Roger C

    2016-09-01

    The forensic literature suggests that when adolescents fall onto edged and pointed surfaces, depressed fractures can occur at low energy levels. This study documents impact biomechanics and fracture characteristics of infant porcine skulls dropped onto flat, curved, edged, and focal surfaces. Results showed that the energy needed for fracture initiation was nearly four times higher against a flat surface than against the other surfaces. While characteristic measures of fracture such as number and length of fractures did not vary with impact surface shape, the fracture patterns did depend on impact surface shape. While experimental impacts against the flat surface produced linear fractures initiating at sutural boundaries peripheral to the point of impact (POI), more focal impacts produced depressed fractures initiating at the POI. The study supported case-based forensic literature suggesting cranial fracture patterns depend on impact surface shape and that fracture initiation energy is lower for more focal impacts. © 2016 American Academy of Forensic Sciences.

  1. Effect of target color and scanning geometry on terrestrial LiDAR point-cloud noise and plane fitting

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Martinez, Aaron

    2018-01-01

    Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, semi-gloss targets manage to reduce noise in dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can aid users to select appropriate instrumentation and improve planning of terrestrial LiDAR data-acquisition.
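As a concrete illustration of the plane-fitting step that the residual analysis above rests on, the sketch below fits z = a*x + b*y + c to a point cloud by least squares and returns per-point residuals. It is a minimal stand-in (no pivoting, assumes a non-degenerate point set), not the authors' processing chain.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to a point cloud, plus the
    per-point residuals. Assumes a non-degenerate (non-collinear) point set;
    the tiny 3x3 solve is done without pivoting."""
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sxx += x * x
        sxy += x * y
        syy += y * y
        sx += x
        sy += y
        sxz += x * z
        syz += y * z
        sz += z
    # Normal equations (A^T A) p = A^T z for p = (a, b, c).
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    # Forward elimination to upper-triangular form.
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            for k in range(3):
                M[j][k] -= f * M[i][k]
            v[j] -= f * v[i]
    # Back substitution.
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        p[i] = (v[i] - sum(M[i][k] * p[k] for k in range(i + 1, 3))) / M[i][i]
    a, b, c = p
    residuals = [z - (a * x + b * y + c) for x, y, z in points]
    return (a, b, c), residuals

# Noise-free plane z = 2x + 3y + 1: the fit recovers it with zero residuals.
coeffs, residuals = fit_plane([(0, 0, 1.0), (1, 0, 3.0), (0, 1, 4.0), (1, 1, 6.0)])
```

In the study's setting, the spread of these residuals (rather than their mean) is what varies with target color, sheen, distance, and incidence angle.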

  2. Movies of Finite Deformation within Western North American Plate Boundary Zone

    NASA Astrophysics Data System (ADS)

    Holt, W. E.; Birkes, B.; Richard, G. A.

    2004-12-01

    Animations of finite strain within deforming continental zones can be an important tool for both education and research. We present finite strain models for western North America. We have found that these moving images, which portray plate motions, landform uplift, and subsidence, are highly useful for enabling students to conceptualize the dramatic changes that can occur within plate boundary zones over geologic time. These models use instantaneous rates of strain inferred from both space geodetic observations and Quaternary fault slip rates. Geodetic velocities and Quaternary strain rates are interpolated to define a continuous, instantaneous velocity field for western North America. This velocity field is then used to track topography points and fault locations through time (both backward and forward in time), using small time steps, to produce a 6 million year image. The strain rate solution is updated at each time step, accounting for changes in boundary conditions of plate motion, and changes in fault orientation. Assuming zero volume change, Airy isostasy, and a ratio of erosion rate to tectonic uplift rate, the topography is also calculated as a function of time. The animations provide interesting moving images of the transform boundary, highlighting ongoing extension and subsidence, convergence and uplift, and large translations taking place within the strike-slip regime. Moving images of the strain components, uplift volume through time, and inferred erosion volume through time, have also been produced. These animations are an excellent demonstration for education purposes and also hold potential as an important tool for research enabling the quantification of finite rotations of fault blocks, potential erosion volume, uplift volume, and the influence of climate on these parameters. The models, however, point to numerous shortcomings of taking constraints from instantaneous calculations to provide insight into time evolution and reconstruction models. 
More rigorous calculations are needed to account for changes in dynamics (body forces) through time and resultant changes in fault behavior and crustal rheology.
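The point-tracking step described above (small time steps, run both forward and backward in time) can be sketched with explicit Euler integration through a velocity field. The field, step size, and units in the example are illustrative assumptions, not the authors' interpolated geodetic solution.

```python
def advect(points, velocity, t_total, dt=0.01):
    """Track material points through a (possibly time-dependent) velocity
    field velocity(x, y, t) -> (vx, vy) using small explicit Euler steps.
    A negative t_total integrates backward in time, 'rewinding' the
    deformation as in the reconstruction animations."""
    step = dt if t_total >= 0 else -dt
    n_steps = round(abs(t_total) / dt)
    pts = [tuple(p) for p in points]
    t = 0.0
    for _ in range(n_steps):
        pts = [(x + step * velocity(x, y, t)[0],
                y + step * velocity(x, y, t)[1]) for (x, y) in pts]
        t += step
    return pts

# Illustrative uniform field: 1 unit of eastward motion per Myr, over 6 Myr.
moved = advect([(0.0, 0.0)], lambda x, y, t: (1.0, 0.0), 6.0, dt=0.1)
rewound = advect(moved, lambda x, y, t: (1.0, 0.0), -6.0, dt=0.1)
```

In the actual models the velocity field is re-solved at each step so that changing plate-motion boundary conditions and fault orientations feed back into the trajectories.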

  3. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.

    1981-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.
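The operation the fast polynomial transform accelerates, the two-dimensional cyclic correlation of the raw echo data with a point-target impulse response, can be stated directly. The brute-force sketch below only defines that result for reference; it is O(N^4) and purely illustrative, not the FPT algorithm itself.

```python
def cyclic_correlate_2d(data, kernel):
    """Direct 2-D cyclic (circular) correlation of data with kernel.
    This quadruple loop defines the quantity the fast polynomial
    transform computes efficiently; it is not meant for production use."""
    rows, cols = len(data), len(data[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for u in range(rows):
                for v in range(cols):
                    # Indices wrap around: cyclic rather than linear correlation.
                    acc += data[(i + u) % rows][(j + v) % cols] * kernel[u][v]
            out[i][j] = acc
    return out

# Correlating with a unit impulse at the origin returns the data unchanged:
data = [[1.0, 2.0], [3.0, 4.0]]
impulse = [[1.0, 0.0], [0.0, 0.0]]
result = cyclic_correlate_2d(data, impulse)
```

Treating the correlation as genuinely two-dimensional (rather than as two one-dimensional passes) is what avoids the distortions the abstract mentions.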

  4. [Abuse, dependence and intoxication of substances].

    PubMed

    Wada, Kiyoshi

    2015-09-01

    As for substance-related disorders, there were several differences between ICD-10 and DSM-IV; however, the concept of "dependence" was essential to both sets of criteria. DSM-5, published in 2013, eliminated dependence, which has caused confusion, so it is important to revisit the concept. "Abuse" is the self-administration of a drug in violation of social norms. Repeated abuse results in dependence, a state of loss of control over drug use due to craving. Abuse can produce "acute intoxication", and repeated abuse under dependence can produce "chronic intoxication". It is important to understand abuse, dependence and intoxication in terms of their relationship over the time course.

  5. Apples and oranges: don't compare levelized cost of renewables: Joskow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2010-12-15

    MIT Prof. Paul Joskow points out that the levelized metric is inappropriate for comparing intermittent generating technologies like wind and solar with dispatchable generating technologies like nuclear, gas combined cycle, and coal. The levelized comparison fails to take into account differences in the production profiles of intermittent and dispatchable generating technologies and the associated large variations in the market value of the electricity they supply. When the electricity is produced by an intermittent generating technology, the level of output and the value of the electricity at the times when the output is produced are key variables that should be taken into account.

  6. Linking gene regulation and the exo-metabolome: A comparative transcriptomics approach to identify genes that impact on the production of volatile aroma compounds in yeast

    PubMed Central

    Rossouw, Debra; Næs, Tormod; Bauer, Florian F

    2008-01-01

    Background 'Omics' tools provide novel opportunities for system-wide analysis of complex cellular functions. Secondary metabolism is an example of a complex network of biochemical pathways, which, although well mapped from a biochemical point of view, is not well understood with regard to its physiological roles and genetic and biochemical regulation. Many of the metabolites produced by this network, such as higher alcohols and esters, are significant aroma impact compounds in fermentation products, and different yeast strains are known to produce highly divergent aroma profiles. Here, we investigated whether we can predict the impact of specific genes of known or unknown function on this metabolic network by combining whole transcriptome and partial exo-metabolome analysis. Results For this purpose, the gene expression levels of five different industrial wine yeast strains that produce divergent aroma profiles were established at three different time points of alcoholic fermentation in synthetic wine must. A matrix of gene expression data was generated and integrated with the concentrations of volatile aroma compounds measured at the same time points. This relatively unbiased approach to the study of volatile aroma compounds enabled us to identify candidate genes for aroma profile modification. Five of these genes, namely YMR210W, BAT1, AAD10, AAD14 and ACS1, were selected for overexpression in commercial wine yeast, VIN13. Analysis of the data shows a statistically significant correlation between the changes in the exo-metabolome of the overexpressing strains and the changes that were predicted based on the unbiased alignment of transcriptomic and exo-metabolomic data. Conclusion The data suggest that a comparative transcriptomics and metabolomics approach can be used to identify the metabolic impacts of the expression of individual genes in complex systems, and the amenability of transcriptomic data to direct applications of biotechnological relevance.
PMID:18990252
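    The core of such an alignment is correlating every gene's expression profile with every metabolite's concentration profile across the same strain/time-point samples. A minimal sketch with hypothetical random data (not the study's matrices) is:

```python
import numpy as np

def gene_metabolite_correlations(expr, metab):
    """Pearson correlation of each gene expression profile (rows of `expr`,
    one column per strain/time-point sample) against each volatile compound
    profile (rows of `metab` over the same samples)."""
    ge = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    me = (metab - metab.mean(axis=1, keepdims=True)) / metab.std(axis=1, keepdims=True)
    return ge @ me.T / expr.shape[1]  # genes x metabolites matrix of r values

rng = np.random.default_rng(0)
expr = rng.normal(size=(5, 15))       # 5 hypothetical genes, 15 samples
metab = np.vstack([expr[0] * 2 + 1,   # compound tracking gene 0 exactly
                   rng.normal(size=15)])
r = gene_metabolite_correlations(expr, metab)
```

    Genes whose profiles correlate strongly with a compound become candidates for overexpression, as done in the study.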

  7. Multibeam 3D Underwater SLAM with Probabilistic Registration.

    PubMed

    Palomer, Albert; Ridao, Pere; Ribas, David

    2016-04-20

    This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) algorithm using a multibeam echosounder to produce high-consistency underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and also the potential of falling into local minima during the registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real world datasets: First, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan and tilt unit.
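    The coarse point-to-point stage of ICP rests on a closed-form rigid alignment of already-associated point pairs. A minimal sketch of that inner step (via SVD, not the authors' probabilistic implementation, and without the nearest-neighbour re-association loop of a full ICP) is:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation (via SVD) and translation mapping
    paired points `src` onto `dst` -- the inner step of point-to-point ICP.
    A full ICP would re-associate nearest neighbours and iterate, with a
    point-to-plane step for fine registration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation about z plus a translation from 4 paired points.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
dst = src @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = best_rigid_transform(src, dst)
```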

  8. Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.

    PubMed

    Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D

    2013-01-01

    We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists in implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher resolution runs and grid-point interpolation measurements of maximum current and vorticity.

  9. Stability of beta-titanium T-loop springs preactivated by gradual curvature

    PubMed Central

    Caldas, Sergei Godeiro Fernandes Rabelo; Martins, Renato Parsekian; de Araújo, Marcela Emílio; Galvão, Marília Regalado; da Silva, Roberto Soares; Martins, Lídia Parsekian

    2017-01-01

    ABSTRACT Objective: Evaluate changes in the force system of T-Loop Springs (TLS) preactivated by curvature, due to stress relaxation. Methods: Ninety TLSs measuring 6 x 10 mm, produced with 0.017 x 0.025-in TMA® wire and preactivated by gradual curvature, were randomly distributed into nine groups according to time point of evaluation. Group 1 was tested immediately after spring preactivation and stress relief, by trial activation. The other eight groups were tested after 24, 48 and 72 hours, 1, 2, 4, 8 and 12 weeks, respectively. Using a moment transducer coupled to a digital extensometer indicator adapted to a universal testing machine, the amount of horizontal force, moment and moment-to-force ratios were recorded at every 0.5 mm of deactivation from 5 mm of the initial activation, in an interbracket distance of 23 mm. Results: The horizontal forces decreased gradually among the groups (p < 0.001) and the moments showed a significant and slow decrease over time among the groups (p < 0.001). All groups produced similar M/F ratios (p = 0.532), with no influence of time. Conclusions: The TLSs preactivated by curvature suffered a gradual deformation over time, which affected the force system, specifically the moments and, consequently, the horizontal forces produced. PMID:29364381

  10. Stability of beta-titanium T-loop springs preactivated by gradual curvature.

    PubMed

    Caldas, Sergei Godeiro Fernandes Rabelo; Martins, Renato Parsekian; Araújo, Marcela Emílio de; Galvão, Marília Regalado; Silva Júnior, Roberto Soares da; Martins, Lídia Parsekian

    2017-01-01

    Evaluate changes in the force system of T-Loop Springs (TLS) preactivated by curvature, due to stress relaxation. Ninety TLSs measuring 6 x 10 mm, produced with 0.017 x 0.025-in TMA® wire and preactivated by gradual curvature, were randomly distributed into nine groups according to time point of evaluation. Group 1 was tested immediately after spring preactivation and stress relief, by trial activation. The other eight groups were tested after 24, 48 and 72 hours, 1, 2, 4, 8 and 12 weeks, respectively. Using a moment transducer coupled to a digital extensometer indicator adapted to a universal testing machine, the amount of horizontal force, moment and moment-to-force ratios were recorded at every 0.5 mm of deactivation from 5 mm of the initial activation, in an interbracket distance of 23 mm. The horizontal forces decreased gradually among the groups (p < 0.001) and the moments showed a significant and slow decrease over time among the groups (p < 0.001). All groups produced similar M/F ratios (p = 0.532), with no influence of time. The TLSs preactivated by curvature suffered a gradual deformation over time, which affected the force system, specifically the moments and, consequently, the horizontal forces produced.

  11. Measurement of electron density transients in pulsed RF discharges using a frequency boxcar hairpin probe

    NASA Astrophysics Data System (ADS)

    Peterson, David; Coumou, David; Shannon, Steven

    2015-11-01

    Time-resolved electron density measurements in pulsed RF discharges are shown using a hairpin resonance probe with low-cost electronics, on par with normal Langmuir probe boxcar mode operation. A time resolution of 10 microseconds has been demonstrated. A signal generator produces the applied microwave frequency; the reflected waveform is passed through a directional coupler and filtered to remove the RF component. The signal is heterodyned with a frequency mixer and rectified to produce a DC signal read by an oscilloscope. At certain points during the pulse, the plasma density is such that the applied frequency is the same as the resonance frequency of the probe/plasma system, creating reflected signal dips. The applied microwave frequency is shifted in small increments in a frequency boxcar routine to determine the density as a function of time. A DC sheath correction is applied for the grounded probe, producing low-cost, high-fidelity, and highly reproducible electron density measurements. The measurements are made in both inductively and capacitively coupled systems, the latter driven by multiple frequencies where a subset of these frequencies are pulsed. Measurements are compared to previously published results, time-resolved OES, and in-line measurement of plasma impedance. This work is supported by the NSF DOE partnership on plasma science, the NSF GOALI program, and MKS Instruments.
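    Converting the measured resonance shift to a density uses the commonly quoted hairpin probe relation n_e = (f_r² − f_0²)/0.81 × 10¹⁰ cm⁻³ with frequencies in GHz (an assumption here; sheath corrections such as the one in the abstract are applied on top of this):

```python
def hairpin_density(f_r_ghz, f_0_ghz):
    """Electron density (cm^-3) from the shift of a hairpin probe's
    resonance frequency f_r relative to its vacuum resonance f_0,
    using the standard collisionless, sheath-free relation."""
    return (f_r_ghz**2 - f_0_ghz**2) / 0.81 * 1e10

# Illustrative numbers: a probe with 2.0 GHz vacuum resonance shifting
# to 2.5 GHz in plasma.
n_e = hairpin_density(2.5, 2.0)
```

    The frequency boxcar routine would evaluate this at each pulse phase where a reflected-signal dip locates f_r.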

  12. Gas-Dynamic Designing of the Exhaust System for the Air Brake

    NASA Astrophysics Data System (ADS)

    Novikova, Yu; Goriachkin, E.; Volkov, A.

    2018-01-01

    Each gas turbine engine is tested several times during its life cycle. The test equipment includes an air brake that absorbs the power produced by the gas turbine engine. In actual conditions, the outlet pressure of the air brake does not change and is equal to atmospheric pressure. For this reason, a special exhaust system must be designed for the air brake. The mission of the exhaust system is to provide the required level of backpressure at the outlet of the air brake. The backpressure is required for the necessary power absorption by the air brake (operation of the air brake at the required points on the performance curves). The paper describes the development of the gas-dynamic duct, the design of the outlet guide vane and the creation of a unified exhaust system for the air brake. Using a unified exhaust system involves moving the operating point on the performance curve further away from the design point. However, using one exhaust system instead of two will significantly reduce cost and time.

  13. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
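    The voxel-based meshing idea can be sketched as mapping the point cloud onto occupied voxel cells, each of which would become one hexahedral element; this is a simplified illustration, not the authors' procedure:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map a 3-D point cloud to the set of occupied voxel indices; in a
    voxel-based mesh each occupied voxel becomes one solid element."""
    idx = np.floor(np.asarray(points) / voxel_size).astype(int)
    return np.unique(idx, axis=0)

# Two points sharing a 0.5 m voxel and one point in a neighbouring voxel
# yield two occupied cells.
pts = [[0.1, 0.1, 0.1], [0.2, 0.3, 0.4], [0.9, 0.1, 0.1]]
vox = voxelize(pts, 0.5)
```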

  14. Adaptive Neuro-Fuzzy Modeling of UH-60A Pilot Vibration

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Malki, Heidar A.; Langari, Reza

    2003-01-01

    Adaptive neuro-fuzzy relationships have been developed to model the UH-60A Black Hawk pilot floor vertical vibration. A 200 point database that approximates the entire UH-60A helicopter flight envelope is used for training and testing purposes. The NASA/Army Airloads Program flight test database was the source of the 200 point database. The present study is conducted in two parts. The first part involves level flight conditions and the second part involves the entire (200 point) database including maneuver conditions. The results show that a neuro-fuzzy model can successfully predict the pilot vibration. Also, it is found that the training phase of this neuro-fuzzy model takes only two or three iterations to converge for most cases. Thus, the proposed approach produces a potentially viable model for real-time implementation.

  15. Anti-pointing is mediated by a perceptual bias of target location in left and right visual space.

    PubMed

    Heath, Matthew; Maraj, Anika; Gradkowski, Ashlee; Binsted, Gordon

    2009-01-01

    We sought to determine whether mirror-symmetrical limb movements (so-called anti-pointing) elicit a pattern of endpoint bias commensurate with perceptual judgments. In particular, we examined whether asymmetries related to the perceptual over- and under-estimation of target extent in respective left and right visual space impacts the trajectories of anti-pointing. In Experiment 1, participants completed direct (i.e. pro-pointing) and mirror-symmetrical (i.e. anti-pointing) responses to targets in left and right visual space with their right hand. In line with the anti-saccade literature, anti-pointing yielded longer reaction times than pro-pointing: a result suggesting increased top-down processing for the sensorimotor transformations underlying a mirror-symmetrical response. Most interestingly, pro-pointing yielded comparable endpoint accuracy in left and right visual space; however, anti-pointing produced an under- and overshooting bias in respective left and right visual space. In Experiment 2, we replicated the findings from Experiment 1 and further demonstrate that the endpoint bias of anti-pointing is independent of the reaching limb (i.e. left vs. right hand) and between-task differences in saccadic drive. We thus propose that the visual field-specific endpoint bias observed here is related to the cognitive (i.e. top-down) nature of anti-pointing and the corollary use of visuo-perceptual networks to support the sensorimotor transformations underlying such actions.

  16. Water ring-bouncing on repellent singularities.

    PubMed

    Chantelot, Pierre; Mazloomi Moqaddam, Ali; Gauthier, Anaïs; Chikatamarla, Shyam S; Clanet, Christophe; Karlin, Ilya V; Quéré, David

    2018-03-28

    Texturing a flat superhydrophobic substrate with point-like superhydrophobic macrotextures of the same repellency makes impacting water droplets take off as rings, which leads to shorter bouncing times than on a flat substrate. We investigate the contact time reduction on such elementary macrotextures through experiment and simulations. We understand the observations by decomposing the impacting drop reshaped by the defect into sub-units (or blobs) whose size is fixed by the liquid ring width. We test the blob picture by looking at the reduction of contact time for off-centered impacts and for impacts in grooves that produce liquid ribbons where the blob size is fixed by the width of the channel.

  17. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    USGS Publications Warehouse

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

    Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time-consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ~87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.

  18. Quantification and Compensation of Eddy-Current-Induced Magnetic Field Gradients

    PubMed Central

    Spees, William M.; Buhl, Niels; Sun, Peng; Ackerman, Joseph J.H.; Neil, Jeffrey J.; Garbow, Joel R.

    2011-01-01

    Two robust techniques for quantification and compensation of eddy-current-induced magnetic-field gradients and static magnetic-field shifts (ΔB0) in MRI systems are described. Purpose-built 1-D or 6-point phantoms are employed. Both procedures involve measuring the effects of a prior magnetic-field-gradient test pulse on the phantom’s free induction decay (FID). Phantom-specific analysis of the resulting FID data produces estimates of the time-dependent, eddy-current-induced magnetic field gradient(s) and ΔB0 shift. Using Bayesian methods, the time dependencies of the eddy-current-induced decays are modeled as sums of exponentially decaying components, each defined by an amplitude and time constant. These amplitudes and time constants are employed to adjust the scanner’s gradient pre-emphasis unit and eliminate undesirable eddy-current effects. Measurement with the six-point sample phantom allows for simultaneous, direct estimation of both on-axis and cross-term eddy-current-induced gradients. The two methods are demonstrated and validated on several MRI systems with actively-shielded gradient coil sets. PMID:21764614

  19. Quantification and compensation of eddy-current-induced magnetic-field gradients.

    PubMed

    Spees, William M; Buhl, Niels; Sun, Peng; Ackerman, Joseph J H; Neil, Jeffrey J; Garbow, Joel R

    2011-09-01

    Two robust techniques for quantification and compensation of eddy-current-induced magnetic-field gradients and static magnetic-field shifts (ΔB0) in MRI systems are described. Purpose-built 1-D or six-point phantoms are employed. Both procedures involve measuring the effects of a prior magnetic-field-gradient test pulse on the phantom's free induction decay (FID). Phantom-specific analysis of the resulting FID data produces estimates of the time-dependent, eddy-current-induced magnetic field gradient(s) and ΔB0 shift. Using Bayesian methods, the time dependencies of the eddy-current-induced decays are modeled as sums of exponentially decaying components, each defined by an amplitude and time constant. These amplitudes and time constants are employed to adjust the scanner's gradient pre-emphasis unit and eliminate undesirable eddy-current effects. Measurement with the six-point sample phantom allows for simultaneous, direct estimation of both on-axis and cross-term eddy-current-induced gradients. The two methods are demonstrated and validated on several MRI systems with actively-shielded gradient coil sets. Copyright © 2011 Elsevier Inc. All rights reserved.
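    The sum-of-exponentials model underlying both abstracts above can be sketched directly; the single-component log-linear fit shown here is a hypothetical shortcut for illustration, not the papers' Bayesian multi-component analysis:

```python
import numpy as np

def multiexp(t, amps, taus):
    """Eddy-current-induced gradient modelled, as in the abstract, as a sum
    of exponentially decaying components, each defined by an amplitude and
    a time constant."""
    t = np.asarray(t, dtype=float)
    return sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))

# With a single dominant component, amplitude and time constant can be
# recovered from a log-linear least-squares fit of the decay curve.
t = np.linspace(0.0, 0.5, 200)
g = multiexp(t, amps=[1.0], taus=[0.1])
slope, intercept = np.polyfit(t, np.log(g), 1)
tau_est, amp_est = -1.0 / slope, np.exp(intercept)
```

    The fitted amplitudes and time constants are what would be dialed into the scanner's gradient pre-emphasis unit.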

  20. Experiments in Wave Record Analysis.

    DTIC Science & Technology

    1980-09-01

    manipulation of wave records in digital form to produce a power density spectrum (PDS) with great efficiency. The PDS gives a presentation of the ... instantaneous surface elevation digital points (the zero level reference). The individual period, Ti, was taken as the time difference between two successive ... CONCLUSIONS: This thesis presents the results of experiments in the analysis of ocean wave records. For this purpose 19 digitized records obtained from a wave

  1. TIMING CIRCUIT

    DOEpatents

    Heyd, J.W.

    1959-07-14

    An electronic circuit is described for precisely controlling the power delivered to a load from an a-c source, and is particularly useful as a welder timer. The power is delivered in uniform pulses, produced by a thyratron, the number of pulses being controlled by a one-shot multivibrator. The starting pulse is synchronized with the a-c line frequency so that each multivibrator cycle begins at about the same point in the a-c cycle.

  2. Role of Inflammation in MPTP-Induced Dopaminergic Neuronal Death

    DTIC Science & Technology

    2008-12-01

    treated mouse. We found that indeed both microglia and astrocytes are activated in the SNpc, that certain enzymes, such as NADPH oxidase and ... different time points in the MPTP mouse model of PD using both normal and NADPH oxidase-deficient mice was the plan. This included assessing ... superoxide radical can be produced in several different ways. First of all, DA itself is metabolized by monoamine oxidase (MAO), an outer

  3. Cellular interactions with bacterial cellulose: Polycaprolactone nanofibrous scaffolds produced by a portable electrohydrodynamic gun for point-of-need wound dressing.

    PubMed

    Aydogdu, Mehmet Onur; Altun, Esra; Crabbe-Mann, Maryam; Brako, Francis; Koc, Fatma; Ozen, Gunes; Kuruca, Serap Erdem; Edirisinghe, Ursula; Luo, C J; Gunduz, Oguzhan; Edirisinghe, Mohan

    2018-05-27

    Electrospun nanofibrous scaffolds are promising regenerative wound dressing options but have yet to be widely used in practice. The challenge is that nanofibre productions rely on bench-top apparatuses, and the delicate product integrity is hard to preserve before reaching the point of need. Timing is critically important to wound healing. The purpose of this investigation is to produce novel nanofibrous scaffolds using a portable, hand-held "gun", which enables production at the wound site in a time-dependent fashion, thereby preserving product integrity. We select bacterial cellulose, a natural hydrophilic biopolymer, and polycaprolactone, a synthetic hydrophobic polymer, to generate composite nanofibres that can tune the scaffold hydrophilicity, which strongly affects cell proliferation. Composite scaffolds made of 8 different ratios of bacterial cellulose and polycaprolactone were successfully electrospun. The morphological features and cell-scaffold interactions were analysed using scanning electron microscopy. The biocompatibility was studied using Saos-2 cell viability test. The scaffolds were found to show good biocompatibility and allow different proliferation rates that varied with the composition of the scaffolds. A nanofibrous dressing that can be accurately moulded and standardised via the portable technique is advantageous for wound healing in practicality and in its consistency through mass production. © 2018 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  4. A Factorial Analysis Study on Enzymatic Hydrolysis of Fiber Pressed Oil Palm Frond for Bioethanol Production

    NASA Astrophysics Data System (ADS)

    Hashim, F. S.; Yussof, H. W.; Zahari, M. A. K. M.; Illias, R. M.; Rahman, R. A.

    2016-03-01

    Different technologies have been developed for the conversion of lignocellulosic biomass to suitable fermentation substrates for bioethanol production. The enzymatic conversion of cellulose seems to be the most promising technology as it is highly specific and does not produce substantial amounts of unwanted byproducts. The effects of agitation speed, enzyme loading, temperature, pH and reaction time on the conversion of glucose from fiber pressed oil palm frond (FPOPF) for bioethanol production were screened by statistical analysis using response surface methodology (RSM). A half-fraction two-level factorial analysis with five factors was selected for the experimental design to determine the best enzymatic conditions that produce the maximum amount of glucose. FPOPF was pre-treated with alkali prior to enzymatic hydrolysis. The enzymatic hydrolysis was performed using a commercial enzyme Cellic CTec2. From this study, the highest glucose concentration was 9.736 g/L at 72 hours reaction time at 35 °C, pH 5.6, and 1.5% (w/v) of enzyme loading. The model obtained was significant with p-value <0.0001. It is suggested that this model has a maximum point, which is likely to be the optimum and suitable as a starting point for the optimization process.
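    A half-fraction two-level design for five factors can be generated by aliasing the fifth factor with the four-way interaction. The defining relation E = ABCD used below is the standard choice and an assumption here, the paper does not state its generator:

```python
from itertools import product

def half_fraction_5_factors():
    """A 2^(5-1) half-fraction two-level design for five factors (e.g.
    agitation speed, enzyme loading, temperature, pH, reaction time in
    coded -1/+1 units), with defining relation E = ABCD, so only 16 of
    the 32 full-factorial runs are needed."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        runs.append((a, b, c, d, a * b * c * d))  # fifth level is aliased
    return runs

design = half_fraction_5_factors()
```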

  5. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
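    The abstract's point that nonlinear dynamics can be fully explored only by simulation can be illustrated with a classic two-state-variable example; the Van der Pol oscillator below is a textbook stand-in, not an ALS model from the paper:

```python
import numpy as np

def simulate(f, x0, dt, n):
    """Explicit Euler simulation of a two-state-variable system x' = f(x),
    the kind of computer experiment the abstract says nonlinear dynamic
    models require."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.array(xs)

# Van der Pol oscillator: a two-state nonlinear model whose trajectories
# approach a stable limit cycle regardless of (nonzero) initial state --
# an oscillation not related to any input frequency.
mu = 1.0
vdp = lambda x: np.array([x[1], mu * (1 - x[0]**2) * x[1] - x[0]])
traj = simulate(vdp, [0.1, 0.0], dt=0.01, n=5000)
```

    Starting from a small perturbation, the amplitude grows until it settles on the limit cycle rather than decaying to the equilibrium point, behavior no linear two-state model can produce.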

  6. New techniques to measure cliff change from historical oblique aerial photographs and structure-from-motion photogrammetry

    USGS Publications Warehouse

    Warrick, Jonathan; Ritchie, Andy; Adelman, Gabrielle; Adelman, Ken; Limber, Patrick W.

    2017-01-01

    Oblique aerial photograph surveys are commonly used to document coastal landscapes. Here it is shown that adequate overlap may exist in these photographic records to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques. Using photographs of Fort Funston, California, from the California Coastal Records Project, imagery was combined with ground control points in a four-dimensional analysis that produced topographic point clouds of the study area’s cliffs for 5 years spanning 2002 to 2010. Uncertainty was assessed by comparing point clouds with airborne LIDAR data, and these uncertainties were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points, the root mean squared errors between the SfM and LIDAR data were less than 0.30 m (minimum = 0.18 m), and the mean systematic error was less than 0.10 m. The SfM results had several benefits over traditional airborne LIDAR in that they included point coverage on vertical-to-overhanging sections of the cliff and resulted in 10–100 times greater point densities. Time series of the SfM results revealed topographic changes, including landslides, rock falls, and the erosion of landslide talus along the Fort Funston beach. Thus, it was concluded that SfM photogrammetric techniques with historical oblique photographs allow for the extraction of useful quantitative information for mapping coastal topography and measuring coastal change. The new techniques presented here are likely applicable to many photograph collections and problems in the earth sciences.

  7. Striation patterns in serrated blade stabs to cartilage.

    PubMed

    Pounder, Derrick J; Reeder, Francesca D

    2011-05-20

    Stab wounds were made in porcine cartilage with 13 serrated knives, amongst which 4 were drop-point and 9 straight-spine; 9 coarsely serrated, 3 finely serrated and 1 with mixed pattern serrations. The walls of the stab tracks were cast with dental impression material, and the casts photographed together with the knife blades for comparison. All 13 serrated blades produced an "irregularly regular" pattern of striations on cartilage in all stabbings. Unusual and distinctive blade serration patterns produced equally distinctive wound striation patterns. A reference collection of striation patterns and corresponding blades might prove useful for striation pattern analysis. Drop-point blades produced similar striations to straight-spine blades except that the striations were not parallel but rather fan-shaped, converging towards the wound exit. The fan-shaped striation pattern characteristic of drop-point blades is explained by the initial lateral movement of the blade through the cartilage imposed by the presence of the drop point shape. It appears that the greater the overall angle of the drop point, the shorter the blade length over which the drop point occurs, and the closer the first serration is to the knife tip, the more obvious is the fan-shaped pattern. We anticipate that micro-irregularities producing individualising characteristics in non-serrated drop point blades, provided they were located at the tip opposite the drop point, should also show a fan-shaped pattern indicative of a drop point blade. The examination of the walls of stab wounds to cartilage represents an under-utilised source of forensic information to assist in knife identification. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. The role of photographic parameters in laser speckle or particle image displacement velocimetry

    NASA Technical Reports Server (NTRS)

    Lourenco, L.; Krothapalli, A.

    1987-01-01

    The parameters involved in obtaining the multiple exposure photographs in the laser speckle velocimetry method (to record the light scattering by the seeding particles) were optimized. The effects of the type, concentration, and dimensions of the tracer, the exposure conditions (time between exposures, exposure time, and number of exposures), and the sensitivity and resolution of the film on the quality of the final results were investigated, photographing an experimental flow behind an impulsively started circular cylinder. The velocity data were acquired by digital processing of Young's fringes, produced by point-by-point scanning of a photographic negative. Using the optimal photographing conditions, the errors involved in the estimation of the fringe angle and spacing were of the order of 1 percent for the spacing and +/-1 deg for the fringe orientation. The resulting accuracy in the velocity was of the order of 2-3 percent of the maximum velocity in the field.

  9. Consultation sequencing of a hospital with multiple service points using genetic programming

    NASA Astrophysics Data System (ADS)

    Morikawa, Katsumi; Takahashi, Katsuhiko; Nagasawa, Keisuke

    2018-07-01

    A hospital with one consultation room operated by a physician and several examination rooms is investigated. Scheduled patients and walk-ins arrive at the hospital; each patient goes to the consultation room first, and some of them visit other service points before consulting the physician again. The objective function consists of the sum of three weighted average waiting times. This study focuses on the problem of sequencing patients for consultation. To alleviate the stress of waiting, the consultation sequence is displayed. A dispatching rule is used to decide the sequence, and the best rules are explored by genetic programming (GP). The simulation experiments indicate that the rules produced by GP can be reduced to simple permutations of queues, and that the best permutation depends on the weights used in the objective function. This implies that a balanced allocation of waiting times can be achieved by ordering the priority among the three queues.

  10. Development and Positioning Accuracy Assessment of Single-Frequency Precise Point Positioning Algorithms by Combining GPS Code-Pseudorange Measurements with Real-Time SSR Corrections

    PubMed Central

    Kim, Miso; Park, Kwan-Dong

    2017-01-01

    We have developed a suite of real-time precise point positioning programs to process GPS pseudorange observables, and validated their performance through static and kinematic positioning tests. To correct inaccurate broadcast orbits and clocks, and account for signal delays occurring from the ionosphere and troposphere, we applied State Space Representation (SSR) error corrections provided by the Seoul Broadcasting System (SBS) in South Korea. Site displacements due to solid earth tide loading are also considered for the purpose of improving the positioning accuracy, particularly in the height direction. When the developed algorithm was tested under static positioning, Kalman-filtered solutions produced a root-mean-square error (RMSE) of 0.32 and 0.40 m in the horizontal and vertical directions, respectively. For the moving platform, the RMSE was found to be 0.53 and 0.69 m in the horizontal and vertical directions. PMID:28598403

  11. Time Resolved Near Field Optical Microscopy

    NASA Astrophysics Data System (ADS)

    Stark, J. B.

    1996-03-01

    We use broadband pulses to image the carrier dynamics of semiconductor microstructures on a 150 nm spatial scale, with a time resolution of 60 femtoseconds. Etched disks of GaAs/AlGaAs multiple quantum well material, 10 microns in diameter, are excited with a 30 fs pump from a Ti:Sapphire laser, and probed using a near-field optical microscope. The nonlinear transmission of the microdisks is measured using a double-modulation technique, sensitive to transmission changes of 0.0005 within a 150 nm diameter spot on the sample. This spot is scanned to produce an image of the sample. The nonlinear response is produced by the occupation of phase space by the excited distribution. Images of this evolving distribution are collected at time intervals following excitation, measuring the relaxation of carriers at each point in the microdisk. The resulting data can be viewed as a movie of the carrier dynamics of nonequilibrium distributions in excited semiconductor structures. Work done in collaboration with U. Mohideen and R. E. Slusher.

  12. Use of TD-GC–TOF-MS to assess volatile composition during post-harvest storage in seven accessions of rocket salad (Eruca sativa)

    PubMed Central

    Bell, Luke; Spadafora, Natasha D.; Müller, Carsten T.; Wagstaff, Carol; Rogers, Hilary J.

    2016-01-01

    An important step in breeding for nutritionally enhanced varieties is determining the effects of the post-harvest supply chain on phytochemicals and the changes in VOCs produced over time. TD-GC–TOF-MS was used and a technique for the extraction of VOCs from the headspace using portable tubes is described. Forty-two compounds were detected; 39 were identified by comparison to NIST libraries. Thirty-five compounds had not been previously reported in Eruca sativa. Seven accessions were assessed for changes in headspace VOCs over 7 days. Relative amounts of VOCs across 3 time points were significantly different – isothiocyanate-containing molecules being abundant on ‘Day 0’. Each accession showed differences in proportions/types of volatiles produced on each day. PCA revealed a separation of VOC profiles according to the day of sampling. Changes in VOC profiles over time could provide a tool for assessment of shelf life. PMID:26471601

  13. A new in vivo animal model to create intervertebral disc degeneration characterized by MRI, radiography, CT/discogram, biochemistry, and histology.

    PubMed

    Zhou, HaoWei; Hou, ShuXun; Shang, WeiLin; Wu, WenWen; Cheng, Yao; Mei, Fang; Peng, BaoGan

    2007-04-15

    A new in vivo sheep model was developed that produced disc degeneration through the injection of 5-bromodeoxyuridine (BrdU) into the intervertebral disc. This process was studied using magnetic resonance imaging (MRI), radiography, CT/discogram, histology, and biochemistry. To develop a sheep model of intervertebral disc degeneration that more faithfully mimics the pathologic hallmarks of human intervertebral disc degeneration. Recent studies have shown age-related alterations in proteoglycan structure and organization in human intervertebral discs. An animal model that makes use of age-related changes in disc cells can be beneficial over other more invasive degenerative models that involve directly damaging the matrix of disc tissue. Twelve sheep were injected with BrdU or vehicle (phosphate-buffered saline) into the central region of separate lumbar discs. Intact discs were used as controls. At the 2-, 6-, 10-, and 14-week time points, discs underwent MRI, radiography, histology, and biochemical analyses. A CT/discogram study was performed at the 14-week time point. MRI demonstrated a progressive loss of T2-weighted signal intensity at BrdU-injected discs over the 14-week study period. Radiographic findings included osteophyte formation and disc space narrowing by 10 weeks after BrdU treatment. CT discography demonstrated internal disc disruption in several BrdU-treated discs at the 14-week time point. Histology showed a progressive loss of the normal architecture and cell density of discs from the 2-week time point to the 14-week time point. A progressive loss of cell proliferation capacity, water content, and proteoglycans was also documented. BrdU injection into the central region of sheep discs resulted in degeneration of intervertebral discs. This progressive, degenerative process was confirmed using MRI and histology and by observing changes in biochemistry. Degeneration occurred in a manner that was similar to that observed in human disc degeneration.

  14. Influence of temperature and reaction time on the conversion of polystyrene waste to pyrolysis liquid oil.

    PubMed

    Miandad, R; Nizami, A S; Rehan, M; Barakat, M A; Khan, M I; Mustafa, A; Ismail, I M I; Murphy, J D

    2016-12-01

    This paper aims to investigate the effect of temperature and reaction time on the yield and quality of liquid oil produced from a pyrolysis process. Polystyrene (PS) type plastic waste was used as a feedstock in a small pilot scale batch pyrolysis reactor. At 400°C with a reaction time of 75 min, the gas yield was 8% by mass, the char yield was 16% by mass, while the liquid oil yield was 76% by mass. Raising the temperature to 450°C increased the gas production to 13% by mass, reduced the char production to 6.2% and increased the liquid oil yield to 80.8% by mass. The optimum temperature and reaction time were found to be 450°C and 75 min. The liquid oil at optimum conditions had a dynamic viscosity of 1.77 mPa·s, kinematic viscosity of 1.92 cSt, a density of 0.92 g/cm³, a pour point of -60°C, a freezing point of -64°C, a flash point of 30.2°C and a high heating value (HHV) of 41.6 MJ/kg, which is similar to conventional diesel. The gas chromatography with mass spectrophotometry (GC-MS) analysis showed that the liquid oil contains mainly styrene (48%), toluene (26%) and ethyl-benzene (21%) compounds. Copyright © 2016 Elsevier Ltd. All rights reserved.
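    The reported yields at the two temperatures can be sanity-checked with a trivial mass balance. The values below are transcribed from the abstract; the closure to 100% is an arithmetic check only, not a statement about the original experiment:

```python
# Reported pyrolysis product yields (percent by mass) at two temperatures,
# each with a 75 min reaction time.
runs = {
    "400C/75min": {"gas": 8.0, "char": 16.0, "oil": 76.0},
    "450C/75min": {"gas": 13.0, "char": 6.2, "oil": 80.8},
}

for name, y in runs.items():
    total = sum(y.values())  # should close to ~100% by mass
    print(f"{name}: oil {y['oil']}%, closure {total:.1f}%")
```

Both runs close to 100%, and the higher temperature trades char for gas and oil, consistent with the stated optimum of 450°C.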

  15. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in one-third to one-half the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
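    The convergence comparison above is expressed as orders of magnitude of residual-norm reduction. As a hedged illustration of that metric only, here is plain Gauss-Seidel on a toy diagonally dominant linear system (not the LUSGS scheme or the flow solver described in the abstract):

```python
import math

def gauss_seidel(A, b, sweeps):
    """Gauss-Seidel sweeps on a small linear system, recording the
    L2 norm of the residual b - A x after each sweep."""
    n = len(b)
    x = [0.0] * n
    norms = []
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        res = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norms.append(math.sqrt(sum(v * v for v in res)))
    return x, norms

# Toy tridiagonal system with exact solution x = (1, 2, 3)
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]
x, norms = gauss_seidel(A, b, 10)
drop = math.log10(norms[0] / norms[-1])  # orders of magnitude of reduction
print(x, f"residual dropped {drop:.1f} orders of magnitude")
```

The "eight-order-of-magnitude drop" criterion in the abstract is exactly this kind of before/after ratio of residual norms, applied to the energy equation of the flow solver.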

  16. GPU surface extraction using the closest point embedding

    NASA Astrophysics Data System (ADS)

    Kim, Mark; Hansen, Charles

    2015-01-01

    Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three-dimensional numerical PDE solver on two-dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known methods for conformal multi-material mesh extraction. The resulting speed-ups gained in this implementation can reduce the time from labeled data to mesh from hours to minutes and benefit users, such as bioengineers, who employ triangular and tetrahedral meshes.

  17. Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1995-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  18. Detection and modeling of the acoustic perturbation produced by the launch of the Space Shuttle using the Global Positioning System

    NASA Astrophysics Data System (ADS)

    Bowling, T. J.; Calais, E.; Dautermann, T.

    2010-12-01

    Rocket launches are known to produce infrasonic pressure waves that propagate into the ionosphere, where coupling between electrons and neutral particles induces fluctuations in ionospheric electron density observable in GPS measurements. We have detected ionospheric perturbations following the launch of space shuttle Atlantis on 11 May 2009 using an array of continuously operating GPS stations across the southeastern coast of the United States and in the Caribbean. Detections are prominent to the south of the westward shuttle trajectory in the area of maximum coupling between the acoustic wave and Earth's magnetic field, move at speeds consistent with the speed of sound, and show coherency between stations covering a large geographic range. We model the perturbation as an explosive source located at the point of closest approach between the shuttle path and each sub-ionospheric point. The neutral pressure wave is propagated using ray tracing, resultant changes in electron density are calculated at points of intersection between rays and the satellite-to-receiver line-of-sight, and synthetic integrated electron content values are derived. Arrival times of the observed and synthesized waveforms match closely, with discrepancies related to errors in the a priori sound speed model used for ray tracing. Current work includes the estimation of source location and energy.

  19. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
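    The Arrhenius extrapolation at the heart of this comparison can be sketched compactly. The temperatures and degradation rate constants below are hypothetical stand-ins, not the paper's data; the fit is the linear form (ln k vs. 1/T) by ordinary least squares:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def fit_arrhenius(temps_c, rates):
    """Least-squares fit of ln k = ln A - Ea/(R*T).
    Returns (lnA, Ea) with Ea in J/mol."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sxy / sxx
    return ybar - slope * xbar, -slope * R

# Hypothetical degradant-growth rate constants (%/day) at accelerated temps
temps = [50.0, 60.0, 70.0]
rates = [0.010, 0.032, 0.095]
lnA, Ea = fit_arrhenius(temps, rates)
k25 = math.exp(lnA - Ea / (R * 298.15))  # extrapolated ambient rate
print(f"Ea = {Ea/1000:.1f} kJ/mol, predicted k(25 C) = {k25:.2e} %/day")
```

Dividing the specification limit by `k25` would give a crude shelf-life estimate; the paper's point is that isoconversion methods, which bracket the specification limit at each temperature, are more reliable than this kind of direct rate extrapolation.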

  20. Acquiring Research-grade ALSM Data in the Commercial Marketplace

    NASA Astrophysics Data System (ADS)

    Haugerud, R. A.; Harding, D. J.; Latypov, D.; Martinez, D.; Routh, S.; Ziegler, J.

    2003-12-01

    The Puget Sound Lidar Consortium, working with TerraPoint, LLC, has procured a large volume of ALSM (topographic lidar) data for scientific research. Research-grade ALSM data can be characterized by their completeness, density, and accuracy. Complete data include, at a minimum, X, Y, Z, time, and classification (ground, vegetation, structure, blunder) for each laser reflection. Off-nadir angle and return number for multiple returns are also useful. We began with a pulse density of 1/sq m, and after limited experiments still find this density satisfactory in the dense second-growth forests of western Washington. Lower pulse densities would have produced unacceptably limited sampling in forested areas and aliased some topographic features. Higher pulse densities do not produce markedly better topographic models, in part because of limitations of reproducibility between the overlapping survey swaths used to achieve higher density. Our experience in a variety of forest types demonstrates that the fraction of pulses that produce ground returns varies with vegetation cover, laser beam divergence, laser power, and detector sensitivity, but we have not quantified this relationship. The most significant operational limits on vertical accuracy of ALSM appear to be instrument calibration and the accuracy with which returns are classified as ground or vegetation. TerraPoint has recently implemented in-situ calibration using overlapping swaths (Latypov and Zosse, 2002, see http://www.terrapoint.com/News_damirACSM_ASPRS2002.html). On the consumer side, we routinely perform a similar overlap analysis to produce maps of relative Z error between swaths; we find that in bare, low-slope regions the in-situ calibration has reduced this internal Z error to 6-10 cm RMSE. Comparison with independent ground control points commonly illuminates inconsistencies in how GPS heights have been reduced to orthometric heights.
Once these inconsistencies are resolved, it appears that the internal errors are the bulk of the error of the survey. The error maps suggest that with in-situ calibration, minor time-varying errors with a period of circa 1 sec are the largest remaining source of survey error. For forested terrain, limited ground penetration and errors in return classification can severely limit the accuracy of resulting topographic models. Initial work by Haugerud and Harding demonstrated the feasibility of fully-automatic return classification; however, TerraPoint has found that better results can be obtained more effectively with 3rd-party classification software that allows a mix of automated routines and human intervention. Our relationship has been evolving since early 2000. Important aspects of this relationship include close communication between data producer and consumer, a willingness to learn from each other, significant technical expertise and resources on the consumer side, and continued refinement of achievable, quantitative performance and accuracy specifications. Most recently we have instituted a slope-dependent Z accuracy specification that TerraPoint first developed as a heuristic for surveying mountainous terrain in Switzerland. We are now working on quantifying the internal consistency of topographic models in forested areas, using a variant of overlap analysis, and standards for the spatial distribution of internal errors.

  1. Evaluation of a time efficient immunization strategy for anti-PAH antibody development

    PubMed Central

    Li, Xin; Kaattari, Stephen L.; Vogelbein, Mary Ann; Unger, Michael A.

    2016-01-01

    The development of monoclonal antibodies (mAb) with affinity to small molecules can be a time-consuming process. To evaluate shortening the time for mAb production, we examined mouse antisera at different time points post-immunization to measure titer and to evaluate the affinity to the immunogen PBA (pyrene butyric acid). Fusions were also conducted temporally to evaluate antibody production success at various time periods. We produced anti-PBA antibodies 7 weeks post-immunization and selected for anti-PAH reactivity during the hybridoma screening process. Moreover, there were no obvious sensitivity differences relative to antibodies screened from a more traditional 18 week schedule. Our results demonstrate a more time efficient immunization strategy for anti-PAH antibody development that may be applied to other small molecules. PMID:27282486

  2. Predictable communities of soil bacteria in relation to nutrient concentration and successional stage in a laboratory culture experiment.

    PubMed

    Song, Woojin; Kim, Mincheol; Tripathi, Binu M; Kim, Hyoki; Adams, Jonathan M

    2016-06-01

    It is difficult to understand the processes that structure immensely complex bacterial communities in the soil environment, necessitating a simplifying experimental approach. Here, we set up a microcosm culturing experiment with soil bacteria, at a range of nutrient concentrations, and compared these over time to understand the relationship between soil bacterial community structure and time/nutrient concentration. DNA from each replicate was analysed using HiSeq2000 Illumina sequencing of the 16S rRNA gene. We found that each nutrient treatment, and each time point during the experiment, produces characteristic bacterial communities that occur predictably between replicates. It is clear that within the context of this experiment, many soil bacteria have distinct niches from one another, in terms of both nutrient concentration, and successional time point since a resource first became available. This fine niche differentiation may in part help to explain the coexistence of a diversity of bacteria in soils. In this experiment, we show that the unimodal relationship between nutrient concentration/time and species diversity often reported in communities of larger organisms is also evident in microbial communities. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.

  3. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  4. Ask your doctor: the construction of smoking in advertising posters produced in 1946 and 2004.

    PubMed

    Street, Annette F

    2004-12-01

    This paper examines two full-page A3 poster advertisements in mass magazines produced at two time points over a 60-year period depicting smoking and its effects, with particular relation to lung cancer. Each poster represents the social and cultural milieu of its time. The writings of Foucault are used to explore the disciplinary technologies of sign systems as depicted in the two posters. The relationships between government, tobacco companies and drug companies and the technologies of production are examined with regard to the development of smoking cessation strategies. The technologies of power are associated with the constructions of risk and lifestyles. The technologies of the self locate smokers as culpable subjects responsible for their individual health. Finally, the meshing of these technologies places the doctor in the frame as "authoritative knower" and representative of expert systems.

  5. The effects of green kiwifruit combined with isoflavones on equol production, bone turnover and gut microflora in healthy postmenopausal women.

    PubMed

    Kruger, Marlena C; Middlemiss, Catherine; Katsumata, Shinichi; Tousen, Yuko; Ishimi, Yoshiko

    2018-01-01

    Isoflavone (daidzein and genistein) interventions in postmenopausal women have produced inconsistent skeletal benefits, partly due to population heterogeneity in daidzein metabolism to equol by enteric bacteria. This study assessed changes in microflora and bone turnover in response to isoflavone and kiwifruit supplementation in New Zealand postmenopausal women. Healthy women 1-10 years post-menopause were randomly allocated to group A (n=16) or B (n=17) for a 16-week crossover trial. Two consecutive 6-week treatment periods had a 2-week lead-in period at intervention commencement and a 2-week washout period between treatments. Treatments prescribed either (1) daily isoflavone supplementation (50 mg/day aglycone daidzein and genistein) alone, or (2) the same supplementation combined with two green kiwifruit. At treatment baseline and end-point (four time points) the serum bone markers C-telopeptide of type I collagen (CTx) and undercarboxylated osteocalcin (ucOC), and serum and urinary daidzein and equol, were measured. Changes in gut microflora were monitored in a subgroup of the women. Equol producers made up 30% of this study population (equol producers n=10; non-equol producers n=23), with serum equol rising significantly in equol producers. Serum ucOC decreased by 15.5% (p<0.05) after the kiwifruit and isoflavone treatment. There were no changes in serum CTx or in the diversity of the gut microflora. 50 mg/day isoflavones did not reduce bone resorption, but kiwifruit and isoflavone consumption decreased serum ucOC levels, possibly due to vitamin K1 and/or other bioactive components of green kiwifruit.

  6. Fluorescence technique for on-line monitoring of state of hydrogen-producing microorganisms

    DOEpatents

    Seibert, Michael [Lakewood, CO; Makarova, Valeriya [Golden, CO; Tsygankov, Anatoly A [Pushchino, RU; Rubin, Andrew B [Moscow, RU

    2007-06-12

    In situ fluorescence method to monitor state of sulfur-deprived algal culture's ability to produce H.sub.2 under sulfur depletion, comprising: a) providing sulfur-deprived algal culture; b) illuminating culture; c) measuring onset of H.sub.2 percentage in produced gas phase at multiple times to ascertain point immediately after anaerobiosis to obtain H.sub.2 data as function of time; and d) determining any abrupt change in three in situ fluorescence parameters: i) increase in F.sub.t (steady-state level of chlorophyll fluorescence in light-adapted cells); ii) decrease in F.sub.m' (maximal saturating-light-induced fluorescence level in light-adapted cells); and iii) decrease in .DELTA.F/F.sub.m'=(F.sub.m'-F.sub.t)/F.sub.m' (calculated photochemical activity of photosystem II (PSII)), signaling full reduction of the plastoquinone pool between PSII and PSI, which indicates the start of anaerobic conditions that induce synthesis of the hydrogenase enzyme for subsequent H.sub.2 production, and marking oxidation of the plastoquinone pool as the main factor regulating H.sub.2 production under sulfur depletion.
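    The third monitored parameter reduces to the one-line formula ΔF/Fm' = (Fm' − Ft)/Fm'. A minimal sketch with hypothetical fluorescence readings (the numbers are illustrative, not taken from the patent):

```python
def psii_yield(f_t, f_m_prime):
    """Photochemical activity of PSII: dF/Fm' = (Fm' - Ft) / Fm'."""
    return (f_m_prime - f_t) / f_m_prime

# Hypothetical readings: aerobic culture vs. onset of anaerobiosis.
# A rise in Ft together with a fall in Fm' drives the yield down,
# which in the method above signals the transition.
before = psii_yield(f_t=0.40, f_m_prime=1.00)
after  = psii_yield(f_t=0.55, f_m_prime=0.70)
print(f"dF/Fm' before: {before:.3f}, after: {after:.3f}")
```

The abrupt drop in this quantity, alongside the increase in Ft and decrease in Fm', is what the monitoring method watches for.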

  7. A new hydrodynamic analysis of double layers

    NASA Technical Reports Server (NTRS)

    Hora, Heinrich

    1987-01-01

    A genuine two-fluid model of plasmas with collisions permits the calculation of dynamic (not necessarily static) electric fields and double layers inside of plasmas including oscillations and damping. For the first time a macroscopic model for coupling of electromagnetic and Langmuir waves was achieved with realistic damping. Starting points were laser-produced plasmas showing very high dynamic electric fields in nonlinear force-produced cavitous and inverted double layers in agreement with experiments. Applications for any inhomogeneous plasma as in laboratory or in astrophysical plasmas can then be followed up by a transparent hydrodynamic description. Results are the rotation of plasmas in magnetic fields and a new second harmonic resonance, explanation of the measured inverted double layers, explanation of the observed density-independent, second harmonics emission from laser-produced plasmas, and a laser acceleration scheme by the very high fields of the double layers.

  8. Finding Dantzig Selectors with a Proximity Operator based Fixed-point Algorithm

    DTIC Science & Technology

    2014-11-01

    experiments showed that this method usually outperforms the method in [2] in terms of CPU time while producing solutions of comparable quality. The... method proposed in [19]. To alleviate the difficulty caused by the subproblem without a closed-form solution, a linearized ADM was proposed for the...a closed-form solution, but the β-related subproblem does not and is solved approximately by using the nonmonotone gradient method in [18]. The

  9. Pressure induced ageing of polymers

    NASA Technical Reports Server (NTRS)

    Emri, I.; Knauss, W. G.

    1988-01-01

    The nonlinearly viscoelastic response of an amorphous homopolymer is considered under aspects of time dependent free volume behavior. In contrast to linearly viscoelastic solids, this model couples shear and volume deformation through a shift function which influences the rate of molecular relaxation or creep. Sample computations produce all those qualitative features one observes normally in uniaxial tension including the rate dependent formation of a yield point as a consequence of the history of an imposed pressure.

  10. High Annular Resolution Stellar Interferometry.

    DTIC Science & Technology

    1985-07-31

    correlation is separable, C3(Δx,Δt) = C1(Δx)C2(Δt), then the spatial structure and time evolution are uncorrelated and under these conditions one would...the following one. Reference 6 points out the bias obtained on the shape of the normalised spatial correlation function of dynamic speckle under the...MS student) 1982 H Daum (U1 assistant) 1981 C S ________ 20. 6. MUPID This appendix contains copies of the 20 publications produced under this

  11. A Simulation on Organizational Communication Patterns During a Terrorist Attack

    DTIC Science & Technology

    2008-06-01

    and the Air Support Headquarters. The call is created at the time of attack, and it automatically includes a request for help. Reliability of...communication conditions. 2. Air Support call: This call is produced for just the Headquarters of Air Component, only in case of armed attacks. The request can...estimated speed of armored vehicles in combat areas (West-Point Organization, 2002). When a call for air support is received, an information

  12. The Hotel Industry's Role in Combatting Sex Trafficking

    DTIC Science & Technology

    2017-12-01

    expectations that society has of organizations at a given point in time.”10 Carroll posits that businesses must first, “produce goods and services that...reuse programs that offer guests the option to forego daily laundering services are now commonly used throughout the lodging industry to conserve...million in funding—which represents an increase of $5.9 million over FY2015—to 33 victim service providers.47 Government funding offers agencies and NGOs

  13. Evolutionary behaviour, trade-offs and cyclic and chaotic population dynamics.

    PubMed

    Hoyle, Andy; Bowers, Roger G; White, Andy

    2011-05-01

    Many studies of the evolution of life-history traits assume that the underlying population dynamical attractor is a stable point equilibrium. However, evolutionary outcomes can change significantly in different circumstances. We present an analysis based on adaptive dynamics of a discrete-time demographic model involving a trade-off whose shape is also an important determinant of evolutionary behaviour. We derive an explicit expression for the fitness in the cyclic region and consequently present an adaptive dynamic analysis which is algebraic. We do this fully in the region of 2-cycles and (using a symbolic package) almost fully for 4-cycles. Simulations illustrate and verify our results. With equilibrium population dynamics, trade-offs with accelerating costs produce a continuously stable strategy (CSS), whereas trade-offs with decelerating costs produce a non-ES repellor. The transition to 2-cycles produces a discontinuous change: the appearance of an intermediate region in which branching points occur. The size of this region decreases as we move through the region of 2-cycles. There is a further discontinuous fall in the size of the branching region during the transition to 4-cycles. We extend our results numerically and with simulations to higher-period cycles and chaos. Simulations show that chaotic population dynamics can evolve from equilibrium and vice versa.
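
    The authors' demographic model is not reproduced in this record; as a generic stand-in (not their model), the one-parameter Ricker map exhibits the same equilibrium → 2-cycle → 4-cycle → chaos sequence that the analysis moves through:

```python
import math

def ricker_attractor(r, n0=0.5, burn=2000, sample=64):
    """Iterate the Ricker map N' = N*exp(r*(1-N)) past its transient
    and return the distinct values the orbit then visits."""
    n = n0
    for _ in range(burn):
        n = n * math.exp(r * (1.0 - n))
    seen = set()
    for _ in range(sample):
        n = n * math.exp(r * (1.0 - n))
        seen.add(round(n, 5))
    return sorted(seen)

# Attractor size grows 1 -> 2 -> 4 -> many as r increases through
# the period-doubling cascade (bifurcation values are the map's own,
# not those of the paper's model).
for r in (1.5, 2.3, 2.6, 3.0):
    print(r, len(ricker_attractor(r)))
```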

  14. Simulation of turbulent shear flows at Stanford and NASA-Ames - What can we do and what have we learned?

    NASA Technical Reports Server (NTRS)

    Reynolds, W. C.

    1983-01-01

    The capabilities and limitations of large eddy simulation (LES) and full turbulence simulation (FTS) are outlined. It is pointed out that LES, although limited at the present time by the need for periodic boundary conditions, produces large-scale flow behavior in general agreement with experiments. What is more, FTS computations produce small-scale behavior that is consistent with available experiments. The importance of the development work being done on the National Aerodynamic Simulator is emphasized. Studies at present are limited to situations in which periodic boundary conditions can be applied on boundaries of the computational domain where the flow is turbulent.

  15. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.

    1984-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two-dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one-dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295.
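
    The fast polynomial transform itself is not shown in this record; as a concept check only, a direct (brute-force) 2-D cyclic correlation makes the wrap-around property explicit on a toy 4×4 array, with the point-target response taken as a delta function (hypothetical data, not Seasat echoes):

```python
def cyclic_correlate_2d(data, kernel):
    """Brute-force 2-D cyclic (circular) correlation: indices wrap
    around, so no zero-padding edge distortion is introduced."""
    rows, cols = len(data), len(data[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for u in range(rows):
                for v in range(cols):
                    s += data[(i + u) % rows][(j + v) % cols] * kernel[u][v]
            out[i][j] = s
    return out

# Correlating an echo with a delta-function point response returns
# the echo itself: the point target stays at its position (2, 2).
echo = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
delta = [[1, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
img = cyclic_correlate_2d(echo, delta)
```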

  16. Cosmological perturbations of axion with a dynamical decay constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Takeshi; INFN, Sezione di Trieste, Via Bonomea 265, 34136 Trieste; Takahashi, Fuminobu

    2016-08-25

    A QCD axion with a time-dependent decay constant has been known to be able to accommodate high-scale inflation without producing topological defects or too large isocurvature perturbations on CMB scales. We point out that a dynamical decay constant also has the effect of enhancing the small-scale axion isocurvature perturbations. The enhanced axion perturbations can even exceed the periodicity of the axion potential, and thus lead to the formation of axionic domain walls. Unlike the well-studied axionic walls, the walls produced from the enhanced perturbations are not bounded by cosmic strings, and thus would overclose the universe independently of the number of degenerate vacua along the axion potential.

  17. Performance analysis of ‘Perturb and Observe’ and ‘Incremental Conductance’ MPPT algorithms for PV system

    NASA Astrophysics Data System (ADS)

    Lodhi, Ehtisham; Lodhi, Zeeshan; Noman Shafqat, Rana; Chen, Fieda

    2017-07-01

    Photovoltaic (PV) systems usually employ maximum power point tracking (MPPT) techniques to increase their efficiency. The performance of a PV system can be boosted by operating it at its maximum power point, so that maximal power is delivered to the load. The efficiency of a PV system depends upon irradiance, temperature and array architecture. A PV array exhibits a non-linear V-I curve, and the maximum power point on the V-P curve varies with changing environmental conditions. MPPT methods guarantee that a PV module is regulated at the reference voltage and makes full use of the maximum available output power. This paper presents an analysis of two widely employed MPPT techniques, Perturb and Observe (P&O) and Incremental Conductance (INC). Their performance is evaluated and compared through theoretical analysis and digital simulation, on the basis of response time and efficiency under varying irradiance and temperature conditions, using Matlab/Simulink.
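
    The paper's Simulink models are not available here; a minimal sketch of the P&O loop, using a hypothetical single-peak power curve (maximum power point assumed at 17 V), shows the perturb/observe/reverse logic the comparison rests on:

```python
def pv_power(v):
    """Hypothetical single-peak P-V curve with its maximum power
    point at v = 17.0 (illustration only, not a real module model)."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v0=12.0, step=0.1, iters=200):
    """Classic P&O: keep perturbing the operating voltage in the
    direction that raised power; reverse when power falls."""
    v, direction = v0, +1
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped -> reverse perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles into oscillation near 17 V
```

The steady-state oscillation around the maximum power point is the known drawback of P&O that the Incremental Conductance method is designed to reduce.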

  18. Brightness and magnetic evolution of solar coronal bright points

    NASA Astrophysics Data System (ADS)

    Ugarte Urra, Ignacio

    This thesis presents a study of the brightness and magnetic evolution of several extreme ultraviolet (EUV) coronal bright points (hereafter BPs). The study was carried out using several instruments on board the Solar and Heliospheric Observatory, supported by high resolution imaging from the Transition Region And Coronal Explorer. The results confirm that, down to 1" resolution, BPs are made of small loops with lengths of ≈6 Mm and cross-sections of ≈2 Mm. The loops are very dynamic, evolving on time scales as short as 1-2 minutes. This is reflected in a highly variable EUV response, with fluctuations highly correlated in spectral lines at transition region temperatures, but not always at coronal temperatures. A wavelet analysis of the intensity variations reveals the existence of quasi-periodic oscillations with periods ranging from 400 to 1000 s, in the range of periods characteristic of the chromospheric network. The link between BPs and network bright points is discussed, as well as the interpretation of the oscillations in terms of global acoustic modes of closed magnetic structures. A comparison of the magnetic flux evolution of the magnetic polarities to the EUV flux changes is also presented. Throughout their lifetime, the intrinsic EUV emission of BPs is found to be dependent on the total magnetic flux of the polarities. On short time scales, co-spatial and co-temporal coronal images and magnetograms reveal the signature of heating events that produce sudden EUV brightenings simultaneous with magnetic flux cancellations. This is interpreted in terms of magnetic reconnection events. Finally, an electron density study of six coronal bright points produces values of ≈1.6×10⁹ cm⁻³, closer to active region plasma than to quiet Sun. The analysis of a large coronal loop (half length of 72 Mm) introduces the discussion of the prospects of future plasma diagnostics of BPs with forthcoming solar missions.

  19. Real-Time GNSS Positioning with JPL's new GIPSYx Software

    NASA Astrophysics Data System (ADS)

    Bar-Sever, Y. E.

    2016-12-01

    The JPL Global Differential GPS (GDGPS) System is now producing real-time orbit and clock solutions for GPS, GLONASS, BeiDou, and Galileo. The operations are based on JPL's next-generation geodetic analysis and data processing software, GIPSYx (also known as RTGx). We will examine the impact of the nascent GNSS constellations on real-time kinematic positioning for earthquake monitoring, and assess the marginal benefits from each constellation. We will discuss the options for signal selection, inter-signal bias modeling, and estimation strategies in the context of real-time point positioning. We will provide a brief overview of the key features and attributes of GIPSYx. Finally, we will describe the current natural hazard monitoring services of the GDGPS System.

  20. 40 CFR 63.761 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... means hydrocarbon (petroleum) liquid with an initial producing gas-to-oil ratio (GOR) less than 0.31... of hydrocarbon liquids or natural gas: after processing and/or treatment in the producing operations... point at which such liquids or natural gas enters a natural gas processing plant is a point of custody...

  1. 40 CFR 63.761 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... means hydrocarbon (petroleum) liquid with an initial producing gas-to-oil ratio (GOR) less than 0.31... of hydrocarbon liquids or natural gas: after processing and/or treatment in the producing operations... point at which such liquids or natural gas enters a natural gas processing plant is a point of custody...

  2. AN INVESTIGATION OF TIME LAG MAPS USING THREE-DIMENSIONAL SIMULATIONS OF HIGHLY STRATIFIED HEATING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winebarger, Amy R.; Lionello, Roberto; Downs, Cooper

    2016-11-10

    The location and frequency of coronal energy release provide a significant constraint on the coronal heating mechanism. The evolution of the intensity observed in coronal structures found from time lag analysis of Atmospheric Imaging Assembly (AIA) data has been used to argue that heating must occur sporadically. Recently, we have demonstrated that quasi-steady, highly stratified (footpoint) heating can produce results qualitatively consistent with the evolution of observed coronal structures. The goals of this paper are to demonstrate that time lag analysis of 3D simulations of footpoint heating are qualitatively consistent with time lag analysis of observations and to use the 3D simulations to further understand whether time lag analysis is a useful tool in defining the evolution of coronal structures. We find the time lag maps generated from simulated data are consistent with the observed time lag maps. We next investigate several example points. In some cases, the calculated time lag reflects the evolution of a unique loop along the line of sight, though there may be additional evolving structures along the line of sight. We confirm that using the multi-peak AIA channels can produce time lags that are difficult to interpret. We suggest using a different high temperature channel, such as an X-ray channel. Finally, we find that multiple evolving structures along the line of sight can produce time lags that do not represent the physical properties of any structure along the line of sight, although the cross-correlation coefficient of the lightcurves is high. Considering the projected geometry of the loops may reduce some of the line-of-sight confusion.
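
    As an illustration of the time lag technique itself (not the paper's AIA pipeline), the lag between two channels can be taken as the shift that maximizes their cross-correlation; the light curves below are made-up samples:

```python
def time_lag(curve_a, curve_b):
    """Return the lag (in samples) that maximizes the cross-correlation
    of two equal-length light curves; a positive lag means features in
    curve_b occur later than in curve_a."""
    n = len(curve_a)
    best_lag, best_c = 0, float("-inf")
    for lag in range(-(n - 1), n):
        pairs = [(curve_a[i], curve_b[i + lag])
                 for i in range(n) if 0 <= i + lag < n]
        c = sum(a * b for a, b in pairs) / len(pairs)
        if c > best_c:
            best_c, best_lag = c, lag
    return best_lag

# Made-up light curves: the same pulse appears 3 samples later in
# the "cool" channel, as it would for a cooling coronal loop.
hot  = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0]
cool = [0, 0, 0, 0, 0, 1, 4, 1, 0, 0]
lag = time_lag(hot, cool)
```

As the abstract notes, a high cross-correlation coefficient alone does not guarantee the recovered lag describes any single physical structure along the line of sight.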

  3. Phase-shifting point diffraction interferometer

    DOEpatents

    Medecki, H.

    1998-11-10

    Disclosed is a point diffraction interferometer for evaluating the quality of a test optic. In operation, the point diffraction interferometer includes a source of radiation, the test optic, a beam divider, a reference wave pinhole located at an image plane downstream from the test optic, and a detector for detecting an interference pattern produced between a reference wave emitted by the pinhole and a test wave emitted from the test optic. The beam divider produces separate reference and test beams which focus at different laterally separated positions on the image plane. The reference wave pinhole is placed at a region of high intensity (e.g., the focal point) for the reference beam. This allows the reference wave to be produced at a relatively high intensity. Also, the beam divider may include elements for phase shifting one or both of the reference and test beams. 8 figs.

  4. Phase-shifting point diffraction interferometer

    DOEpatents

    Medecki, Hector

    1998-01-01

    Disclosed is a point diffraction interferometer for evaluating the quality of a test optic. In operation, the point diffraction interferometer includes a source of radiation, the test optic, a beam divider, a reference wave pinhole located at an image plane downstream from the test optic, and a detector for detecting an interference pattern produced between a reference wave emitted by the pinhole and a test wave emitted from the test optic. The beam divider produces separate reference and test beams which focus at different laterally separated positions on the image plane. The reference wave pinhole is placed at a region of high intensity (e.g., the focal point) for the reference beam. This allows the reference wave to be produced at a relatively high intensity. Also, the beam divider may include elements for phase shifting one or both of the reference and test beams.

  5. Development of Control System for Hydrolysis Crystallization Process

    NASA Astrophysics Data System (ADS)

    Wan, Feng; Shi, Xiao-Ming; Feng, Fang-Fang

    2016-05-01

    The sulfate method for producing titanium dioxide is commonly used in China, but the crystallization time is determined manually, which leads to large errors and is harmful to the operators. In this paper a new method for determining the crystallization time is proposed. The method adopts a red laser as the light source, a silicon photocell as the reflected-light receiving component, and optical fiber as the light transmission element; a differential algorithm in the software determines the crystallization time. The experimental results show that the method can determine the crystallization point automatically and accurately, can replace manual labor and protect the health of workers, and can be applied in practice.

  6. A far-field non-reflecting boundary condition for two-dimensional wake flows

    NASA Technical Reports Server (NTRS)

    Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli

    1995-01-01

    Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of the linearized flow equations about a steady-state far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular and local time-stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, of the numerical calculation. In addition, the solution produced is smoother in the far field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.

  7. PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman; Kenny, Sean P.; Giesy, Daniel P.

    1995-01-01

    PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.

  8. Optimisation techniques in vaginal cuff brachytherapy.

    PubMed

    Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A

    2009-11-01

    The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method are able to produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high dose rate iridium-192 (Ir(192)) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. Keeping the Vk dose higher than 95% and the Rg dose less than 85% of the prescribed dose was intended. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p<0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.

  9. The evolution of risk perceptions related to bovine spongiform encephalopathy--Canadian consumer and producer behavior.

    PubMed

    Yang, Jun; Goddard, Ellen

    2011-01-01

    In this study the dynamics of risk perceptions related to bovine spongiform encephalopathy (BSE) held by Canadian consumers and cow-calf producers were evaluated. Since the first domestic case of BSE in 2003, Canadian consumers and cow-calf producers have needed to make decisions on whether or not their purchasing/production behavior should change. Such changes in their behavior may relate to their levels of risk perceptions about BSE, risk perceptions that may be evolving over time and be affected by BSE media information available. An econometric analysis of the behavior of consumers and cow-calf producers might identify the impacts of evolving BSE risk perceptions. Risk perceptions related to BSE are evaluated through observed market behavior, an approach that differs from traditional stated preference approaches to eliciting risk perceptions at a particular point in time. BSE risk perceptions may be specified following a Social Amplification of Risk Framework (SARF) derived from sociology, psychology, and economics. Based on the SARF, various quality and quantity indices related to BSE media information are used as explanatory variables in risk perception equations. Risk perceptions are approximated using a predictive difference approach as defined by Liu et al. (1998). Results showed that Canadian consumer and cow-calf producer risk perceptions related to BSE have been amplified or attenuated by both quantity and quality of BSE media information. Government policies on risk communications need to address the different roles of BSE information in Canadian consumers' and cow-calf producers' behavior.

  10. Correction of ultrasonic wave aberration with a time delay and amplitude filter.

    PubMed

    Måsøy, Svein-Erik; Johansen, Tonni F; Angelsen, Bjørn

    2003-04-01

    Two-dimensional simulations with propagation through two different heterogeneous human body wall models have been performed to analyze different correction filters for ultrasonic wave aberration due to forward wave propagation. The different models each produce most of the characteristic aberration effects, such as phase aberration, relatively strong amplitude aberration, and waveform deformation. Simulations of wave propagation from a point source in the focus (60 mm) of a 20 mm transducer through the body wall models were performed. The center frequency of the pulse was 2.5 MHz. Corrections of the aberrations introduced by the two body wall models were evaluated with reference to the corrections obtained with the optimal filter: a generalized frequency-dependent phase and amplitude correction filter [Angelsen, Ultrasonic Imaging (Emantec, Norway, 2000), Vol. II]. Two correction filters were applied: a time delay filter, and a time delay and amplitude filter. Results showed that correction with a time delay filter produced substantial reduction of the aberration in both cases. A time delay and amplitude correction filter performed even better in both cases, and gave correction close to the ideal situation (no aberration). The results also indicated that the effect of the correction was very sensitive to the accuracy of the arrival time fluctuations estimate, i.e., the time delay correction filter.
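
    A minimal sketch of the time delay and amplitude correction idea, assuming the aberration at one array element reduces to an integer-sample shift plus a gain factor (the paper's filters operate on full simulated wavefields, so this is only the scalar skeleton):

```python
def correct_element(signal, delay_samples, amplitude):
    """Undo an aberration modeled as a pure integer-sample delay plus
    a gain: advance the trace by the measured delay, then divide out
    the measured amplitude factor."""
    n = len(signal)
    shifted = [0.0] * n
    for i in range(n):
        j = i + delay_samples       # advance to cancel the delay
        if 0 <= j < n:
            shifted[i] = signal[j]
    return [s / amplitude for s in shifted]

pulse = [0.0, 0.0, 1.0, 0.5, 0.0, 0.0]
# Suppose the body wall delayed this element's trace by one sample
# and scaled it by 0.8 (hypothetical numbers):
aberrated = [0.8 * x for x in [0.0, 0.0, 0.0, 1.0, 0.5, 0.0]]
restored = correct_element(aberrated, delay_samples=1, amplitude=0.8)
```

As the abstract stresses, the quality of such a correction hinges on how accurately the per-element delays are estimated in the first place.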

  11. Mapping Cortical Morphology in Youth with Velo-Cardio-Facial (22q11.2 Deletion) Syndrome

    PubMed Central

    Kates, Wendy R.; Bansal, Ravi; Fremont, Wanda; Antshel, Kevin M.; Hao, Xuejun; Higgins, Anne Marie; Liu, Jun; Shprintzen, Robert J.; Peterson, Bradley S.

    2010-01-01

    Objective Velo-cardio-facial syndrome (VCFS; 22q11.2 deletion syndrome) represents one of the highest known risk factors for schizophrenia. Insofar as up to thirty percent of individuals with this genetic disorder develop schizophrenia, VCFS constitutes a unique, etiologically homogeneous model for understanding the pathogenesis of schizophrenia. Method Using a longitudinal, case-control design, we acquired anatomic magnetic resonance images to investigate both cross-sectional and longitudinal alterations in surface cortical morphology in a cohort of adolescents with VCFS and age-matched typical controls. All participants were scanned at two time points. Results Relative to controls, youth with VCFS exhibited alterations in inferior frontal, dorsal frontal, occipital, and cerebellar brain regions at both time points. We observed little change over time in surface morphology of either study group. However, within the VCFS group only, worsening psychosocial functioning over time was associated with Time 2 surface contractions in left middle and inferior temporal gyri. Further, prodromal symptoms at Time 2 were associated with surface contractions in left and right orbitofrontal, temporal and cerebellar regions, as well as surface protrusions of supramarginal gyrus. Conclusions These findings advance our understanding of cortical disturbances in VCFS that produce vulnerability for psychosis in this high risk population. PMID:21334567

  12. Printing line/space patterns on nonplanar substrates using a digital micromirror device-based point-array scanning technique

    NASA Astrophysics Data System (ADS)

    Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin

    2018-02-01

    This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.

  13. Blood Harmane (1-methyl-9h-pyrido[3,4-b]indole) Concentrations in Essential Tremor: Repeat Observation in Cases and Controls in New York

    PubMed Central

    Louis, Elan D.; Jiang, Wendy; Gerbin, Marina; Viner, Amanda S.; Factor-Litvak, Pam; Zheng, Wei

    2012-01-01

    Essential tremor (ET) is a widespread late-life neurological disease. Genetic and environmental factors are likely to play important etiological roles. Harmane (1-methyl-9H-pyrido[3,4-b]indole) is a potent tremor-producing neurotoxin. Previously, elevated blood harmane concentrations were demonstrated in ET cases compared to controls, but these observations have all been cross-sectional, assessing each subject at only one time point. Thus, no one has ever repeat-assayed blood harmane in the same subjects twice. Whether the observed case-control difference persists at a second time point, years later, is unknown. The current goal was to re-assess a sample of our ET cases and controls to determine whether blood harmane concentration remained elevated in ET at a second time point. Blood harmane concentrations were quantified by a well-established high-performance liquid chromatography method in 63 ET cases and 70 controls. A mean of approximately 6 years elapsed between the initial and this subsequent blood harmane determination. The mean log blood harmane concentration was significantly higher in cases than controls (0.30 ± 0.61 ×10⁻¹⁰ g/ml vs. 0.08 ± 0.55 ×10⁻¹⁰ g/ml), and the median value in cases was double that of controls: 0.22 ×10⁻¹⁰ g/ml vs. 0.11 ×10⁻¹⁰ g/ml. The log blood harmane concentration was highest in cases with a family history of ET. Blood harmane concentration was elevated in ET cases compared to controls when re-assessed at a second time point several years later, indicating what seems to be a stable association between this environmental toxin and ET. PMID:22757671

  14. Blood harmane (1-methyl-9H-pyrido[3,4-b]indole) concentrations in essential tremor: repeat observation in cases and controls in New York.

    PubMed

    Louis, Elan D; Jiang, Wendy; Gerbin, Marina; Viner, Amanda S; Factor-Litvak, Pam; Zheng, Wei

    2012-01-01

    Essential tremor (ET) is a widespread late-life neurological disease. Genetic and environmental factors are likely to play important etiological roles. Harmane (1-methyl-9H-pyrido[3,4-b]indole) is a potent tremor-producing neurotoxin. Previously, elevated blood harmane concentrations were demonstrated in ET cases compared to controls, but these observations have all been cross-sectional, assessing each subject at only one time point. Thus, no one has ever repeat-assayed blood harmane in the same subjects twice. Whether the observed case-control difference persists at a second time point, years later, is unknown. The current goal was to reassess a sample of our ET cases and controls to determine whether blood harmane concentration remained elevated in ET at a second time point. Blood harmane concentrations were quantified by a well-established high-performance liquid chromatography method in 63 ET cases and 70 controls. A mean of approximately 6 yr elapsed between the initial and this subsequent blood harmane determination. The mean log blood harmane concentration was significantly higher in cases than controls (0.30 ± 0.61 ×10⁻¹⁰ g/ml versus 0.08 ± 0.55 ×10⁻¹⁰ g/ml), and the median value in cases was double that of controls: 0.22 ×10⁻¹⁰ g/ml versus 0.11 ×10⁻¹⁰ g/ml. The log blood harmane concentration was highest in cases with a family history of ET. Blood harmane concentration was elevated in ET cases compared to controls when reassessed at a second time point several years later, indicating what seems to be a stable association between this environmental toxin and ET.

  15. Visualization of instationary flows by particle traces

    NASA Astrophysics Data System (ADS)

    Raasch, S.

    An abstract is presented of a study that visualizes atmospheric-flow model output with computer movies. The structure and evolution of the flow are visualized by releasing weightless particles at the locations of the model grid points at distinct, equally spaced times; the particles are then simply advected by the flow. To avoid useless accumulation of particles, they can be given a limited lifetime. Scalar quantities can additionally be shown as color-shaded contours in the background. A movie with several examples of atmospheric flows, for example convection in the atmospheric boundary layer, slope winds, land-sea breeze, and Kelvin-Helmholtz waves, is presented. The simulations are performed by two-dimensional and three-dimensional nonhydrostatic finite-difference models. Graphics are produced using the UNIRAS software, and the graphic output is in the form of CGM metafiles. The single frames are stored on an ABEKAS real-time video disc and then transferred to a BETACAM-SP tape recorder. The graphic software is suitable for producing two-dimensional pictures; for example, only cross-sections of three-dimensional simulations can be made. To produce a movie of typically 90 seconds duration, the graphic software and the particle model need about 10 hours of CPU time on a CDC CYBER 990, and the CGM metafile has a size of about 1.4 GByte.
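
    The particle-trace scheme described above can be sketched in a few lines; the velocity field and numbers are hypothetical, and the age check implements the "limited lifetime" device that prevents particle accumulation:

```python
def advect(particles, velocity, dt, max_age):
    """Advance tracer particles one step through a prescribed velocity
    field; particles older than max_age are retired, preventing the
    useless accumulation mentioned in the abstract."""
    survivors = []
    for x, y, age in particles:
        if age + dt > max_age:      # limited lifetime: drop the particle
            continue
        u, v = velocity(x, y)
        survivors.append((x + u * dt, y + v * dt, age + dt))
    return survivors

# Hypothetical uniform rightward flow; particles seeded at grid points.
flow = lambda x, y: (1.0, 0.0)
pts = [(float(i), 0.0, 0.0) for i in range(3)]
for _ in range(5):
    pts = advect(pts, flow, dt=1.0, max_age=4.0)
```

In a real run the seeding would be repeated at each release time, so new particles continuously replace the expired ones.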

  16. Enhanced Magnetic Properties of Nd15Fe77B8 Alloy Powders Produced by Melt-Spinning Technique

    NASA Astrophysics Data System (ADS)

    Öztürk, Sultan; İcin, Kürşat; Öztürk, Bülent; Topal, Uğur; Odabaşı, Hülya Kaftelen; Göbülük, Metin; Cora, Ömer Necati

    2017-10-01

    Rapidly solidified Nd15Fe77B8 alloy powders were produced by means of the melt-spinning method in a high-vacuum atmosphere to achieve improved magnetic and thermal properties. To this end, a vacuum milling apparatus was designed and constructed to ball-mill the melt-spun powders in a surfactant-active atmosphere. Various milling times were tested to reveal the effect of milling time on the mean particle size and other size-dependent properties such as magnetism and Curie temperature. The grain structure, cooling rate, and phase structure of the produced powders were also investigated. The Curie points shifted to higher temperatures from the ingot condition to surfactant-active ball milling; the values for the Nd15Fe77B8 ingot alloy, melt-spun powders, and surfactant-active ball-milled powders were 552 K, 595 K, and 604 K (279 °C, 322 °C, and 331 °C), respectively. It was noted that the surfactant-active ball-milling process improved the magnetic and thermal properties of melt-spun Nd15Fe77B8 alloy powders. Compared to the relevant literature, the coercivity of the powders increased significantly with increasing milling time and decreasing powder size. A coercivity value as high as 3427 kA m⁻¹ was obtained.

  17. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wall-clock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and offers compression options, including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity, and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. 
We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
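
The fixed-accuracy contract described above can be illustrated with a toy sketch. This is plain Python and deliberately not the actual ZFP algorithm (ZFP works on 4^d blocks with a transform stage); uniform scalar quantization simply demonstrates the same guarantee, namely a user-specified maximum absolute reconstruction error. All values are illustrative.

```python
import math

# Toy error-bounded lossy compression: quantize each value to the nearest
# multiple of 2*tol, so the reconstruction error is at most tol.
# This mirrors the contract of ZFP's fixed-accuracy mode, not its internals.

def compress(values, tol):
    step = 2.0 * tol
    # floor(x + 0.5) is a deterministic nearest-integer rounding
    return [math.floor(v / step + 0.5) for v in values]

def decompress(quantized, tol):
    step = 2.0 * tol
    return [q * step for q in quantized]

data = [0.0, 0.013, -1.507, 3.14159, 42.0]   # made-up "model" values
tol = 0.01                                   # maximum allowed absolute error
packed = compress(data, tol)
restored = decompress(packed, tol)
max_err = max(abs(a - b) for a, b in zip(data, restored))
print(max_err <= tol)  # the error bound holds by construction
```

In real ZFP the quantized representation is further transformed and bit-packed, which is where the 20:1 ratios quoted above come from; the sketch only shows why a maximum-error tolerance is enforceable.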

  18. Effect of the time interval from harvesting to the pre-drying step on natural fumonisin contamination in freshly harvested corn from the State of Parana, Brazil.

    PubMed

    Da Silva, M; Garcia, G T; Vizoni, E; Kawamura, O; Hirooka, E Y; Ono, E Y S

    2008-05-01

    Natural mycoflora and fumonisins were analysed in 490 samples of freshly harvested corn (Zea mays L.) (2003 and 2004 crops) collected at three points in the producing chain from the northern region of Parana State, Brazil, and correlated to the time interval between harvesting and the pre-drying step. The two crops showed a similar profile concerning fungal frequency, and Fusarium sp. was the prevalent genus (100%) at the sampling sites of both crops. Fumonisins were detected in all samples from the three points of the producing chain (2003 and 2004 crops). The levels ranged from 0.11 to 15.32 µg g⁻¹ in field samples, from 0.16 to 15.90 µg g⁻¹ in reception samples, and from 0.02 to 18.78 µg g⁻¹ in pre-drying samples (2003 crop). Samples from the 2004 crop showed lower contamination; fumonisin levels ranged from 0.07 to 4.78 µg g⁻¹ in field samples, from 0.03 to 4.09 µg g⁻¹ in reception samples, and from 0.11 to 11.21 µg g⁻¹ in pre-drying samples. The mean fumonisin level increased gradually from ≤5.0 to 19.0 µg g⁻¹ as the time interval between harvesting and the pre-drying step increased from 3.22 to 8.89 h (2003 crop). The same profile was observed for samples from the 2004 crop. Fumonisin levels and the time interval showed a positive correlation (ρ = 0.96, p ≤ 0.05), indicating that a delay in the drying process can increase fumonisin levels.
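
The rank correlation reported above can be sketched in a few lines. The example below computes Spearman's rho on made-up numbers, not the study's data; the `ranks` helper is a hypothetical minimal version that assumes no tied values.

```python
# Toy Spearman rank correlation (rho): Pearson correlation on ranks.
# Illustrative only; the delay and fumonisin values are invented.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r  # no tie handling; assumes distinct values

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

delay_h = [3.2, 4.1, 5.5, 6.8, 8.9]      # hypothetical delays (h)
fumonisin = [4.8, 5.9, 8.1, 12.4, 19.0]  # hypothetical levels (ug/g)
rho = spearman(delay_h, fumonisin)
print(rho)  # rho is 1.0 (up to float rounding) for strictly monotone data
```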

  19. Composition of growth factors and cytokines in lysates obtained from fresh versus stored pathogen-inactivated platelet units.

    PubMed

    Sellberg, Felix; Berglund, Erik; Ronaghi, Martin; Strandberg, Gabriel; Löf, Helena; Sommar, Pehr; Lubenow, Norbert; Knutson, Folke; Berglund, David

    2016-12-01

    Platelet lysate is a readily available source of growth factors, and other mediators, which has been used in a variety of clinical applications. However, the product remains poorly standardized and the present investigation evaluates the composition of platelet lysate obtained from either fresh or stored pathogen-inactivated platelet units. Platelet pooled units (n = 10) were obtained from healthy blood donors and tested according to standard procedures. All units were pathogen inactivated using amotosalen hydrochloride and UVA exposure. Platelet lysate was subsequently produced at two separate time-points, either from fresh platelet units or after 5 days of storage, by repeated freeze-thaw cycles. The following mediators were determined at each time-point: EGF, FGF-2, VEGF, IGF-1, PDGF-AB/BB, BMP-2, PF4, TGF-β isoform 1, IL-1β, IL-2, IL-6, IL-10, IL-12p70, IL-17A, TNF-α, and IFN-γ. The concentration of growth factors and cytokines was affected by time in storage. Notably, TGF-β, PDGF-AB/BB, and PF4 showed an increase of 27.2% (p < 0.0001), 29.5% (p = 0.04) and 8.2% (p = 0.0004), respectively. A decrease was seen in the levels of IGF-1 and FGF-2, by 22% (p = 0.041) and 11% (p = 0.01), respectively. Cytokines were present only in very low concentrations and all other growth factors remained stable with time in storage. The composition of mediators in platelet lysate obtained from pathogen-inactivated platelet units differs when produced from fresh and stored platelet units, respectively. This underscores the need for further standardization and optimization of this important product, which potentially may influence the clinical effects. Copyright © 2016. Published by Elsevier Ltd.

  20. English speech sound development in preschool-aged children from bilingual English-Spanish environments.

    PubMed

    Gildersleeve-Neumann, Christina E; Kester, Ellen S; Davis, Barbara L; Peña, Elizabeth D

    2008-07-01

    English speech acquisition by typically developing 3- to 4-year-old children with monolingual English was compared to English speech acquisition by typically developing 3- to 4-year-old children with bilingual English-Spanish backgrounds. We predicted that exposure to Spanish would not affect the English phonetic inventory but would increase error frequency and type in bilingual children. Single-word speech samples were collected from 33 children. Phonetically transcribed samples for the 3 groups (monolingual English children, English-Spanish bilingual children who were predominantly exposed to English, and English-Spanish bilingual children with relatively equal exposure to English and Spanish) were compared at 2 time points and for change over time for phonetic inventory, phoneme accuracy, and error pattern frequencies. Children demonstrated similar phonetic inventories. Some bilingual children produced Spanish phonemes in their English and produced few consonant cluster sequences. Bilingual children with relatively equal exposure to English and Spanish averaged more errors than did bilingual children who were predominantly exposed to English. Both bilingual groups showed higher error rates than English-only children overall, particularly for syllable-level error patterns. All language groups decreased in some error patterns, although the ones that decreased were not always the same across language groups. Some group differences of error patterns and accuracy were significant. Vowel error rates did not differ by language group. Exposure to English and Spanish may result in a higher English error rate in typically developing bilinguals, including the application of Spanish phonological properties to English. Slightly higher error rates are likely typical for bilingual preschool-aged children. Change over time at these time points for all 3 groups was similar, suggesting that all will reach an adult-like system in English with exposure and practice.

  1. Double Bright Band Observations with High-Resolution Vertically Pointing Radar, Lidar, and Profiles

    NASA Technical Reports Server (NTRS)

    Emory, Amber E.; Demoz, Belay; Vermeesch, Kevin; Hicks, Michael

    2014-01-01

    On 11 May 2010, an elevated temperature inversion associated with an approaching warm front produced two melting layers simultaneously, which resulted in two distinct bright bands as viewed from the ER-2 Doppler radar system, a vertically pointing, coherent X band radar located in Greenbelt, MD. Due to the high temporal resolution of this radar system, an increase in altitude of the melting layer of approximately 1.2 km in the time span of 4 min was captured. The double bright band feature remained evident for approximately 17 min, until the lower atmosphere warmed enough to dissipate the lower melting layer. This case shows the relatively rapid evolution of freezing levels in response to an advancing warm front over a 2 h time period and the descent of an elevated warm air mass with time. Although observations of double bright bands are somewhat rare, the ability to identify this phenomenon is important for rainfall estimation from spaceborne sensors because algorithms employing the restriction of a radar bright band to a constant height, especially when sampling across frontal systems, will limit the ability to accurately estimate rainfall.

  2. Modelling inflation in transportation, communication and financial services using a B-Spline time series model

    NASA Astrophysics Data System (ADS)

    Suparti; Prahutama, Alan; Santoso, Rukun

    2018-05-01

    Inflation is a general increase in the prices of goods and services that are basic needs of society, or a decline in the purchasing power of a country's currency. A significant inflationary increase occurred in 2013. This increase was contributed by significant increases in several inflation sectors/groups, i.e. transportation, communication and financial services; foodstuffs; and housing, water, electricity, gas and fuel. However, the most significant contribution came from the transportation, communication and financial services sector. Inflation in the transportation, communication and financial services sector is modelled using the B-Spline time series approach, where the response variable is Yt and the predictor is a significant lag (in this case Yt-1). In B-spline time series modelling, the order and the optimum knot points must be determined. The optimum knots are determined using Generalized Cross Validation (GCV). For inflation in the transportation, communication and financial services sector, a B-spline model of order 2 with 2 knot points was obtained, producing a MAPE of less than 50%.
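
The GCV criterion used above to pick the optimum knots can be sketched directly. For simplicity, this hypothetical example compares a constant fit against a straight-line fit instead of competing B-spline knot configurations, but the criterion is the same: GCV = n·RSS/(n − p)², where p is the trace of the hat matrix (the number of parameters for an ordinary least-squares fit). The data are invented, not the inflation series.

```python
# Toy Generalized Cross Validation (GCV): the model with the lowest
# GCV score is selected. Data are hypothetical, roughly linear.

t = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.9, 5.2, 7.0, 8.8, 11.1]
n = len(t)

def gcv(rss, p):
    # p = effective number of parameters (trace of the hat matrix)
    return n * rss / (n - p) ** 2

# Model 1: constant fit (p = 1)
mean_y = sum(y) / n
rss_const = sum((v - mean_y) ** 2 for v in y)

# Model 2: straight line via ordinary least squares (p = 2)
mean_t = sum(t) / n
slope = (sum((a - mean_t) * (b - mean_y) for a, b in zip(t, y))
         / sum((a - mean_t) ** 2 for a in t))
intercept = mean_y - slope * mean_t
rss_line = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(t, y))

print(gcv(rss_const, 1), gcv(rss_line, 2))  # the line wins decisively
```

Selecting B-spline knots works the same way: each candidate knot set defines a least-squares smoother, and the set minimizing GCV is kept.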

  3. Double bright band observations with high-resolution vertically pointing radar, lidar, and profilers

    NASA Astrophysics Data System (ADS)

    Emory, Amber E.; Demoz, Belay; Vermeesch, Kevin; Hicks, Michael

    2014-07-01

    On 11 May 2010, an elevated temperature inversion associated with an approaching warm front produced two melting layers simultaneously, which resulted in two distinct bright bands as viewed from the ER-2 Doppler radar system, a vertically pointing, coherent X band radar located in Greenbelt, MD. Due to the high temporal resolution of this radar system, an increase in altitude of the melting layer of approximately 1.2 km in the time span of 4 min was captured. The double bright band feature remained evident for approximately 17 min, until the lower atmosphere warmed enough to dissipate the lower melting layer. This case shows the relatively rapid evolution of freezing levels in response to an advancing warm front over a 2 h time period and the descent of an elevated warm air mass with time. Although observations of double bright bands are somewhat rare, the ability to identify this phenomenon is important for rainfall estimation from spaceborne sensors because algorithms employing the restriction of a radar bright band to a constant height, especially when sampling across frontal systems, will limit the ability to accurately estimate rainfall.

  4. ACTS Multibeam Antenna On-Orbit Performance

    NASA Technical Reports Server (NTRS)

    Acosta, R.; Wright, D.; Mitchell, Kenneth

    1996-01-01

    The Advanced Communications Technology Satellite (ACTS), launched in September 1993, introduced several new technologies, including a multibeam antenna (MBA) operating at Ka-band. The MBA, with fixed and rapidly reconfigurable spot beams, serves users equipped with small-aperture terminals within the coverage area. The antenna produces spot beams with approximately 0.3 degrees beamwidth and gains of approximately 50 dBi. A number of MBA performance evaluations have been performed since the ACTS launch. These evaluations were designed to assess MBA performance (e.g., beam pointing stability, beam shape, gain, etc.) in the space environment. The on-orbit measurements found systematic environmental perturbations of the MBA beam pointing. These perturbations were found to be imposed by the satellite attitude control system, antenna and spacecraft mechanical alignments, on-orbit thermal effects, etc. As a result, the footprint coverage of the MBA may not exactly cover the intended service area at all times. This report describes the space environment effects on ACTS MBA performance as a function of time of day and time of year, and compensation approaches for these effects.

  5. High-throughput ultraviolet photoacoustic microscopy with multifocal excitation

    NASA Astrophysics Data System (ADS)

    Imai, Toru; Shi, Junhui; Wong, Terence T. W.; Li, Lei; Zhu, Liren; Wang, Lihong V.

    2018-03-01

    Ultraviolet photoacoustic microscopy (UV-PAM) is a promising intraoperative tool for surgical margin assessment (SMA), one that can provide label-free histology-like images with high resolution. In this study, using a microlens array and a one-dimensional (1-D) array ultrasonic transducer, we developed a high-throughput multifocal UV-PAM (MF-UV-PAM). Our new system achieved a 1.6 ± 0.2 μm lateral resolution and produced images 40 times faster than the previously developed point-by-point scanning UV-PAM. MF-UV-PAM provided a readily comprehensible photoacoustic image of a mouse brain slice with specific absorption contrast in ~16 min, highlighting cell nuclei. Individual cell nuclei could be clearly resolved, showing its practical potential for intraoperative SMA.

  6. Development and Characterization of a Laser-Induced Acoustic Desorption Source.

    PubMed

    Huang, Zhipeng; Ossenbrüggen, Tim; Rubinsky, Igor; Schust, Matthias; Horke, Daniel A; Küpper, Jochen

    2018-03-20

    A laser-induced acoustic desorption source, developed for use at central facilities, such as free-electron lasers, is presented. It features prolonged measurement times and a fixed interaction point. A novel sample deposition method using aerosol spraying provides a uniform sample coverage and hence stable signal intensity. Utilizing strong-field ionization as a universal detection scheme, the produced molecular plume is characterized in terms of number density, spatial extent, fragmentation, temporal distribution, translational velocity, and translational temperature. The effect of desorption laser intensity on these plume properties is evaluated. While the translational velocity is invariant for different desorption laser intensities, pointing to a nonthermal desorption mechanism, the translational temperature increases significantly and higher fragmentation is observed with increased desorption laser fluence.

  7. Segmentation of time series with long-range fractal correlations.

    PubMed

    Bernaola-Galván, P; Oliver, J L; Hackenberg, M; Coronado, A V; Ivanov, P Ch; Carpena, P

    2012-06-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome.
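
The core recursive step of this family of segmentation methods, placing a candidate change-point where the t-statistic between the left and right means is maximal, can be sketched as follows. This toy example does not model the paper's key refinement (testing the cut against a fractional-noise reference rather than an i.i.d. one); it only shows how a single candidate cut is located.

```python
# One step of t-statistic segmentation: slide a pointer along the series
# and return the split maximizing the pooled t-statistic between sides.
# Toy data with an obvious mean shift; real use recurses on each segment
# and accepts a cut only if it passes a significance test.

def mean(xs):
    return sum(xs) / len(xs)

def pooled_t(left, right):
    nl, nr = len(left), len(right)
    ml, mr = mean(left), mean(right)
    vl = sum((x - ml) ** 2 for x in left) / (nl - 1)
    vr = sum((x - mr) ** 2 for x in right) / (nr - 1)
    pooled_var = ((nl - 1) * vl + (nr - 1) * vr) / (nl + nr - 2)
    sd = (pooled_var * (1 / nl + 1 / nr)) ** 0.5
    return abs(ml - mr) / sd

def best_cut(series):
    # try every split leaving at least 2 points on each side
    cuts = range(2, len(series) - 1)
    return max(cuts, key=lambda i: pooled_t(series[:i], series[i:]))

series = [0.1, -0.2, 0.0, 0.2, -0.1, 5.1, 4.9, 5.0, 5.2, 4.8]
print(best_cut(series))  # → 5, the true change-point
```

The paper's contribution is precisely the acceptance criterion: for long-range correlated data, the maximal t-statistic is compared against its distribution under fractional noise of matching correlation strength, which suppresses the spurious cuts an i.i.d. reference would accept.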

  8. Novel pH sensing semiconductor for point-of-care detection of HIV-1 viremia

    PubMed Central

    Gurrala, R.; Lang, Z.; Shepherd, L.; Davidson, D.; Harrison, E.; McClure, M.; Kaye, S.; Toumazou, C.; Cooke, G. S.

    2016-01-01

    The timely detection of viremia in HIV-infected patients receiving antiviral treatment is key to ensuring effective therapy and preventing the emergence of drug resistance. In high HIV burden settings, the cost and complexity of diagnostics limit their availability. We have developed a novel complementary metal-oxide semiconductor (CMOS) chip-based, pH-mediated, point-of-care HIV-1 viral load monitoring assay that simultaneously amplifies and detects HIV-1 RNA. A novel low-buffer HIV-1 pH-LAMP (loop-mediated isothermal amplification) assay was optimised and incorporated into a pH-sensitive CMOS chip. Screening of 991 clinical samples (164 on the chip) yielded a sensitivity of 95% (in vitro) and 88.8% (on-chip) at >1000 RNA copies/reaction across a broad spectrum of HIV-1 viral clades. Median time to detection was 20.8 minutes in samples with >1000 copies RNA. The sensitivity, specificity and reproducibility are close to those required to produce a point-of-care device which would be of benefit in resource-poor regions, and could be performed on a USB stick or similar low-power device. PMID:27829667

  9. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    PubMed Central

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978

  10. Assessment of a low-cost, point-of-use, ultraviolet water disinfection technology.

    PubMed

    Brownell, Sarah A; Chakrabarti, Alicia R; Kaser, Forest M; Connelly, Lloyd G; Peletz, Rachel L; Reygadas, Fermin; Lang, Micah J; Kammen, Daniel M; Nelson, Kara L

    2008-03-01

    We describe a point-of-use (POU) ultraviolet (UV) disinfection technology, the UV Tube, which can be made with locally available resources around the world for under US$50. Laboratory and field studies were conducted to characterize the UV Tube's performance when treating a flow rate of 5 L/min. Based on biological assays with MS2 coliphage, the UV Tube delivered an average fluence of 900 ± 80 J/m² (95% CI) in water with an absorption coefficient of 0.01 cm⁻¹. The residence time distribution in the UV Tube was characterized as plug flow with dispersion (Peclet number = 19.7) and a mean hydraulic residence time of 36 s. Undesirable compounds were leached or produced from UV Tubes constructed with unlined ABS, PVC, or a galvanized steel liner. Lining the PVC pipe with stainless steel, however, prevented production of regulated halogenated organics. A small field study in two rural communities in Baja California Sur demonstrated that the UV Tube reduced E. coli concentrations to less than 1 per 100 ml in 65 out of 70 samples. Based on these results, we conclude that the UV Tube is a promising technology for treating household drinking water at the point of use.
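
The figures quoted above are mutually consistent, as a quick back-of-the-envelope check shows: the mean hydraulic residence time implies the water volume held in the tube (V = Q·t), and the delivered fluence divided by the residence time gives the average UV irradiance. Only the flow rate, residence time, and fluence come from the abstract; the derived quantities are this sketch's arithmetic.

```python
# Consistency check on the UV Tube numbers from the abstract.

flow_rate_L_per_s = 5.0 / 60.0   # 5 L/min, converted to L/s
residence_time_s = 36.0          # mean hydraulic residence time
fluence_J_per_m2 = 900.0         # average fluence from the MS2 assay

# Volume of water in the tube: V = Q * t  (about 3 L)
volume_L = flow_rate_L_per_s * residence_time_s

# Average fluence rate (irradiance): fluence / exposure time  (25 W/m^2)
avg_irradiance_W_per_m2 = fluence_J_per_m2 / residence_time_s

print(volume_L, avg_irradiance_W_per_m2)
```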

  11. Microsoft Producer: A Software Tool for Creating Multimedia PowerPoint[R] Presentations

    ERIC Educational Resources Information Center

    Leffingwell, Thad R.; Thomas, David G.; Elliott, William H.

    2007-01-01

    Microsoft[R] Producer[R] is a powerful yet user-friendly PowerPoint companion tool for creating on-demand multimedia presentations. Instructors can easily distribute these presentations via compact disc or streaming media over the Internet. We describe the features of the software, system requirements, and other required hardware. We also describe…

  12. Navy-After-Next Contingency Producible Corvette (CPC): Emergency Production Historical Study

    DTIC Science & Technology

    2004-02-01

    build many or all of the ships, thus avoiding the need for traditional destroyer builders to take on the work, and (3) allow non-traditionally-Navy... traditional yards and the time necessary to create the design and prepare the shipyards. To determine the effectiveness of the two approaches, this report...estimated that two and a half years would be necessary to complete the 200 standardized destroyers. As proof, they pointed out that the traditional

  13. The Consequences of Interdependence: A Policy Point of View

    DTIC Science & Technology

    1975-10-01

    surpluses to world food shortages of serious proportions. This occurred at a time when the US and most of the world faced a most serious inflationary... American taxpayer and consumer and to the efficiency of the American producer; in the US the practice has proliferated seriously into acts of states... bill (S. 613) designed to reduce litter (and to save energy) by prohibiting the introduction into interstate commerce of non-returnable beverage

  14. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
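
The accuracy idea can be illustrated with a minimal convergence check (not the authors' algorithms): halving the grid spacing should reduce the error of a 2nd-order central difference by about 4× and that of a 4th-order one by about 16×, which is why higher-order schemes need far fewer points per wavelength.

```python
import math

# Verify convergence rates of centered finite differences for d/dx sin(x)
# at x = 1 by halving the step size h.

def d2(f, x, h):
    # 2nd-order central difference
    return (f(x + h) - f(x - h)) / (2 * h)

def d4(f, x, h):
    # 4th-order central difference (5-point stencil)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, exact = 1.0, math.cos(1.0)
e2 = [abs(d2(math.sin, x, h) - exact) for h in (0.1, 0.05)]
e4 = [abs(d4(math.sin, x, h) - exact) for h in (0.1, 0.05)]

r2 = e2[0] / e2[1]   # ~4  (error proportional to h^2)
r4 = e4[0] / e4[1]   # ~16 (error proportional to h^4)
print(r2, r4)
```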

  15. Particle acceleration in explosive relativistic reconnection events and Crab Nebula gamma-ray flares

    NASA Astrophysics Data System (ADS)

    Lyutikov, Maxim; Komissarov, Serguei; Sironi, Lorenzo

    2018-04-01

    We develop a model of gamma-ray flares of the Crab Nebula resulting from the magnetic reconnection events in a highly magnetised relativistic plasma. We first discuss physical parameters of the Crab Nebula and review the theory of pulsar winds and termination shocks. We also review the principal points of particle acceleration in explosive reconnection events [Lyutikov et al., J. Plasma Phys., vol. 83(6), p. 635830601 (2017a); J. Plasma Phys., vol. 83(6), p. 635830602 (2017b)]. It is required that particles producing flares be accelerated in highly magnetised regions of the nebula. Flares originate from the poleward regions at the base of the Crab's polar outflow, where both the magnetisation and the magnetic field strength are sufficiently high. The post-termination shock flow develops macroscopic (not related to the plasma properties on the skin-depth scale) kink-type instabilities. The resulting large-scale magnetic stresses drive explosive reconnection events on the light-crossing time of the reconnection region. Flares are produced at the initial stage of the current sheet development, during the X-point collapse. The model has all the ingredients needed for Crab flares: natural formation of highly magnetised regions, explosive dynamics on the light travel time, development of high electric fields on macroscopic scales and acceleration of particles to energies well exceeding the average magnetic energy per particle.

  16. Coastal geology and recent origins for Sand Point, Lake Superior

    USGS Publications Warehouse

    Fisher, Timothy G.; Krantz, David E.; Castaneda, Mario R.; Loope, Walter L.; Jol, Harry M.; Goble, Ronald J.; Higley, Melinda C.; DeWald, Samantha; Hansen, Paul

    2014-01-01

    Sand Point is a small cuspate foreland located along the southeastern shore of Lake Superior within Pictured Rocks National Lakeshore near Munising, Michigan. Park managers’ concerns for the integrity of historic buildings at the northern periphery of the point during the rising lake levels in the mid-1980s greatly elevated the priority of research into the geomorphic history and age of Sand Point. To pursue this priority, we recovered sediment cores from four ponds on Sand Point, assessed subsurface stratigraphy onshore and offshore using geophysical techniques, and interpreted the chronology of events using radiocarbon and luminescence dating. Sand Point formed at the southwest edge of a subaqueous platform whose base is probably constructed of glacial diamicton and outwash. During the post-glacial Nipissing Transgression, the base was mantled with sand derived from erosion of adjacent sandstone cliffs. An aerial photograph time sequence, 1939–present, shows that the periphery of the platform has evolved considerably during historical time, influenced by transport of sediment into adjacent South Bay. Shallow seismic reflections suggest slump blocks along the leading edge of the platform. Light detection and ranging (LiDAR) and shallow seismic reflections to the northwest of the platform reveal large sand waves within a deep (12 m) channel produced by currents flowing episodically to the northeast into Lake Superior. Ground-penetrating radar profiles show transport and deposition of sand across the upper surface of the platform. Basal radiocarbon dates from ponds between subaerial beach ridges range in age from 540 to 910 cal yr B.P., suggesting that Sand Point became emergent during the last ~1000 years, upon the separation of Lake Superior from Lakes Huron and Michigan. 
However, optically stimulated luminescence (OSL) ages from the beach ridges were two to three times as old as the radiocarbon ages, implying that emergence of Sand Point may have begun earlier, ~2000 years ago. The age discrepancy appears to be the result of incomplete bleaching of the quartz grains and an exceptionally low paleodose rate for the OSL samples. Given the available data, the younger ages from the radiocarbon analyses are preferred, but further work is necessary to test the two age models.

  17. Hierarchical and symmetric infant image registration by robust longitudinal-example-guided correspondence detection

    PubMed Central

    Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang

    2015-01-01

    Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired from birth to 1-yr-old. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images, for guiding the registration of two different time-point images with different image appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with potentially a large age gap, the corresponding image patches between each new image and its respective training images with similar age are identified. Finally, the registration between the two new images can be assisted by the learned growth trajectories from one time point to another time point that have been established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method is used to align 24 infant subjects at five different time points (2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old). Compared to the state-of-the-art methods, the proposed method demonstrated superior registration performance. 
Conclusions: The proposed method addresses the difficulties in the infant brain registration and produces better results compared to existing state-of-the-art registration methods. PMID:26133617

  18. Application of dynamic topic models to toxicogenomics data.

    PubMed

    Lee, Mikyung; Liu, Zhichao; Huang, Ruili; Tong, Weida

    2016-10-06

    All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably according to sequential time points after perturbation by environmental insults, drugs and chemicals. Investigating the temporal behavior of molecular events has been an important subject for understanding the underlying mechanisms governing the biological system in response to perturbations such as drug treatment. The intrinsic complexity of time series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data. A large time-series toxicogenomics dataset was studied. It contains over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) in a repeated schedule containing four separate time points (4-, 8-, 15- and 29-day). We analyzed, with DTM, the topics (consisting of a set of genes) and their biological interpretations over these four time points. We identified hidden patterns embedded in this time-series gene expression profiles. From the topic distribution for each compound-time condition, a number of drugs were successfully clustered by their shared mode-of-action, such as PPARα agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information, such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories when compared to traditional clustering algorithms. We demonstrated that DTM, a text mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with the probabilistic representation of their dynamic features along sequential time frames. 
The method offers an alternative way for uncovering hidden patterns embedded in time series gene expression profiles to gain enhanced understanding of dynamic behavior of gene regulation in the biological system.
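As a rough illustration of the clustering step, the sketch below fits a static topic model with scikit-learn's LatentDirichletAllocation as a stand-in for a true dynamic topic model; the gene counts and sample labels are synthetic, not from the study.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_samples, n_genes, n_topics = 20, 50, 3

# Synthetic "expression counts": each sample drawn from one of 3 gene programs
programs = rng.integers(1, 20, size=(n_topics, n_genes))
labels = rng.integers(0, n_topics, size=n_samples)
counts = np.vstack([rng.poisson(programs[k]) for k in labels])

# Fit a (static) topic model as a stand-in for DTM; a real DTM would chain
# topics smoothly across the four time points
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
theta = lda.fit_transform(counts)   # per-sample topic distribution
clusters = theta.argmax(axis=1)     # cluster samples by dominant topic
```

Samples (compound-time conditions) sharing a dominant topic would then be grouped and the topic's top-weighted genes inspected for pathway enrichment.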

  19. Assessment of the Quality of Digital Terrain Model Produced from Unmanned Aerial System Imagery

    NASA Astrophysics Data System (ADS)

    Kosmatin Fras, M.; Kerin, A.; Mesarič, M.; Peterman, V.; Grigillo, D.

    2016-06-01

Production of a digital terrain model (DTM) is one of the most common tasks when processing a photogrammetric point cloud generated from Unmanned Aerial System (UAS) imagery. The quality of the DTM produced in this way depends on different factors: the quality of the imagery, image orientation and camera calibration, point cloud filtering, interpolation methods, etc. However, the assessment of the real quality of the DTM is very important for its further use and applications. In this paper we first describe the main steps of UAS imagery acquisition and processing based on a practical test field survey and data. The main focus of this paper is to present the approach to DTM quality assessment and to give a practical example on the test field data. For the data processing and DTM quality assessment presented in this paper, mainly in-house developed computer programs have been used. The quality of a DTM comprises its accuracy, density, and completeness. Different accuracy measures like RMSE, median, normalized median absolute deviation and their confidence intervals, and quantiles are computed. The completeness of the DTM is a very often overlooked quality parameter, but when a DTM is produced from a point cloud this should not be neglected, as some areas might be very sparsely covered by points. The original density is presented with a density plot or map. The completeness is presented by the map of point density and the map of distances between grid points and terrain points. The results in the test area show great potential of the DTM produced from UAS imagery, in the sense of detailed representation of the terrain as well as good height accuracy.
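The accuracy measures named above follow standard formulas; a minimal sketch with hypothetical height residuals (DTM minus reference, in metres):

```python
import numpy as np

def dtm_accuracy(dz):
    """Accuracy measures for DTM height residuals dz = DTM - reference."""
    dz = np.asarray(dz, float)
    rmse = np.sqrt(np.mean(dz**2))
    med = np.median(dz)
    # NMAD: robust spread estimate, far less sensitive to outliers than RMSE
    nmad = 1.4826 * np.median(np.abs(dz - med))
    q = np.quantile(np.abs(dz), [0.68, 0.95])  # absolute-error quantiles
    return rmse, med, nmad, q

# Hypothetical residuals at 8 check locations, one of them an outlier
dz = [0.02, -0.05, 0.10, 0.03, -0.30, 0.01, -0.02, 0.04]
rmse, med, nmad, q = dtm_accuracy(dz)
```

The gap between RMSE and NMAD on such data is exactly why the paper reports both: a single gross outlier inflates RMSE while barely moving the robust measure.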

  20. Photo-producible and photo-degradable starch/TiO2 bionanocomposite as a food packaging material: Development and characterization.

    PubMed

    Goudarzi, Vahid; Shahabi-Ghahfarrokhi, Iman

    2018-01-01

In the current study, starch/TiO2 bionanocomposites were produced by photochemical reactions as a biodegradable food packaging material. Physical, mechanical, thermal and water-vapor permeability properties were investigated. Then, the photo-degradation properties of the nanocomposite films were studied. This is the first report of a photo-producible and photo-degradable bionanocomposite as a food packaging material. Film-forming solutions were exposed to ultraviolet A (UV-A) for different times. Our results showed that UV-A irradiation increased the hydrophobicity of the starch films. With increasing UV-A exposure time, the tensile strength and Young's modulus of the specimens decreased. On the other hand, the elongation at break of the films increased with increasing UV-A irradiation. The glass transition temperature and melting point of the films increased with increasing UV-A exposure time. Nevertheless, the results showed that the photo-degradation properties of the photo-produced starch/TiO2 nanocomposite were significantly higher than those of virgin starch and virgin starch/TiO2 films. Based on the obtained results and the literature, a scheme was developed to describe the mechanism of photo-production and photo-degradation of starch/TiO2 by UV-A rays. It can be concluded that the modification of starch-based biopolymers by UV-A and nano-TiO2 is an easy and accessible process to improve the packaging properties and photo-degradability of biopolymer-based films. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Characterization of extended channel bioreactors for continuous-flow protein production

    DOE PAGES

    Timm, Andrea C.; Shankles, Peter G.; Foster, Carmen M.; ...

    2015-10-02

Protein-based therapeutics are an important class of drugs, used to treat a variety of medical conditions including cancer and autoimmune diseases. Because they require continuous cold storage and have a limited shelf life, the ability to produce such therapeutics at the point of care would open up new opportunities in distributing medicines and treating patients in more remote locations. Here, the authors describe the first steps in the development of a microfluidic platform that can be used for point-of-care protein synthesis. While biologic medicines, including therapeutic proteins, are commonly produced using recombinant deoxyribonucleic acid (DNA) technology in large batch cell cultures, the system developed here utilizes cell-free protein synthesis (CFPS) technology. CFPS is a scalable technology that uses cell extracts containing the biological machinery required for transcription and translation and combines those extracts with DNA, encoding a specific gene, and the additional metabolites required to produce proteins in vitro. While CFPS reactions are typically performed in batch or fed-batch reactions, a well-engineered reaction scheme may improve both the rate of protein production and the economic efficiency of protein synthesis reactions, as well as enable a more streamlined method for subsequent purification of the protein product, all necessary requirements for point-of-care protein synthesis. In this work, the authors describe a new bioreactor design capable of continuous production of protein using cell-free protein synthesis. The bioreactors were designed with three inlets to separate reactive components prior to on-chip mixing, which lead into a long, narrow, serpentine channel. 
These multiscale, serpentine channel bioreactors were designed to take advantage of microscale diffusion distances across narrow channels in reactors containing enough volume to produce a therapeutic dose of protein, and open the possibility of performing these reactions continuously and in line with downstream purification modules. Here, the authors demonstrate the capability to produce protein over time with continuous-flow reactions and examine basic design features and operation specifications fundamental to continuous microfluidic protein synthesis.

  2. Origin of acoustic emission produced during single point machining

    NASA Astrophysics Data System (ADS)

    Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.

    1991-05-01

    Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent.

  3. Using ToxCast™ Data to Reconstruct Dynamic Cell State Trajectories and Estimate Toxicological Points of Departure

    PubMed Central

    Shah, Imran; Setzer, R. Woodrow; Jack, John; Houck, Keith A.; Judson, Richard S.; Knudsen, Thomas B.; Liu, Jie; Martin, Matthew T.; Reif, David M.; Richard, Ann M.; Thomas, Russell S.; Crofton, Kevin M.; Dix, David J.; Kavlock, Robert J.

    2015-01-01

Background: High-content imaging (HCI) allows simultaneous measurement of multiple cellular phenotypic changes and is an important tool for evaluating the biological activity of chemicals. Objectives: Our goal was to analyze dynamic cellular changes using HCI to identify the “tipping point” at which the cells did not show recovery towards a normal phenotypic state. Methods: HCI was used to evaluate the effects of 967 chemicals (in concentrations ranging from 0.4 to 200 μM) on HepG2 cells over a 72-hr exposure period. The HCI end points included p53, c-Jun, histone H2A.x, α-tubulin, histone H3, mitochondrial membrane potential, mitochondrial mass, cell cycle arrest, nuclear size, and cell number. A computational model was developed to interpret HCI responses as cell-state trajectories. Results: Analysis of cell-state trajectories showed that 336 chemicals produced tipping points and that HepG2 cells were resilient to the effects of 334 chemicals up to the highest concentration (200 μM) and duration (72 hr) tested. Tipping points were identified as concentration-dependent transitions in system recovery, and the corresponding critical concentrations were generally between 5 and 15 times (25th and 75th percentiles, respectively) lower than the concentration that produced any significant effect on HepG2 cells. The remaining 297 chemicals require more data before they can be placed in either of these categories. Conclusions: These findings show the utility of HCI data for reconstructing cell-state trajectories and provide insight into the adaptation and resilience of in vitro cellular systems based on tipping points. Cellular tipping points could be used to define a point of departure for risk-based prioritization of environmental chemicals. Citation: Shah I, Setzer RW, Jack J, Houck KA, Judson RS, Knudsen TB, Liu J, Martin MT, Reif DM, Richard AM, Thomas RS, Crofton KM, Dix DJ, Kavlock RJ. 2016. 
Using ToxCast™ data to reconstruct dynamic cell state trajectories and estimate toxicological points of departure. Environ Health Perspect 124:910–919; http://dx.doi.org/10.1289/ehp.1409029 PMID:26473631

  4. Improving photoprotection attitudes in the tropics: sunburn vs vitamin D.

    PubMed

    Silva, Abel A

    2014-01-01

Ultraviolet radiation of type B (UVB) stimulates both the production of vitamin D (VD) and the incorporation of erythema dose (ED). UVA also contributes to ED. The turning point between the benefit of producing VD and the harm of incorporating ED cannot be determined easily. However, casual behavior regarding sun exposure can be changed in order to improve photoprotection attitudes and create a trend towards benefit. In this case, people living at low latitudes should expose themselves to the Sun for a set time interval around noon and avoid the Sun at other times. This would produce an adequate amount of VD through the VD dose (207-214 J m(-2)) against minimum ED (≈105 J m(-2)) for skin type II. To this end, unprotected forearms and hands must be exposed to the noon Sun (cloudless) for 11 min (winter) and 5 min (summer). Exposure at times other than noon can represent increases of up to 24% in ED and up to 12 times in the required exposure interval, relative to the minimum amounts of both ED and time interval at noon. © 2014 The American Society of Photobiology.

  5. Transition and mixing in axisymmetric jets and vortex rings

    NASA Technical Reports Server (NTRS)

    Allen, G. A., Jr.; Cantwell, B. J.

    1986-01-01

A class of impulsively started, axisymmetric, laminar jets produced by a time-dependent point source of momentum is considered. These jets are different flows, each initially at rest in an unbounded fluid. The study is conducted at three levels of detail. First, a generalized set of analytic creeping flow solutions is derived along with a method of flow classification. Second, from this set, three specific creeping flow solutions are studied in detail: the vortex ring, the round jet, and the ramp jet. This study involves derivation of vorticity, stream function, and entrainment diagrams, and the evolution of time lines through computer animation. From the entrainment diagrams, critical points are derived and analyzed. The flow geometry is dictated by the properties and location of critical points, which undergo bifurcation and topological transformation (a form of transition) with changing Reynolds number. Transition Reynolds numbers were calculated. A state space trajectory was derived describing the topological behavior of these critical points. This state space derivation yielded three states of motion which are universal for all axisymmetric jets. Third, the axisymmetric round jet is solved numerically using the unsteady laminar Navier-Stokes equations. These equations were shown to be self-similar for the round jet. Numerical calculations were performed up to a Reynolds number of 30 for a 60x60 point mesh. Animations generated from the numerical solution showed each of the three states of motion for the round jet, including the Re = 30 case.

  6. Recognizing Biological Motion and Emotions from Point-Light Displays in Autism Spectrum Disorders

    PubMed Central

    Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P.; Wenderoth, Nicole; Alaerts, Kaat

    2012-01-01

One of the main characteristics of Autism Spectrum Disorder (ASD) is problems with social interaction and communication. Here, we explored ASD-related alterations in ‘reading’ the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of ‘biological motion’ and ‘emotions’ from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person’s ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during the completion of tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations compared to the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance. PMID:22970227

  7. Some mechanisms for the formation of octopus-shaped iron micro-particles

    NASA Astrophysics Data System (ADS)

    Bica, Ioan

    2004-08-01

Fluid spheres (micro-spheres and/or drops) are formed out of the metallic solid (the carbon steel semi-finished product) in the argon plasma of the transferred electric arc. For short intervals of time, the spheres are at rest relative to the vapors. The movement of the vapors around the spheres is in the same plane. It consists of a movement around a circle combined with the movement produced by a localized whirl. The molar concentration of the vapors is small in comparison with the molar density of the mixture formed of vapors and gas. At the intersection of the sphere and the plane of movement of the vapors, distinct stagnation points are formed. They constitute the beginning and end points of the current lines. Each current line is a carrier of a vapor cylinder. In time, the cylinder-gas interface reaches points of temperature equal to that of the "dew point" for iron. At this point a liquid membrane is formed. It delimits the vapor-gas mixture from the rest of the gas. Following the process of diffusion under non-stationary conditions, the membrane becomes thicker and no vapors remain inside the tube. Needle-shaped micro-tubes are formed, in the liquid phase, around the fluid sphere. By solidification, micro-particles occur, consisting of a central nucleus around which ligaments branch out.

  8. Recognizing biological motion and emotions from point-light displays in autism spectrum disorders.

    PubMed

    Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P; Wenderoth, Nicole; Alaerts, Kaat

    2012-01-01

One of the main characteristics of Autism Spectrum Disorder (ASD) is problems with social interaction and communication. Here, we explored ASD-related alterations in 'reading' the body language of other humans. Accuracy and reaction times were assessed in two observational tasks involving the recognition of 'biological motion' and 'emotions' from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person's ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during the completion of tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations compared to the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance.

  9. Esterification Reaction of Glycerol and Palm Oil Oleic Acid Using Methyl Ester Sulfonate Acid Catalyst as Drilling Fluid Formulation

    NASA Astrophysics Data System (ADS)

    Sari, V. I.; Hambali, E.; Suryani, A.; Permadi, P.

    2017-02-01

The esterification reaction between glycerol and palm oil oleic acid produces glycerol ester, and one use of glycerol esters is as an ingredient of drilling fluid formulas for oil drilling needs. The purpose of this research is to find the best conditions for the esterification process. The esterification reaction was carried out with glycerol of 97.6% purity and palm oil oleic acid at a molar ratio of 1:1, a methyl ester sulfonate acid (MESA) catalyst loading of 0.5%, and a stirring speed of 400 rpm, over a temperature range of 180°C to 240°C and processing times between 120 and 180 minutes. The results showed that the best conditions for the esterification reaction are a temperature of 240°C and a processing time of 180 minutes. Increasing the temperature decreased the acid number, which increased the conversion. The maximum conversion is 99.24%, with density 0.93 g/cm3, flash point 241°C, pour point -3°C, boiling point 244°C, acid value 1.90 mg KOH/g sample, kinematic viscosity 31.51 cSt (40°C), and surface tension 37.0526 dyne/cm; GC-MS identification found glycerol ester at a retention time of 22.256 minutes with a peak area of 73.75%. From the research results, a glycerol ester with characteristics suitable for drilling fluid formulations was obtained.
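The reported conversion can be checked from the drop in acid value; in this minimal sketch the initial acid value of the oleic acid feed (250 mg KOH/g) is an assumed illustrative figure, not a value from the abstract:

```python
def esterification_conversion(av_initial, av_final):
    """Percent conversion inferred from the drop in acid value (mg KOH/g)."""
    return 100.0 * (av_initial - av_final) / av_initial

# Final acid value of 1.90 mg KOH/g (from the abstract) with an assumed
# initial acid value of 250 mg KOH/g for the oleic acid feed
conv = esterification_conversion(250.0, 1.90)
```

With that assumed feed acid value, the formula reproduces the reported 99.24% maximum conversion.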

  10. Hepatic gene expression patterns following trauma-hemorrhage: effect of posttreatment with estrogen.

    PubMed

    Yu, Huang-Ping; Pang, See-Tong; Chaudry, Irshad H

    2013-01-01

    The aim of this study was to examine the role of estrogen on hepatic gene expression profiles at an early time point following trauma-hemorrhage in rats. Groups of injured and sham controls receiving estrogen or vehicle were killed 2 h after injury and resuscitation, and liver tissue was harvested. Complementary RNA was synthesized from each RNA sample and hybridized to microarrays. A large number of genes were differentially expressed at the 2-h time point in injured animals with or without estrogen treatment. The upregulation or downregulation of a cohort of 14 of these genes was validated by reverse transcription-polymerase chain reaction. This large-scale microarray analysis shows that at the 2-h time point, there is marked alteration in hepatic gene expression following trauma-hemorrhage. However, estrogen treatment attenuated these changes in injured animals. Pathway analysis demonstrated predominant changes in the expression of genes involved in metabolism, immunity, and apoptosis. Upregulation of low-density lipoprotein receptor, protein phosphatase 1, regulatory subunit 3C, ring-finger protein 11, pyroglutamyl-peptidase I, bactericidal/permeability-increasing protein, integrin, αD, BCL2-like 11, leukemia inhibitory factor receptor, ATPase, Cu transporting, α polypeptide, and Mk1 protein was found in estrogen-treated trauma-hemorrhaged animals. Thus, estrogen produces hepatoprotection following trauma-hemorrhage likely via antiapoptosis and improving/restoring metabolism and immunity pathways.

  11. Core Research Program, Year 5

    NASA Technical Reports Server (NTRS)

    2002-01-01

Dramatic losses of bone mineral density (BMD) and muscle strength are two of the best documented changes observed in humans after prolonged exposure to microgravity. Recovery of muscle upon return to a 1-G environment is well studied; however, far less is known about the rate and completeness of BMD recovery to pre-flight values. Using the mature tail-suspended adult rat model, this proposal will focus on the temporal course of recovery in tibial bone following a 28-d period of skeletal unloading. Through the study of bone density and muscle strength in the same animal, time-points during recovery from simulated microgravity will be identified when bone is at an elevated risk for fracture. These will occur due to the rapid recovery of muscle strength coupled with a slower recovery of bone, producing a significant mismatch in functional strength of these two tissues. Once the time-point of maximal mismatch is defined, various mechanical and pharmacological interventions will be tested at and around this time-point in an attempt to minimize the functional difference of bone and muscle. The outcomes of this research will have high relevance for optimizing the rehabilitation of astronauts upon return to Earth, as well as upon landing on the Martian surface before assuming arduous physical tasks. Further, it will impact significantly on rehabilitation issues common to patients experiencing long periods of limb immobilization or bed rest.

  12. Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.

    PubMed

    Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo

    2014-04-01

    Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.

  13. Method for production of an isotopically enriched compound

    DOEpatents

    Watrous, Matthew G.

    2012-12-11

    A method is presented for producing and isolating an isotopically enriched compound of a desired isotope from a parent radionuclide. The method includes forming, or placing, a precipitate containing a parent radionuclide of the desired daughter isotope in a first reaction zone and allowing sufficient time for the parent to decay into the desired gaseous daughter radioisotope. The method further contemplates collecting the desired daughter isotope as a solid in a second reaction zone through the application of temperatures below the freezing point of the desired isotope to a second reaction zone that is connected to the first reaction zone. Specifically, a method is presented for producing isotopically enriched compounds of xenon, including the radioactive isotope Xe-131m and the stable isotope Xe-131.
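The parent-daughter buildup described here follows the Bateman equations; the sketch below computes the daughter inventory and the time of maximum daughter yield for an assumed pure chain with half-lives of 8.02 d (parent, as for I-131) and 11.84 d (daughter, as for Xe-131m), ignoring branching fractions and collection efficiency:

```python
import math

def daughter_atoms(n0, t_half_parent, t_half_daughter, t):
    """Bateman solution: daughter atoms at time t from n0 parent atoms,
    assuming a pure parent-daughter chain with no initial daughter."""
    lp = math.log(2) / t_half_parent
    ld = math.log(2) / t_half_daughter
    return n0 * lp / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))

def optimal_collection_time(t_half_parent, t_half_daughter):
    """Time at which the daughter inventory peaks (d/dt = 0)."""
    lp = math.log(2) / t_half_parent
    ld = math.log(2) / t_half_daughter
    return math.log(ld / lp) / (ld - lp)

# Illustrative half-lives in days; branching to the metastable state ignored
t_max = optimal_collection_time(8.02, 11.84)
```

Waiting roughly this long before freezing out the daughter in the second reaction zone maximizes the harvest per decay cycle, under the stated simplifications.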

  14. Parental investment: how an equity motive can produce inequality.

    PubMed

    Hertwig, Ralph; Davis, Jennifer Nerissa; Sulloway, Frank J

    2002-09-01

    The equity heuristic is a decision rule specifying that parents should attempt to subdivide resources more or less equally among their children. This investment rule coincides with the prescription from optimality models in economics and biology in cases in which expected future return for each offspring is equal. In this article, the authors present a counterintuitive implication of the equity heuristic: Whereas an equity motive produces a fair distribution at any given point in time, it yields a cumulative distribution of investments that is unequal. The authors test this analytical observation against evidence reported in studies exploring parental investment and show how the equity heuristic can provide an explanation of why the literature reports a diversity of birth order effects with respect to parental resource allocation.
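The counterintuitive result (equal shares at every instant, unequal cumulative totals) can be reproduced with a short simulation; the birth spacing and years at home below are illustrative assumptions, not parameters from the article:

```python
def cumulative_investment(birth_years, years_at_home=18, resource_per_year=1.0):
    """Split a fixed yearly resource equally among children currently at home
    (the equity heuristic) and return each child's cumulative total."""
    end = max(birth_years) + years_at_home
    totals = [0.0] * len(birth_years)
    for year in range(min(birth_years), end):
        at_home = [i for i, b in enumerate(birth_years)
                   if b <= year < b + years_at_home]
        for i in at_home:
            totals[i] += resource_per_year / len(at_home)
    return totals

# Three children born 3 years apart: shares are equal at every point in time,
# yet the middleborn, who never has the parents to themself, accumulates least
totals = cumulative_investment([0, 3, 6])  # → [8.5, 7.0, 8.5]
```

Firstborns and lastborns each enjoy a period as the only child at home; middleborns never do, so the equity rule itself produces the birth-order inequality.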

  15. Japanese contributions to MAP

    NASA Technical Reports Server (NTRS)

    Kato, S.

    1989-01-01

Japan contributed much to MAP in many branches. The MU (middle and upper atmosphere) radar, in operation during the MAP period, opened various novel possibilities for observing middle atmosphere dynamics, possibilities which were fairly well realized. Gravity wave saturation and its spectrum in the mesosphere were observed successfully. Campaign observations by radars between Kyoto and Adelaide were especially significant for tidal and planetary wave observations. In Antarctica, the dramatic winter behavior of middle atmosphere aerosols was well elucidated, together with the ozone hole. Theoretical and numerical studies have been progressing actively since well before MAP. It is now pointed out that gravity waves play an important role in producing the weak wind region in the stratosphere as well as the mesosphere.

  16. Observation of CO 2 in Fourier transform infrared spectral measurements of living Acholeplasma laidlawii cells

    NASA Astrophysics Data System (ADS)

    Omura, Yoko; Okazaki, Norio

    2003-06-01

    In monitoring the time course of conformational disorder by Fourier transform infrared spectroscopy for intact Acholeplasma laidlawii cells grown at 37 °C on binary fatty acid mixtures containing oleic acid and for cells grown on pure palmitic acid, an absorption band at 2343 cm-1 was observed. The band intensity was found to increase with time. This band was not observed in the spectra for isolated membranes. It is suggested that the 2343 cm-1 band is due to CO2 dissolved in water, most likely produced at the final point of fermentation of amino acid by this microorganism.

  17. Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog

    NASA Technical Reports Server (NTRS)

    Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.

    1995-01-01

    A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.
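A crude stand-in for such an analysis compares the fraction of angularly close pairs among temporally close pairs with the same fraction among all pairs; for an isotropic, non-repeating sky the two agree. The thresholds and the synthetic catalog below are illustrative assumptions, not the BATSE analysis itself:

```python
import numpy as np

def close_pair_excess(times, ra, dec, dt_max=5.0, sep_max_deg=5.0):
    """Fraction of angularly close pairs among temporally close pairs,
    versus the same fraction among all pairs. An excess in the first
    would be the signature of burst repetition within dt_max days."""
    n = len(times)
    ra, dec = np.radians(ra), np.radians(dec)
    # pairwise angular separation via the spherical law of cosines
    cos_sep = (np.sin(dec)[:, None] * np.sin(dec)[None, :]
               + np.cos(dec)[:, None] * np.cos(dec)[None, :]
                 * np.cos(ra[:, None] - ra[None, :]))
    sep = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
    dt = np.abs(times[:, None] - times[None, :])
    iu = np.triu_indices(n, k=1)           # each unordered pair once
    close_sky = sep[iu] < sep_max_deg
    close_time = dt[iu] < dt_max
    frac_all = close_sky.mean()
    frac_near = close_sky[close_time].mean() if close_time.any() else 0.0
    return frac_near, frac_all

# Synthetic isotropic, non-repeating catalog: no excess expected
rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0, 365, 200))
ra = rng.uniform(0, 360, 200)
dec = np.degrees(np.arcsin(rng.uniform(-1, 1, 200)))  # uniform on the sphere
frac_near, frac_all = close_pair_excess(times, ra, dec)
```

A real analysis would additionally weight by the localization errors and the sky-exposure map discussed in the abstract.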

  18. Improving the numerical integration solution of satellite orbits in the presence of solar radiation pressure using modified back differences

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.

  19. Microwave sintering of nanophase ceramics without concomitant grain growth

    DOEpatents

    Eastman, Jeffrey A.; Sickafus, Kurt E.; Katz, Joel D.

    1993-01-01

A method of sintering nanocrystalline material is disclosed wherein the nanocrystalline material is microwaved to heat the material to a temperature less than about 70% of the melting point of the nanocrystalline material expressed in degrees K. This method produces sintered nanocrystalline material having a density greater than about 95% of theoretical and an average grain size not more than about 3 times the average grain size of the nanocrystalline material before sintering. Rutile TiO2 as well as various other ceramics have been prepared. Grain growth of as little as 1.67 times has resulted with densities of about 90% of theoretical.
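The claimed process window reduces to three simple checks; a sketch (the rutile melting point of roughly 2116 K and the trial run values are illustrative assumptions, not figures from the patent):

```python
def sintering_within_claim(t_sinter_k, t_melt_k, density_frac, grain_ratio):
    """Check a sintering run against the claimed window: heating below
    ~70% of the melting point (in K), density above 95% of theoretical,
    and grain growth of no more than ~3x."""
    return (t_sinter_k < 0.70 * t_melt_k
            and density_frac > 0.95
            and grain_ratio <= 3.0)

# Rutile TiO2 melts near 2116 K; a hypothetical run at 1400 K with 96%
# theoretical density and 1.67x grain growth falls inside the window
ok = sintering_within_claim(1400, 2116, 0.96, 1.67)
```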

  20. Stability of discrete time recurrent neural networks and nonlinear optimization problems.

    PubMed

    Singh, Jayant; Barabanov, Nikita

    2016-02-01

We consider the method of Reduction of Dissipativity Domain to prove global Lyapunov stability of Discrete Time Recurrent Neural Networks. The standard and advanced criteria for Absolute Stability of these essentially nonlinear systems produce rather weak results. The method mentioned above is proved to be more powerful. It involves a multi-step procedure with maximization of special nonconvex functions over polytopes on every step. We derive conditions which guarantee the existence of at most one point of local maximum for such functions over every hyperplane. This nontrivial result is valid for a wide range of neuron transfer functions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. DTM Generation with Uav Based Photogrammetric Point Cloud

    NASA Astrophysics Data System (ADS)

    Polat, N.; Uysal, M.

    2017-11-01

    Nowadays, Unmanned Aerial Vehicles (UAVs) are widely used in many applications for different purposes. Their benefits are extended by the ability to integrate additional equipment such as digital cameras, GPS receivers, or laser scanners. The main scope of this paper is to evaluate the performance of a camera-equipped UAV for geomatics applications by generating a Digital Terrain Model (DTM) of a small area. For this purpose, 7 ground control points were surveyed with RTK and 420 photographs were captured. Over 30 million georeferenced points were used in the DTM generation process. The accuracy of the DTM was evaluated with 5 check points; the root mean square error is calculated as 17.1 cm for a flight altitude of 100 m. In addition, a LiDAR-derived DTM was used as a reference in order to calculate correlation. The UAV-based DTM has a 94.5% correlation with the reference DTM. The outcomes of the study show that it is possible to use UAV photogrammetry data in map production, surveying, and other engineering applications, with the advantages of low cost, time savings, and minimal fieldwork.
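
    The two accuracy figures reported above (RMSE against surveyed check points, correlation against a reference DTM) follow directly from their standard definitions. A minimal sketch of both computations, assuming co-registered elevation samples; the function names are illustrative, not from the paper:

```python
def dtm_rmse(dtm_z, check_z):
    """Root mean square error between DTM elevations sampled at the
    check-point locations and the surveyed check-point elevations."""
    diffs = [a - b for a, b in zip(dtm_z, check_z)]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def pearson(a, b):
    """Pearson correlation between two co-registered elevation samples,
    e.g. UAV-derived versus LiDAR-derived elevations at the same cells."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)
```

    Applied to the paper's setting, `dtm_rmse` would take the 5 check-point elevations, and `pearson` the two DTM grids flattened over their common extent.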

  2. Access to Mars from Earth-Moon Libration Point Orbits: Manifold and Direct Options

    NASA Technical Reports Server (NTRS)

    Kakoi, Masaki; Howell, Kathleen C.; Folta, David

    2014-01-01

    This investigation is focused specifically on transfers from Earth-Moon L1/L2 libration point orbits to Mars. Initially, the analysis is based in the circular restricted three-body problem to utilize the framework of the invariant manifolds. Various departure scenarios are compared, including arcs that leverage manifolds associated with the Sun-Earth L2 orbits as well as non-manifold trajectories. For the manifold options, ballistic transfers from Earth-Moon L2 libration point orbits to Sun-Earth L1/L2 halo orbits are first computed. This autonomous procedure applies to both departure and arrival between the Earth-Moon and Sun-Earth systems. Departure times in the lunar cycle, amplitudes and types of libration point orbits, manifold selection, and the orientation/location of the surface of section all contribute to produce a variety of options. As the destination planet, the ephemeris position for Mars is employed throughout the analysis. The complete transfer is transitioned to the ephemeris model after the initial design phase. Results for multiple departure/arrival scenarios are compared.

  3. A Review of High-Order and Optimized Finite-Difference Methods for Simulating Linear Wave Phenomena

    NASA Technical Reports Server (NTRS)

    Zingg, David W.

    1996-01-01

    This paper presents a review of high-order and optimized finite-difference methods for numerically simulating the propagation and scattering of linear waves, such as electromagnetic, acoustic, or elastic waves. The spatial operators reviewed include compact schemes, non-compact schemes, schemes on staggered grids, and schemes which are optimized to produce specific characteristics. The time-marching methods discussed include Runge-Kutta methods, Adams-Bashforth methods, and the leapfrog method. In addition, the following fourth-order fully-discrete finite-difference methods are considered: a one-step implicit scheme with a three-point spatial stencil, a one-step explicit scheme with a five-point spatial stencil, and a two-step explicit scheme with a five-point spatial stencil. For each method studied, the number of grid points per wavelength required for accurate simulation of wave propagation over large distances is presented. Recommendations are made with respect to the suitability of the methods for specific problems and practical aspects of their use, such as appropriate Courant numbers and grid densities. Avenues for future research are suggested.
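
    The "grid points per wavelength" requirement the review quantifies can be illustrated with the modified-wavenumber analysis such studies rely on: a finite-difference stencil applied to exp(ikx) yields an effective wavenumber k*, and the relative error |k* - k|/k at a given resolution measures the scheme's dispersion. A short sketch for the standard second- and fourth-order central first-derivative stencils (the optimized schemes in the review differ):

```python
import math

def dispersion_error(ppw, order=2):
    """Relative modified-wavenumber error of central first-derivative
    stencils at a resolution of `ppw` grid points per wavelength
    (i.e. kh = 2*pi/ppw)."""
    kh = 2 * math.pi / ppw
    if order == 2:        # (f[i+1] - f[i-1]) / (2h)
        kstar_h = math.sin(kh)
    elif order == 4:      # (-f[i+2] + 8f[i+1] - 8f[i-1] + f[i-2]) / (12h)
        kstar_h = (8 * math.sin(kh) - math.sin(2 * kh)) / 6
    else:
        raise ValueError("only orders 2 and 4 are sketched here")
    return abs(kstar_h - kh) / kh
```

    At 20 points per wavelength, the fourth-order stencil's phase error is roughly fifty times smaller than the second-order one, which is why higher-order schemes need far fewer points per wavelength for long-distance propagation.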

  4. Using Deep Space Climate Observatory Measurements to Study the Earth as an Exoplanet

    NASA Astrophysics Data System (ADS)

    Jiang, Jonathan H.; Zhai, Albert J.; Herman, Jay; Zhai, Chengxing; Hu, Renyu; Su, Hui; Natraj, Vijay; Li, Jiazheng; Xu, Feng; Yung, Yuk L.

    2018-07-01

    Even though it was not designed as an exoplanetary research mission, the Deep Space Climate Observatory (DSCOVR) has been opportunistically used for a novel experiment in which Earth serves as a proxy exoplanet. More than 2 yr of DSCOVR Earth images were employed to produce time series of multiwavelength, single-point light sources in order to extract information on planetary rotation, cloud patterns, surface type, and orbit around the Sun. In what follows, we assume that these properties of the Earth are unknown and instead attempt to derive them from first principles. These conclusions are then compared with known data about our planet. We also used the DSCOVR data to simulate phase-angle changes, as well as the minimum data collection rate needed to determine the rotation period of an exoplanet. This innovative method of using the time evolution of a multiwavelength, reflected single-point light source can be deployed for retrieving a range of intrinsic properties of an exoplanet around a distant star.
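
    Recovering a rotation period from a single-point light curve reduces, at its simplest, to locating the dominant peak of a periodogram. A minimal sketch for an evenly sampled series (DSCOVR's real cadence and the paper's analysis are more involved; the function name is illustrative):

```python
import numpy as np

def rotation_period(times, flux):
    """Dominant period of an evenly sampled light curve via an FFT periodogram."""
    flux = np.asarray(flux, float)
    flux = flux - flux.mean()          # remove the DC (mean brightness) component
    dt = times[1] - times[0]           # assumes uniform sampling
    freqs = np.fft.rfftfreq(len(flux), dt)
    power = np.abs(np.fft.rfft(flux)) ** 2
    peak = np.argmax(power[1:]) + 1    # skip the zero-frequency bin
    return 1.0 / freqs[peak]
```

    For unevenly sampled data, a Lomb-Scargle periodogram would replace the FFT; the peak-picking logic is the same.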

  5. High-yield exfoliation of tungsten disulphide nanosheets by rational mixing of low-boiling-point solvents

    NASA Astrophysics Data System (ADS)

    Sajedi-Moghaddam, Ali; Saievar-Iranizad, Esmaiel

    2018-01-01

    Developing high-throughput, reliable, and facile approaches for producing atomically thin sheets of transition metal dichalcogenides is of great importance to pave the way for their use in real applications. Here, we report a highly promising route for exfoliating two-dimensional tungsten disulphide sheets by using a binary combination of low-boiling-point solvents. Experimental results show a significant dependence of the exfoliation yield on the type of solvents as well as the relative volume fraction of each solvent. The highest yield was found for an appropriate combination of isopropanol/water (20 vol% isopropanol and 80 vol% water), which is approximately 7 times higher than that in pure isopropanol and 4 times higher than that in pure water. The dramatic increase in exfoliation yield can be attributed to a perfect match between the surface tension of tungsten disulphide and that of the binary solvent system. Furthermore, solvent molecular size also has a profound impact on the exfoliation efficiency, due to steric repulsion.

  6. A very deep IRAS survey at the north ecliptic pole

    NASA Technical Reports Server (NTRS)

    Houck, J. R.; Hacking, P. B.; Condon, J. J.

    1987-01-01

    The data from approximately 20 hours observation of the 4- to 6-square degree field surrounding the north ecliptic pole have been combined to produce a very deep IR survey at the four IRAS bands. Scans from both pointed and survey observations were included in the data analysis. At 12 and 25 microns the deep survey is limited by detector noise and is approximately 50 times deeper than the IRAS Point Source Catalog (PSC). At 60 microns the problems of source confusion and Galactic cirrus combine to limit the deep survey to approximately 12 times deeper than the PSC. These problems are so severe at 100 microns that flux values are only given for locations corresponding to sources selected at 60 microns. In all, 47 sources were detected at 12 microns, 37 at 25 microns, and 99 at 60 microns. The data-analysis procedures and the significance of the 12- and 60-micron source-count results are discussed.

  7. Slow relaxation of cascade-induced defects in Fe

    DOE PAGES

    Béland, Laurent Karim; Osetsky, Yuri N.; Stoller, Roger E.; ...

    2015-02-17

    On-the-fly kinetic Monte Carlo (KMC) simulations are performed to investigate slow relaxation of non-equilibrium systems. Point defects induced by 25 keV cascades in α-Fe are shown to lead to a characteristic time-evolution, described by the replenish and relax mechanism. Then, we produce an atomistically-based assessment of models proposed to explain the slow structural relaxation by focusing on the aggregation of 50 vacancies and 25 self-interstitial atoms (SIA) in 10-lattice-parameter α-Fe boxes, two processes that are closely related to cascade annealing and exhibit similar time signatures. Four atomistic effects explain the timescales involved in the evolution: defect concentration heterogeneities, concentration-enhanced mobility, cluster-size dependent bond energies, and defect-induced pressure. In conclusion, these findings suggest that the two main classes of models to explain slow structural relaxation, the Eyring model and the Gibbs model, both play a role in limiting the rate of relaxation of these simple point-defect systems.

  8. APOLLO clock performance and normal point corrections

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Murphy, T. W., Jr.; Colmenares, N. R.; Battat, J. B. R.

    2017-12-01

    The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has produced a large volume of high-quality lunar laser ranging (LLR) data since it began operating in 2006. For most of this period, APOLLO has relied on a GPS-disciplined, high-stability quartz oscillator as its frequency and time standard. The recent addition of a cesium clock as part of a timing calibration system initiated a comparison campaign between the two clocks. This has allowed correction of APOLLO range measurements, called normal points, during the overlap period, but also revealed a mechanism to correct for systematic range offsets due to clock errors in historical APOLLO data. Drift of the GPS clock on ∼1000 s timescales contributed typically 2.5 mm of range error to APOLLO measurements, and we find that this may be reduced to ∼1.6 mm on average. We present here a characterization of APOLLO clock errors, the method by which we correct historical data, and the resulting statistics.

  9. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
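
    The core idea, choosing time points that minimize the variance of parameter estimates, can be illustrated with the classical D-optimal design criterion: pick the subset of candidate times that maximizes the determinant of the Fisher information (sensitivity) matrix. This is a simplified stand-in for the paper's maximum-likelihood formulation and quantum-inspired solver; the model and all names below are illustrative:

```python
import itertools
import numpy as np

def sensitivity_matrix(model, theta, times, h=1e-6):
    """Finite-difference sensitivities d model(t; theta) / d theta_j at each time."""
    theta = np.asarray(theta, float)
    cols = []
    for j in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        cols.append([(model(t, tp) - model(t, tm)) / (2.0 * h) for t in times])
    return np.array(cols).T                      # shape: (n_times, n_params)

def d_optimal_times(model, theta, candidates, n_points):
    """Exhaustively pick the time subset maximizing det(S^T S) (D-criterion)."""
    best, best_det = None, -1.0
    for subset in itertools.combinations(candidates, n_points):
        S = sensitivity_matrix(model, theta, subset)
        det = float(np.linalg.det(S.T @ S))
        if det > best_det:
            best, best_det = subset, det
    return best

# Toy model: exponential decay y(t) = y0 * exp(-k t) with theta = (y0, k).
def decay(t, theta):
    y0, k = theta
    return y0 * np.exp(-k * t)
```

    For the toy decay model with k = 1, the search selects t = 0 (most informative about the initial value) and t = 1, near 1/k (most informative about the rate), which matches the intuition that uniform heuristic spacing is generally suboptimal. The exhaustive search is exponential in the number of candidates, which is one motivation for heuristic solvers like the paper's evolutionary algorithm.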

  10. Vendor compliance with Ontario's tobacco point of sale legislation.

    PubMed

    Dubray, Jolene M; Schwartz, Robert M; Garcia, John M; Bondy, Susan J; Victor, J Charles

    2009-01-01

    On May 31, 2006, Ontario joined a small group of international jurisdictions to implement legislative restrictions on tobacco point of sale promotions. This study compares the presence of point of sale promotions in the retail tobacco environment from three surveys: one prior to and two following implementation of the legislation. Approximately 1,575 tobacco vendors were randomly selected for each survey. Each regionally-stratified sample included equal numbers of tobacco vendors categorized into four trade classes: chain convenience, independent convenience and discount, gas stations, and grocery. Data regarding the six restricted point of sale promotions were collected using standardized protocols and inspection forms. Weighted estimates and 95% confidence intervals were produced at the provincial, regional and vendor trade class level using the bootstrap method for estimating variance. At baseline, the proportion of tobacco vendors who did not engage in each of the six restricted point of sale promotions ranged from 41% to 88%. Within four months following implementation of the legislation, compliance with each of the six restricted point of sale promotions exceeded 95%. Similar levels of compliance were observed one year later. Grocery stores had the fewest point of sale promotions displayed at baseline. Compliance rates did not differ across vendor trade classes at either follow-up survey. Point of sale promotions did not differ across regions in any of the three surveys. Within a short period of time, a high level of compliance with six restricted point of sale promotions was achieved.
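
    The bootstrap variance estimation mentioned above resamples the surveyed vendors with replacement and recomputes the statistic on each resample; percentiles of the resampled statistics then give a confidence interval. A minimal unweighted sketch (the survey's stratified, weighted version is more involved; names are illustrative):

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for stat(data)."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])  # resample with replacement
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def proportion(xs):
    """Compliance proportion from a list of 0/1 indicators."""
    return sum(xs) / len(xs)
```

    For example, 95 compliant vendors out of 100 (`[1]*95 + [0]*5`) yield an interval around the 95% point estimate.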

  11. Parameterizing time in electronic health record studies.

    PubMed

    Hripcsak, George; Albers, David J; Perotte, Adler

    2015-07-01

    Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary, that is, show no change in properties over time. Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary. We compared time parameterizations, measuring variability of rate of change and magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients. We found that sequence time (that is, simply counting the number of measurements from some start) produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time produced more accurate predictions in a single Gaussian process model experiment. Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that laboratory data were derived from only one institution. Sequence time appears to be an important potential parameterization.
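
    The sequence-time parameterization the study favors is simple to state: replace each measurement's clock timestamp with its index in the patient's ordered series. A minimal sketch, with illustrative names:

```python
def to_sequence_time(records):
    """Re-parameterize (timestamp, value) pairs by measurement index.

    Clock time keeps raw timestamps; sequence time simply counts
    measurements from the start, so densely sampled (acutely ill)
    periods are stretched out and sparsely sampled periods compressed.
    """
    ordered = sorted(records, key=lambda rv: rv[0])
    return [(i, value) for i, (_, value) in enumerate(ordered)]
```

    Downstream analyses (e.g. the Gaussian process experiment in the paper) then operate on the index axis instead of the timestamp axis.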

  12. Topographic Structure from Motion

    NASA Astrophysics Data System (ADS)

    Fonstad, M. A.; Dietrich, J. T.; Courville, B. C.; Jensen, J.; Carbonneau, P.

    2011-12-01

    The production of high-resolution topographic datasets is of increasing concern and application throughout the geomorphic sciences, and river science is no exception. Consequently, a wide range of topographic measurement methods have evolved. Despite the range of available methods, the production of high-resolution, high-quality digital elevation models (DEMs) generally requires a significant investment in personnel time, hardware, and/or software. However, image-based methods such as digital photogrammetry have steadily been decreasing in cost. Initially developed for rapid, inexpensive, and easy three-dimensional surveys of buildings or small objects, the "structure from motion" (SfM) photogrammetric approach is a purely image-based method that could deliver a step-change if transferred to river remote sensing; it requires very little training and is extremely inexpensive. Using the online SfM program Microsoft Photosynth, we have created high-resolution digital elevation models of rivers from ordinary photographs produced from a multi-step workflow that takes advantage of free and open-source software. This process reconstructs real-world scenes with SfM algorithms based on the derived positions of the photographs in three-dimensional space. One of the products of the SfM process is a three-dimensional point cloud of features present in the input photographs. This point cloud can be georeferenced from a small number of ground control points collected via GPS in the field. The georeferenced point cloud can then be used to create a variety of digital elevation model products. Among several study sites, we examine the applicability of SfM on the Pedernales River in Texas (USA), where several hundred images taken from a hand-held helikite are used to produce DEMs of the fluvial topographic environment.
This test shows that SfM and low-altitude platforms can produce point clouds with point densities considerably better than airborne LiDAR, with horizontal and vertical precision in the centimeter range, and with very low capital and labor costs and low expertise levels. Advanced structure-from-motion software packages (such as Bundler and OpenSynther) are currently under development and should increase the density of topographic points, rivaling that of terrestrial laser scanning when using images shot from low-altitude platforms such as helikites, poles, remote-controlled aircraft and rotorcraft, and low-flying manned aircraft. Clearly, the development of this set of inexpensive, low-required-expertise tools has the potential to fundamentally shift the production of digital fluvial topography from a capital-intensive enterprise of a small number of researchers to a low-cost exercise open to many river researchers.
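
    Georeferencing an SfM point cloud from a few ground control points amounts to estimating a similarity transform (scale, rotation, translation) between model coordinates and world coordinates. The abstract does not specify the authors' exact procedure; a sketch using the standard Umeyama least-squares solution:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares s, R, t such that dst ≈ s * R @ src + t (Umeyama, 1991).

    src: ground control points in model space, shape (n, 3)
    dst: surveyed (GPS) coordinates of the same points, shape (n, 3)
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)                     # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                 # guard against reflections
    R = U @ D @ Vt
    var = (A ** 2).sum() / len(src)
    scale = np.trace(np.diag(S) @ D) / var
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def apply_transform(points, scale, R, t):
    """Georeference a whole point cloud with the estimated transform."""
    return scale * np.asarray(points, float) @ R.T + t
```

    With at least three non-collinear ground control points the transform is determined; extra points are averaged in the least-squares sense, which damps individual GPS errors.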

  13. Multiple ECG Fiducial Points-Based Random Binary Sequence Generation for Securing Wireless Body Area Networks.

    PubMed

    Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif

    2017-05-01

    Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated, in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect the arrival times of these fiducial points, such as the P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated from these arrival times and used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with schemes that rely solely on IPIs to generate BSes, the MFBSG algorithm uses five feature values from one heartbeat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
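
    The interval-to-bit step can be sketched as follows: each beat yields several fiducial intervals, and low-order bits of the quantized interval values are concatenated into the sequence, so each beat contributes five features instead of one IPI. The encoding details below (millisecond quantization, 4 bits per interval, the specific interval conventions) are illustrative assumptions, not the paper's exact scheme:

```python
def beat_features(prev_r, p, q, r, s, t):
    """Five intervals (ms) from one beat: RR, RQ, RP, RS, RT (conventions assumed)."""
    return [r - prev_r, r - q, r - p, s - r, t - r]

def generate_bits(beats, n_bits, bits_per_interval=4):
    """Concatenate low-order bits of quantized intervals into a binary sequence.

    beats: list of dicts with fiducial arrival times in ms, keys P, Q, R, S, T.
    Real randomness comes from beat-to-beat heart-rate variability.
    """
    out = []
    for prev, beat in zip(beats, beats[1:]):
        intervals = beat_features(prev["R"], beat["P"], beat["Q"],
                                  beat["R"], beat["S"], beat["T"])
        for interval in intervals:
            for k in range(bits_per_interval):
                out.append((int(round(interval)) >> k) & 1)
                if len(out) == n_bits:
                    return out
    raise ValueError("not enough beats for the requested sequence length")
```

    With five intervals per beat, a 128-bit key needs roughly a fifth as many heartbeats as an IPI-only scheme, which is the source of the claimed speed-up.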

  14. Sick building syndrome (SBS) and exposure to water-damaged buildings: time series study, clinical trial and mechanisms.

    PubMed

    Shoemaker, Ritchie C; House, Dennis E

    2006-01-01

    Occupants of water-damaged buildings (WDBs) with evidence of microbial amplification often describe a syndrome involving multiple organ systems, commonly referred to as "sick building syndrome" (SBS), following chronic exposure to the indoor air. Studies have demonstrated that the indoor air of WDBs often contains a complex mixture of fungi, mycotoxins, bacteria, endotoxins, antigens, lipopolysaccharides, and biologically produced volatile compounds. A case-series study with medical assessments at five time points was conducted to characterize the syndrome, and a double-blinded, placebo-controlled clinical trial conducted among a subgroup of study participants investigated the efficacy of cholestyramine (CSM) therapy. The general hypothesis of the time series study was that chronic exposure to the indoor air of WDBs is associated with SBS. Consecutive clinical patients were screened for diagnosis of SBS using criteria of exposure potential, symptoms involving at least five organ systems, and the absence of confounding factors. Twenty-eight cases signed voluntary consent forms for participation in the time-series study and provided samples of microbial contaminants from water-damaged areas in the buildings they occupied. Twenty-six participants with a group-mean duration of illness of 11 months completed examinations at all five study time points. Thirteen of those participants also agreed to complete a double-blinded, placebo-controlled clinical trial. Data from Time Point 1 indicated a group-mean of 23 out of 37 symptoms evaluated, and visual contrast sensitivity (VCS), an indicator of neurological function, was abnormally low in all participants. Measurements of matrix metalloproteinase 9 (MMP9), leptin, alpha melanocyte stimulating hormone (MSH), vascular endothelial growth factor (VEGF), immunoglobulin E (IgE), and pulmonary function were abnormal in 22, 13, 25, 14, 1, and 7 participants, respectively.
Following 2 weeks of CSM therapy to enhance toxin elimination rates, measurements at Time Point 2 indicated group-means of 4 symptoms with 65% improvement in VCS at mid-spatial frequency, both statistically significant improvements relative to Time Point 1. Moderate improvements were seen in MMP9, leptin, and VEGF serum levels. The improvements in health status were maintained at Time Point 3 following a 2-week period during which CSM therapy was suspended and the participants avoided re-exposure to the WDBs. Participants reoccupied the respective WDBs for 3 days without CSM therapy, and all participants reported relapse at Time Point 4. The group-mean number of symptoms increased from 4 at Time Point 2 to 15, and VCS at mid-spatial frequency declined by 42%, both statistically significant differences relative to Time Point 2. Statistically significant differences in the group-mean levels of MMP9 and leptin relative to Time Point 2 were also observed. CSM therapy was reinstated for 2 weeks prior to assessments at Time Point 5. Measurements at Time Point 5 indicated group-means of 3 symptoms and a 69% increase in VCS, both results statistically different from those at Time Points 1 and 4. Optically corrected Snellen Distance Equivalent visual acuity scores did not vary significantly over the course of the study. Group-mean levels of MMP9 and leptin showed statistically significant improvement at Time Point 5 relative to Time Points 1 and 4, and the proportion of participants with abnormal VEGF levels was significantly lower at Time Point 5 than at Time Point 1. The number of participants at Time Point 5 with abnormal levels of MMP9, leptin, VEGF, and pulmonary function were 10, 10, 9, and 7, respectively. The level of IgE was not re-measured because of the low incidence of abnormality at Time Point 1, and MSH was not re-measured because previously published data indicated a long time course for MSH improvement. 
The results from the time series study supported the general study hypothesis that exposure to the indoor air of WDBs is associated with SBS. High levels of MMP9 indicated that exposure to the complex mixture of substances in the indoor air of the WDBs triggered a pro-inflammatory cytokine response. A model describing modes of action along a pathway leading to biotoxin-associated illness is presented to organize current knowledge into testable hypotheses. The model links an inflammatory response with tissue hypoxia, as indicated by abnormal levels of VEGF, and disruption of the proopiomelanocortin pathway in the hypothalamus, as evidenced by abnormalities in leptin and MSH levels. Results from the clinical trial on CSM efficacy indicated highly significant improvement in group-mean number of symptoms and VCS scores relative to baseline in the 7 participants randomly assigned to receive 2 weeks of CSM therapy, but no improvement in the 6 participants assigned placebo therapy during that time interval. However, those 6 participants also showed a highly significant improvement in group-mean number of symptoms and VCS scores relative to baseline following a subsequent 2-week period of CSM therapy. Because the only known benefit of CSM therapy is to enhance the elimination rates of substances that accumulate in bile by preventing re-absorption during enterohepatic re-circulation, results from the clinical trial also supported the general study hypothesis that SBS is associated with exposure to WDBs because the only relevant function of CSM is to bind and remove toxigenic compounds. Only research that focuses on the signs, symptoms, and biochemical markers of patients with persistent illness following acute and/or chronic exposure to WDBs can further the development of the model describing modes of action in the biotoxin-associated pathway and guide the development of innovative and efficacious therapeutic interventions.

  15. Does the Sun Have a Full-Time Chromosphere?

    NASA Astrophysics Data System (ADS)

    Kalkofen, Wolfgang; Ulmschneider, Peter; Avrett, Eugene H.

    1999-08-01

    The successful modeling of the dynamics of H2v bright points in the nonmagnetic chromosphere by Carlsson & Stein gave as a by-product a part-time chromosphere lacking the persistent outward temperature increase of time-average empirical models, which is needed to explain observations of UV emission lines and continua. We discuss the failure of the dynamical model to account for most of the observed chromospheric emission, arguing that their model uses only about 1% of the acoustic energy supplied to the medium. Chromospheric heating requires an additional source of energy in the form of acoustic waves of short period (P<2 minutes), which form shocks and produce the persistent outward temperature increase that can account for the UV emission lines and continua.

  16. The oculometer - A new approach to flight management research.

    NASA Technical Reports Server (NTRS)

    Spady, A. A., Jr.; Waller, M. C.

    1973-01-01

    For the first time researchers have an operational, nonintrusive instrument for determining a pilot's eye-point-of-regard without encumbering the pilot or introducing other artifacts into the simulation of flight experience. The instrument (the oculometer developed for NASA by Honeywell, Inc.) produces data in a form appropriate for online monitoring and rapid analysis using state-of-the-art display and computer technology. The type and accuracy of data obtained and the potential use of the oculometer as a research and training tool will be discussed.

  17. Demonstration of Regenerable, Large-Scale Ion Exchange System Using WBA Resin in Rialto, CA

    DTIC Science & Technology

    2012-12-01

    requirements. The system also has the flexibility to manually modify system parameters such as flow rates, pH set points, time cycles, etc. The system... flexibility to produce soda ash solutions that vary in concentration from 1 to 10% dry soda ash. The packaged soda ash system was engineered and...The dry soda ash was conveyed to a storage hopper (39.5 ft3) using a flexible screw conveyer. Soda ash solutions were prepared in a 100 gallon

  18. Symmetry dependence of holograms for optical trapping

    NASA Astrophysics Data System (ADS)

    Curtis, Jennifer E.; Schmitz, Christian H. J.; Spatz, Joachim P.

    2005-08-01

    No iterative algorithm is necessary to calculate holograms for most holographic optical trapping patterns. Instead, holograms may be produced by a simple extension of the prisms-and-lenses method. This formulaic approach yields the same diffraction efficiency as iterative algorithms for any asymmetric or symmetric but nonperiodic pattern of points while requiring less calculation time. A slight spatial disordering of periodic patterns significantly reduces intensity variations between the different traps without extra calculation costs. Eliminating laborious hologram calculations should greatly facilitate interactive holographic trapping.
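
    The prisms-and-lenses construction referred to above assigns each trap a blazed-grating ("prism") phase for lateral displacement plus a Fresnel-lens phase for axial displacement, superposes the complex fields of all traps, and keeps only the argument as the phase-only hologram. A minimal sketch with illustrative units and parameter values:

```python
import numpy as np

def trap_hologram(traps, n=64, wavelength=1.064, focal=1000.0):
    """Phase-only hologram for point traps at (a, b, z) via prisms and lenses."""
    coords = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(coords, coords)
    field = np.zeros((n, n), dtype=complex)
    for a, b, z in traps:
        prism = 2 * np.pi * (a * X + b * Y) / (wavelength * focal)   # lateral shift
        lens = np.pi * z * (X ** 2 + Y ** 2) / (wavelength * focal ** 2)  # axial shift
        field += np.exp(1j * (prism + lens))
    return np.angle(field)   # phase in (-pi, pi], displayed on the SLM
```

    The "slight spatial disordering" the abstract recommends corresponds to adding a small random jitter to each (a, b) before calling this function, which breaks the periodicity responsible for ghost traps without any extra calculation cost.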

  19. Flight Planning

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Seagull Technology, Inc., Sunnyvale, CA, produced a computer program under a Langley Research Center Small Business Innovation Research (SBIR) grant called STAFPLAN (Seagull Technology Advanced Flight Plan) that plans optimal trajectory routes for small to medium sized airlines to minimize direct operating costs while complying with various airline operating constraints. STAFPLAN incorporates four input databases (weather, route data, aircraft performance, and flight-specific information such as times, payload, crew, and fuel cost) to provide the correct amount of fuel, optimal cruise altitude, climb and descent points, optimal cruise speed, and flight path.

  20. Time-resolved EPR study on the photochemical reactions of benzil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukai, Masahiro; Yamnauchi, Seigo; Hirota, Noboru

    1992-04-16

    TREPR and optical studies on the photochemical reactions of benzil in 2-propanol and benzene-TEA conclude that emissive signals are due to the reaction from T{sub n} produced via the S{sub n} → T{sub n} intersystem crossing process. The free-pair radical-pair mechanism can account for the main features of the slow rise component of the chemically induced dynamic electron polarization signal of the ketyl radical in 2-propanol. 27 refs., 10 figs., 2 tabs.

  1. Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.

    1999-01-01

    In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim-half brightness ratio is about 1.5. We also find that the bright half relative to the dim half has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and has 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X, and Fe XII). From these results we infer that: 1) The heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux. 2) The production of network coronal bright points and heating of the coronal network each increase nonlinearly with the magnetic flux. 3) The heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network. 4) The large-scale corona is heated by a nonthermal process since the driver of its heating is cooler than it is. 
    This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.

  2. Compensation for unfavorable characteristics of irregular individual shift rotas.

    PubMed

    Knauth, Peter; Jung, Detlev; Bopp, Winfried; Gauderer, Patric C; Gissel, Andreas

    2006-01-01

    Some employees of TV companies, such as those who produce remote TV programs, have to cope with very irregular rotas and many short-term schedule deviations. Many of these employees complain about the negative effects of such schedules on their wellbeing and private life. Therefore, a working group of employers, council representatives, and researchers developed a so-called bonus system. Based on the criteria of the BESIAK system, the following list of criteria for the ergonomic assessment of irregular shift systems was developed: proportion of night hours worked between 22:00 and 01:00 h and between 06:00 and 07:00 h, proportion of night hours worked between 01:00 and 06:00 h, number of successive night shifts, number of successive working days, number of shifts longer than 9 h, proportion of phase advances, off hours on weekends, work hours between 17:00 and 23:00 h from Monday to Friday, number of working days with leisure time at remote places, and sudden deviations from the planned shift rota. Each individual rota was evaluated in retrospect. If pre-defined thresholds of criteria were surpassed, bonus points were added to the worker's account. In general, more bonus points add up to more free time. Only in particular cases was monetary compensation possible for some criteria. The bonus point system, which was implemented in the year 2002 for about 850 employees of the TV company, has the advantages of more transparency concerning the unfavorable characteristics of working-time arrangements, an incentive for superiors to design "good" rosters that avoid the bonus point thresholds (to reduce costs), positive short-term effects on employees' social lives, and expected positive long-term effects on employees' health. In general, the most promising approach to coping with the problems of shift workers in irregular and flexible shift systems seems to be to increase their influence on the arrangement of working times.
If this is not possible, bonus point systems may help to achieve greater transparency and fairness in the distribution of unfavorable working-time arrangements within a team, and even reduce the unnecessary unfavorable aspects of shift systems.
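The threshold-and-points mechanism described above can be sketched in a few lines. This is a hypothetical illustration only: the criterion names, thresholds, and point values below are invented, not taken from the actual BESIAK-based system.

```python
# Hypothetical sketch of a bonus-point system: each criterion has a
# threshold, and units worked beyond it earn bonus points credited to
# the worker's account. All names and values are illustrative.

THRESHOLDS = {
    "night_hours_01_06": 20.0,       # hours per month worked 01:00-06:00
    "successive_night_shifts": 3,
    "shifts_over_9h": 4,
}
POINTS_PER_EXCESS_UNIT = {
    "night_hours_01_06": 0.5,
    "successive_night_shifts": 2.0,
    "shifts_over_9h": 1.0,
}

def bonus_points(rota_stats: dict) -> float:
    """Sum bonus points for every criterion whose threshold is exceeded."""
    total = 0.0
    for criterion, threshold in THRESHOLDS.items():
        excess = rota_stats.get(criterion, 0) - threshold
        if excess > 0:
            total += excess * POINTS_PER_EXCESS_UNIT[criterion]
    return total

stats = {"night_hours_01_06": 26.0, "successive_night_shifts": 5, "shifts_over_9h": 2}
print(bonus_points(stats))  # (26-20)*0.5 + (5-3)*2.0 = 7.0
```

Criteria below their thresholds contribute nothing, so a roster designed to stay under all thresholds accrues zero points, which is the cost incentive for superiors mentioned above.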

  3. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE PAGES

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; ...

    2018-02-13

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a commercially available SciAps Z-500, which contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
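The calibration-curve workflow described above (average background-corrected intensities over the three data sets, then fit intensity against certified concentration) can be sketched as follows. The concentrations and intensities are fabricated for illustration; the paper's actual reference values are not reproduced here.

```python
# Minimal sketch of building a LIBS calibration curve: average the
# background-corrected line intensity over three data sets per reference
# alloy, fit intensity vs. certified concentration by least squares,
# then invert the curve for an unknown. All numbers are illustrative.

def linfit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Certified Cu concentration (wt%) for four hypothetical reference alloys
conc = [60.0, 70.0, 80.0, 90.0]
# Three background-subtracted intensity readings per alloy (12-point-grid averages)
raw = [[610, 590, 600], [700, 710, 690], [805, 795, 800], [900, 895, 905]]
intensity = [sum(r) / len(r) for r in raw]  # average the three data sets

slope, intercept = linfit(conc, intensity)
# Invert the calibration curve to estimate an unknown sample's concentration
unknown_intensity = 750.0
print((unknown_intensity - intercept) / slope)
```

In practice one would also report the fit quality (e.g. R²) and the limit of detection, but the inversion step above is the core of quantification from a calibration curve.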

  4. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.

    2018-03-01

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.

  5. Calibration curves for commercial copper and aluminum alloys using handheld laser-induced breakdown spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, B. N.; Martin, M. Z.; Leonard, D. N.

    Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a commercially available SciAps Z-500, which contains a class-1 solid-state laser with an output wavelength of 1532 nm, a laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys, and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements at the subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.

  6. Influence of stage of lactation and year season on composition of mares' colostrum and milk and method and time of storage on vitamin C content in mares' milk.

    PubMed

    Markiewicz-Kęszycka, Maria; Czyżak-Runowska, Grażyna; Wójtowski, Jacek; Jóźwik, Artur; Pankiewicz, Radosław; Łęska, Bogusława; Krzyżewski, Józef; Strzałkowska, Nina; Marchewka, Joanna; Bagnicka, Emilia

    2015-08-30

    Mares' milk is becoming increasingly popular in Western Europe. This study was thus aimed at investigating the impact of stage of lactation and season on the chemical composition, somatic cell count and selected physicochemical parameters of mares' colostrum and milk, at developing a method for the determination of vitamin C (ascorbic acid) in mares' milk, and at determining its content in fresh and stored milk. The analysis showed an effect of the stage of lactation on the contents of selected chemical components and physicochemical parameters of mares' milk. In successive lactation periods the levels of fat, cholesterol and citric acid, the energy value and the titratable acidity decreased, whereas the levels of lactose and vitamin C, as well as the freezing point, increased. Milk produced in autumn (September, October, November) had a higher freezing point and lower concentrations of total solids, protein, fat, cholesterol and citric acid, and a lower energy value, than milk produced in summer (June, July, August). Mares' milk was characterised by a low somatic cell count throughout lactation. In terms of vitamin C stability, the most advantageous storage method was 6-month storage of lyophilised milk. In general, the results confirmed that mares' milk is a raw material with a unique chemical composition, different from that produced by other farm animals. © 2014 Society of Chemical Industry.

  7. On the tip of the tongue: learning typing and pointing with an intra-oral computer interface.

    PubMed

    Caltenco, Héctor A; Breidegard, Björn; Struijk, Lotte N S Andreasen

    2014-07-01

    To evaluate typing and pointing performance and improvement over time of four able-bodied participants using an intra-oral tongue-computer interface for computer control. A physically disabled individual may lack the ability to efficiently control standard computer input devices. There have been several efforts to produce and evaluate interfaces that provide individuals with physical disabilities the possibility to control personal computers. Training with the intra-oral tongue-computer interface was performed by playing games over 18 sessions. Skill improvement was measured through typing and pointing exercises at the end of each training session. Typing throughput improved from averages of 2.36 to 5.43 correct words per minute. Pointing throughput improved from averages of 0.47 to 0.85 bits/s. Target tracking performance, measured as relative time on target, improved from averages of 36% to 47%. Path following throughput improved from averages of 0.31 to 0.83 bits/s and decreased to 0.53 bits/s with more difficult tasks. Learning curves support the notion that the tongue can rapidly learn novel motor tasks. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, which makes the tongue a feasible input organ for computer control. Intra-oral computer interfaces could provide individuals with severe upper-limb mobility impairments the opportunity to control computers and automatic equipment. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, but does not cause fatigue easily and might be invisible to other people, which is highly prioritized by assistive device users. Combination of visual and auditory feedback is vital for a good performance of an intra-oral computer interface and helps to reduce involuntary or erroneous activations.
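The pointing throughputs quoted above (0.47 to 0.85 bits/s) are of the kind conventionally computed from Fitts' law. The abstract does not state its exact formula, so the sketch below uses the common Shannon formulation with invented trial data.

```python
import math

# Hedged sketch of pointing throughput in bits/s (Fitts' law, Shannon
# formulation: ID = log2(D/W + 1), throughput = mean ID/MT). The
# distances, target widths, and movement times are invented for
# illustration, not data from the tongue-interface study.

def fitts_throughput(trials):
    """Mean of ID/MT over trials; ID in bits, MT in seconds."""
    return sum(math.log2(d / w + 1) / mt for d, w, mt in trials) / len(trials)

# (distance px, target width px, movement time s) per pointing trial
trials = [(300, 30, 4.0), (450, 30, 5.5), (150, 50, 2.5)]
print(round(fitts_throughput(trials), 2))
```

Higher throughput means more index-of-difficulty bits transmitted per second; the invented trials above land near the upper end of the range reported for the tongue interface.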

  8. Comparative Effectiveness of Tai Chi Versus Physical Therapy for Knee Osteoarthritis: A Randomized Trial.

    PubMed

    Wang, Chenchen; Schmid, Christopher H; Iversen, Maura D; Harvey, William F; Fielding, Roger A; Driban, Jeffrey B; Price, Lori Lyn; Wong, John B; Reid, Kieran F; Rones, Ramel; McAlindon, Timothy

    2016-07-19

    Few remedies effectively treat long-term pain and disability from knee osteoarthritis. Studies suggest that Tai Chi alleviates symptoms, but no trials have directly compared Tai Chi with standard therapies for osteoarthritis. To compare Tai Chi with standard physical therapy for patients with knee osteoarthritis. Randomized, 52-week, single-blind comparative effectiveness trial. (ClinicalTrials.gov: NCT01258985). An urban tertiary care academic hospital. 204 participants with symptomatic knee osteoarthritis (mean age, 60 years; 70% women; 53% white). Tai Chi (2 times per week for 12 weeks) or standard physical therapy (2 times per week for 6 weeks, followed by 6 weeks of monitored home exercise). The primary outcome was Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) score at 12 weeks. Secondary outcomes included physical function, depression, medication use, and quality of life. At 12 weeks, the WOMAC score was substantially reduced in both groups (Tai Chi, 167 points [95% CI, 145 to 190 points]; physical therapy, 143 points [CI, 119 to 167 points]). The between-group difference was not significant (24 points [CI, -10 to 58 points]). Both groups also showed similar clinically significant improvement in most secondary outcomes, and the benefits were maintained up to 52 weeks. Of note, the Tai Chi group had significantly greater improvements in depression and the physical component of quality of life. The benefit of Tai Chi was consistent across instructors. No serious adverse events occurred. Patients were aware of their treatment group assignment, and the generalizability of the findings to other settings remains undetermined. Tai Chi produced beneficial effects similar to those of a standard course of physical therapy in the treatment of knee osteoarthritis. National Center for Complementary and Integrative Health of the National Institutes of Health.
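A back-of-envelope check (not the trial's actual mixed-model analysis) shows why the 24-point between-group difference is not significant: recovering each group's standard error from its reported 95% CI and forming a normal-approximation CI for the difference gives an interval that crosses zero, close to the reported (-10 to 58).

```python
import math

# Reconstruct approximate standard errors from the reported 95% CIs
# (Tai Chi: 167 [145, 190]; physical therapy: 143 [119, 167]) and form
# a normal-approximation CI for the difference of means. This is an
# illustrative approximation, not the paper's model-based estimate.

def se_from_ci(lo, hi, z=1.96):
    """Standard error implied by a symmetric 95% confidence interval."""
    return (hi - lo) / (2 * z)

se_taichi = se_from_ci(145, 190)
se_pt = se_from_ci(119, 167)
diff = 167 - 143
half_width = 1.96 * math.sqrt(se_taichi**2 + se_pt**2)
print(round(diff - half_width, 1), round(diff + half_width, 1))  # CI crosses zero
```

Because the interval contains zero, the two treatments cannot be distinguished at the 5% level, consistent with the trial's conclusion of comparable effectiveness.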

  9. Progress on the CWU READI Analysis Center

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.

    2015-12-01

    Real-time GPS position streams are desirable for a variety of seismic monitoring and hazard mitigation applications. We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman-filter-based online stream editor that produces independent estimates of carrier phase integer biases and other parameters. Positions are then estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software, with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical, with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built aggregation-distribution software based on the RabbitMQ messaging platform. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a RESTful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time series, displacement vector fields, and map-view, contoured, peak ground displacement. This Java-based front-end is available for download through the PANGA website. We are currently analyzing 80 PBO and PANGA stations along the Cascadia margin and gearing up to process all 400+ real-time stations operating in the Pacific Northwest, many of which are currently telemetered in real time to CWU. These will serve as milestones towards our overarching goal of extending our processing to include all of the available real-time streams from the Pacific rim.
    In addition, we have developed a Kalman filter to combine CWU real-time PPP solutions with Scripps Institution of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
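The core of combining solutions from multiple analysis centers is a Kalman-style measurement update. A full filter tracks a state over time; the sketch below shows only the inverse-variance-weighted fusion of two independent position estimates, with illustrative numbers rather than real CWU/SIO/USGS solutions.

```python
# Simplified sketch of fusing two point-position estimates with their
# variances (the scalar Kalman measurement update). Values are
# illustrative, not real analysis-center solutions.

def fuse(x1, var1, x2, var2):
    """Kalman-style fusion of two scalar estimates with variances."""
    k = var1 / (var1 + var2)      # gain: how much to trust the second estimate
    x = x1 + k * (x2 - x1)
    var = (1 - k) * var1          # fused variance is smaller than either input
    return x, var

# East-component displacements (m) from two centers, with variances (m^2)
x, var = fuse(0.021, 0.025**2, 0.027, 0.050**2)
print(round(x, 4), var < 0.025**2)  # fused estimate; variance shrinks
```

The fused estimate sits closer to the lower-variance input, which is exactly why a multi-center combination is more robust than any single stream.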

  10. Rapid, decimeter-resolution fault zone topography mapped with Structure from Motion

    NASA Astrophysics Data System (ADS)

    Johnson, K. L.; Nissen, E.; Saripalli, S.; Arrowsmith, R.; McGarey, P.; Scharer, K. M.; Williams, P. L.

    2013-12-01

    Recent advances in the generation of high-resolution topography have revolutionized our ability to detect subtle geomorphic features related to ground-rupturing earthquakes. Currently, the most popular topographic mapping methods are airborne Light Detection And Ranging (LiDAR) and terrestrial laser scanning (TLS). Though powerful, these laser scanning methods have some inherent drawbacks: airborne LiDAR is expensive and can be logistically complicated, while TLS is time consuming even for small field sites and suffers from patchy coverage due to its restricted field of view. An alternative mapping technique, called Structure from Motion (SfM), builds upon traditional photogrammetry to reproduce the topography and texture of a scene from photographs taken at varying viewpoints. The improved availability of cheap unmanned aerial vehicles (UAVs) as camera platforms further expedites data collection by covering large areas efficiently with optimal camera angles. Here, we introduce a simple and affordable UAV- or balloon-based SfM mapping system which can produce dense point clouds and sub-decimeter resolution digital elevation models (DEMs) registered to geospatial coordinates using either the photographs' GPS tags or a few ground control points across the scene. The system is ideally suited for studying ruptures of prehistoric, historic, and modern earthquakes in areas of sparse or low-lying vegetation. We use two sites on southern California faults to illustrate. The first is the ~0.1 km2 Washington Street site, located on the Banning strand of the San Andreas fault near Thousand Palms. A high-resolution DEM with ~700 points/m2 was produced from 230 photos collected on a balloon platform flying at 50 m above the ground. The second site is the Galway Lake Road site, which spans a ~1 km strip of the 1992 Mw 7.3 Landers earthquake rupture on the Emerson Fault.
The 100 points/m2 DEM was produced from 267 photos taken with a balloon platform at a height of 60 m above the ground. We compare our SfM results to existing airborne LiDAR or TLS datasets. Each SfM survey required less than 2 hours for setup and data collection, much less than TLS data collection would require for sites of this size. Processing time is somewhat slower, but depends on the desired DEM quality and is almost fully automated. The SfM point cloud densities we present are comparable to TLS but exceed the density of most airborne LiDAR, and the orthophotos (texture maps) from the SfM are valuable complements to the DEMs. The SfM topography illuminates features along the faults that can be used to measure offsets from past ruptures, offering the potential to enhance regional seismic hazard analyses.
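Turning a point cloud into a gridded DEM, and checking the points/m² density quoted above, reduces to binning points into cells and averaging elevations. The sketch below uses a few synthetic points in place of a real SfM cloud.

```python
# Illustrative sketch of gridding an (x, y, z) point cloud into a DEM
# and computing point density (points per m^2). Synthetic points stand
# in for a real SfM cloud.
from collections import defaultdict

def grid_dem(points, cell=0.1):
    """Bin (x, y, z) points into cell x cell squares; return mean z per cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {k: sum(v) / len(v) for k, v in cells.items()}

points = [(0.02, 0.03, 1.0), (0.07, 0.05, 1.2), (0.15, 0.02, 0.9)]
dem = grid_dem(points, cell=0.1)
area = 0.1 * 0.1
density = len(points) / (len(dem) * area)  # points per m^2 over occupied cells
print(len(dem), density)
```

Production tools interpolate empty cells and filter vegetation returns, but the cell-averaging step above is the basic operation behind a "700 points/m²" DEM.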

  11. Brightness and magnetic evolution of solar coronal bright points

    NASA Astrophysics Data System (ADS)

    Ugarte-Urra, I.

    2004-12-01

    This thesis presents a study of the brightness and magnetic evolution of several extreme ultraviolet (EUV) coronal bright points (hereafter BPs). BPs are loop-like features of enhanced emission in coronal EUV and X-ray images of the Sun that are associated with the interaction of opposite photospheric magnetic polarities with magnetic fluxes of ≈10^18 - 10^19 Mx. The study was carried out using several instruments on board the Solar and Heliospheric Observatory (SOHO): the Extreme-ultraviolet Imaging Telescope (EIT), the Coronal Diagnostic Spectrometer (CDS) and the Michelson Doppler Imager (MDI), supported by high-resolution imaging from the Transition Region And Coronal Explorer (TRACE). The results confirm that, down to 1'' (i.e. ~715 km) resolution, BPs are made of small loops with lengths of ~6 Mm and cross-sections of ~2 Mm. The loops are very dynamic, evolving on time scales as short as 1 - 2 minutes. This is reflected in a highly variable EUV response, with fluctuations highly correlated in spectral lines at transition region temperatures (in the range 3.2x10^4 - 3.5x10^5 K), but not always at coronal temperatures. A wavelet analysis of the intensity variations reveals, for the first time, the existence of quasi-periodic oscillations with periods ranging from 400 to 1000 s, in the range of periods characteristic of the chromospheric network. The link between BPs and network bright points is discussed, as well as the interpretation of the oscillations in terms of global acoustic modes of closed magnetic structures. A comparison of the magnetic flux evolution of the polarities with the EUV flux changes is also presented. Throughout their lifetime, the intrinsic EUV emission of BPs is found to depend on the total magnetic flux of the polarities. On short time scales, co-spatial and co-temporal TRACE and MDI images reveal the signature of heating events that produce sudden EUV brightenings simultaneous with magnetic flux cancellations. 
This is interpreted in terms of magnetic reconnection events. Finally, an electron density study of six coronal bright points produces values of ~1.6x10^9 cm^-3, closer to active region plasma than to the quiet Sun. The analysis of a large coronal loop (half length of 72 Mm) introduces the discussion of the prospects for future plasma diagnostics of BPs with forthcoming solar missions such as Solar-B.
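Detecting the 400 to 1000 s oscillations mentioned above amounts to finding excess spectral power at those periods. The thesis used wavelets; for brevity the sketch below evaluates plain Fourier power at a few trial periods on a synthetic light curve (a 600 s sinusoid plus a slow trend), which is an assumption-laden stand-in for real EUV data.

```python
import math

# Toy illustration of finding a quasi-periodic oscillation in a
# bright-point light curve. Synthetic signal: 600 s period plus a slow
# linear trend, sampled every 30 s. Real analyses use wavelets, which
# also localize the oscillation in time.

def power_at_period(signal, dt, period):
    """Squared amplitude of the Fourier component at the given period."""
    w = 2 * math.pi * dt / period
    re = sum(s * math.cos(w * i) for i, s in enumerate(signal))
    im = sum(s * math.sin(w * i) for i, s in enumerate(signal))
    return re * re + im * im

dt = 30.0                                   # seconds per sample
n = 200                                     # 100 minutes of data
signal = [math.sin(2 * math.pi * i * dt / 600.0) + 0.001 * i for i in range(n)]
periods = [300, 450, 600, 750, 900]
best = max(periods, key=lambda p: power_at_period(signal, dt, p))
print(best)  # 600
```

The power at the true 600 s period dominates the other candidates by orders of magnitude, which is the signature a wavelet or Fourier periodogram picks out.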

  12. New generation NMR bioreactor coupled with high-resolution NMR spectroscopy leads to novel discoveries in Moorella thermoacetica metabolic profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, Junfeng; Isern, Nancy G.; Ewing, R James

    An in-situ nuclear magnetic resonance (NMR) bioreactor was developed and employed to monitor microbial metabolism under batch-growth conditions in real time. We selected Moorella thermoacetica ATCC 49707 as a test case. M. thermoacetica (formerly Clostridium thermoaceticum) is a strictly anaerobic, thermophilic, acetogenic, gram-positive bacterium with potential for industrial production of chemicals. The metabolic profiles of M. thermoacetica were characterized during growth in batch mode on xylose (a component of lignocellulosic biomass) using the new generation NMR bioreactor in combination with high-resolution, high-sensitivity NMR (HR-NMR) spectroscopy. In-situ NMR measurements were performed using water-suppressed 1H NMR spectroscopy at an NMR frequency of 500 MHz, and aliquots of the bioreactor contents were taken for 600 MHz HR-NMR spectroscopy at specific intervals to confirm metabolite identifications and expand metabolite coverage. M. thermoacetica demonstrated the metabolic potential to produce formate, ethanol and methanol from xylose, in addition to its known capability of producing acetic acid. Real-time monitoring of bioreactor conditions showed a temporary pH decrease, with a concomitant increase in formic acid, during exponential growth. Fermentation experiments performed outside of the magnet showed that the strong magnetic field employed for NMR detection did not significantly affect cell metabolism. Use of the in-situ NMR bioreactor facilitated monitoring of the fermentation process in real time, enabling identification of intermediate and end-point metabolites and their correlation with pH and biomass produced during culture growth. Real-time monitoring of culture metabolism using the NMR bioreactor in combination with HR-NMR spectroscopy will allow optimization of the metabolism of microorganisms producing valuable bioproducts.

  13. An ultrasound-assisted system for the optimization of biodiesel production from chicken fat oil using a genetic algorithm and response surface methodology.

    PubMed

    Fayyazi, E; Ghobadian, B; Najafi, G; Hosseinzadeh, B; Mamat, R; Hosseinzadeh, J

    2015-09-01

    Biodiesel is a green (clean), renewable energy source and an alternative to diesel fuel. Biodiesel can be produced from vegetable oil, animal fat and waste cooking oil or fat. Fats and oils react with alcohol to produce methyl ester, which is generally known as biodiesel. Because vegetable oil and animal fat wastes are cheaper, the tendency to produce biodiesel from these materials is increasing. In this research, the effects of the alcohol-to-oil molar ratio (4:1, 6:1, 8:1), the catalyst concentration (0.75%, 1% and 1.25% w/w) and the ultrasound-assisted transesterification reaction time (3, 6 and 9 min) on the fatty acid-to-methyl ester (biodiesel) conversion percentage were studied. In biodiesel production from chicken fat, the oil-to-biodiesel conversion percentage increased as the catalyst concentration rose to 1% and then decreased. Upon increasing the molar ratio from 4:1 to 6:1 and then to 8:1, the oil-to-biodiesel conversion percentage increased by 21.9% and 22.8%, respectively. The optimal point was determined by response surface methodology (RSM) and genetic algorithms (GAs). The biodiesel yield from chicken fat using ultrasonic waves with a 1% w/w catalyst concentration, a 7:1 alcohol-to-oil molar ratio and a 9 min reaction time was 94.8%. At a similar conversion percentage, biodiesel production by ultrasonic waves reduced the reaction time by approximately 87.5% compared to the conventional method, which makes the ultrasonic method superior. Copyright © 2015. Published by Elsevier B.V.
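The GA-over-response-surface idea can be sketched as follows. The quadratic yield model below is invented (its coefficients are not the paper's fitted RSM; only its optimum is placed near the reported 1% catalyst, 7:1 ratio, 9 min conditions), and the GA is a deliberately tiny elitist variant.

```python
import random

# Hedged sketch: maximize a hypothetical quadratic response surface over
# catalyst %, alcohol-to-oil molar ratio, and reaction time with a tiny
# genetic algorithm. Model coefficients are invented for illustration.

def yield_model(cat, ratio, time_min):
    """Hypothetical RSM: peaks near cat=1.0%, ratio=7:1, time=9 min."""
    return (94.8 - 40 * (cat - 1.0) ** 2
                 - 0.8 * (ratio - 7.0) ** 2
                 - 0.15 * (time_min - 9.0) ** 2)

def ga(pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    rand = lambda: (rng.uniform(0.75, 1.25), rng.uniform(4, 8), rng.uniform(3, 9))
    pop = [rand() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: yield_model(*p), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)       # crossover: average two parents
            children.append(tuple((x + y) / 2 + rng.gauss(0, 0.05)
                                  for x, y in zip(a, b)))  # plus mutation
        pop = parents + children
    return max(pop, key=lambda p: yield_model(*p))

cat, ratio, t = ga()
print(round(cat, 2), round(ratio, 1), round(t, 1))  # should approach the model optimum
```

Because the best individuals are always retained, the fitness of the returned solution never decreases across generations, and the search homes in on the surface's maximum without needing gradients.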

  14. A Morphometric Assessment of the Intended Function of Cached Clovis Points

    PubMed Central

    Buchanan, Briggs; Kilby, J. David; Huckell, Bruce B.; O'Brien, Michael J.; Collard, Mark

    2012-01-01

    A number of functions have been proposed for cached Clovis points. The least complicated hypothesis is that they were intended to arm hunting weapons. It has also been argued that they were produced for use in rituals or in connection with costly signaling displays. Lastly, it has been suggested that some cached Clovis points may have been used as saws. Here we report a study in which we morphometrically compared Clovis points from caches with Clovis points recovered from kill and camp sites to test two predictions of the hypothesis that cached Clovis points were intended to arm hunting weapons: 1) cached points should be the same shape as, but generally larger than, points from kill/camp sites, and 2) cached points and points from kill/camp sites should follow the same allometric trajectory. The results of the analyses are consistent with both predictions and therefore support the hypothesis. A follow-up review of the fit between the results of the analyses and the predictions of the other hypotheses indicates that the analyses support only the hunting equipment hypothesis. We conclude from this that cached Clovis points were likely produced with the intention of using them to arm hunting weapons. PMID:22348012
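The allometric-trajectory test described above reduces to comparing log-log regression slopes between the two samples. The measurements below are fabricated for illustration only; the study's actual landmark data are not reproduced here.

```python
import math

# Sketch of an allometric-trajectory comparison: regress log(width) on
# log(length) separately for cached and kill/camp-site points and
# compare the slopes. All measurements are invented for illustration.

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def log_slope(lengths, widths):
    return slope([math.log(l) for l in lengths], [math.log(w) for w in widths])

cached = ([110, 130, 150, 170], [32, 36, 40, 44])    # mm; larger points
killsite = ([60, 75, 90, 105], [20, 24, 27, 30])     # mm; smaller points
s_cached = log_slope(*cached)
s_kill = log_slope(*killsite)
print(abs(s_cached - s_kill) < 0.15)  # similar slopes: same allometric trajectory
```

Similar slopes with an offset in size is exactly the pattern predicted by the hunting-equipment hypothesis: cached points are scaled-up versions of the points discarded at kill and camp sites.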

  15. Transmembrane myosin chitin synthase involved in mollusc shell formation produced in Dictyostelium is active

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoenitzer, Veronika; Universitaet Regensburg, Biochemie I, Universitaetsstrasse 31, D-93053 Regensburg; Eichner, Norbert

    Highlights: • Dictyostelium produces the 264 kDa myosin chitin synthase of the bivalve mollusc Atrina. • Chitin synthase activity releases chitin, partly associated with the cell surface. • Membrane extracts of transgenic slime molds produce radiolabeled chitin in vitro. • Chitin-producing Dictyostelium cells can be characterized by atomic force microscopy. • This model system enables us to study initial processes of chitin biomineralization. -- Abstract: Several mollusc shells contain chitin, which is formed by a transmembrane myosin motor enzyme. This protein could be involved in sensing mechanical and structural changes of the forming, mineralizing extracellular matrix. Here we report the heterologous expression of the transmembrane myosin chitin synthase Ar-CS1 of the bivalve mollusc Atrina rigida (2286 amino acid residues, M.W. 264 kDa/monomer) in Dictyostelium discoideum, a model organism for myosin motor proteins. Confocal laser scanning immunofluorescence microscopy (CLSM), chitin-binding GFP detection of chitin on cells and released to the cell culture medium, and a radiochemical activity assay of membrane extracts revealed expression and enzymatic activity of the mollusc chitin synthase in transgenic slime mold cells. The first high-resolution atomic force microscopy (AFM) images of Ar-CS1-transformed, cellulose synthase-deficient D. discoideum dcsA⁻ cell lines are shown.

  16. 7 CFR 1212.19 - Producer.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., PROMOTION, CONSUMER EDUCATION AND INDUSTRY INFORMATION ORDER Honey Packers and Importers Research, Promotion, Consumer Education, and Industry Information Order Definitions § 1212.19 Producer. “Producer” means any... producing, or causing to be produced, honey beyond personal use and having value at first point of sale. ...

  17. A composite lithology log while drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tannenbaum, E.; Sutcliffe, B.; Franks, A.

    A new method for producing a computerized composite lithology log (CLL) while drilling, by integrating MWD (measurement while drilling) and surface data, is described. At present, lithology logs are produced at the well site by mud loggers. They provide a basic description and the relative amounts of lithologies. Major difficulties are encountered in relating the cuttings to their original formations due to mixing in the drilling mud during transport to the surface, sloughing shales, flawed sampling, etc. This results in poor control of the stratigraphic sequence and of the depth of formation boundaries. A composite log can be produced after drilling using additional inputs such as wireline, petrography, and paleontology, but this process is labor intensive and expensive. The CLL integrates three types of data (MWD mechanical, MWD geophysical, and surface cuttings) acquired during drilling, in three time stages: (1) Real time. MWD drilling mechanical data, including the rate of penetration and the downhole torque. This stage would provide bed boundaries and some inferred lithology, assisting the driller with immediate drilling decisions and determining formation tops for coring, casing point, and correlation. (2) MWD time. Recomputation of the above by adding MWD geophysical data (gamma ray, resistivity, neutron-density). This stage would upgrade the lithology inference and give higher resolution to bed boundaries. (3) Lag time. Detailed analysis of surface cuttings to confirm the inferred lithologies. This last input results in a high-quality CLL with accurate lithologies and bed boundaries.

  18. Book Review: Reiner Salzer and Heinz W. Siesler (Eds.): Infrared and Raman spectroscopic imaging, 2nd ed.

    DOE PAGES

    Moore, David Steven

    2015-05-10

    This second edition of "Infrared and Raman Spectroscopic Imaging" brings practitioners in that wide-ranging field, as well as other readers, to the current state of the art in a well-produced, full-color, completely revised and updated volume. The new edition chronicles the expanded application of vibrational spectroscopic imaging, from yesterday's time-consuming point-by-point buildup of a hyperspectral image cube, through the improvements afforded by focal plane arrays and line-scan imaging, to methods applicable beyond the diffraction limit. It instructs the reader in the improved instrumentation and the image- and data-analysis methods, and expounds on their application to fundamental biomedical knowledge, food and agricultural surveys, materials science, process and quality control, and many other areas.

  19. Testing the Simple Biosphere model (SiB) using point micrometeorological and biophysical data

    NASA Technical Reports Server (NTRS)

    Sellers, P. J.; Dorman, J. L.

    1987-01-01

    The suitability of the Simple Biosphere (SiB) model of Sellers et al. (1986) for calculation of the surface fluxes for use within general circulation models is assessed. The structure of the SiB model is described, and its performance is evaluated in terms of its ability to realistically and accurately simulate biophysical processes over a number of test sites, including Ruthe (Germany), South Carolina (U.S.), and Central Wales (UK), for which point biophysical and micrometeorological data were available. The model produced simulations of the energy balances of barley, wheat, maize, and Norway Spruce sites over periods ranging from 1 to 40 days. Generally, it was found that the model reproduced time series of latent, sensible, and ground-heat fluxes and surface radiative temperature comparable with the available data.

  20. Microwave Heating of Metal Powder Clusters

    NASA Astrophysics Data System (ADS)

    Rybakov, K. I.; Semenov, V. E.; Volkovskaya, I. I.

    2018-01-01

    The results of simulating the rapid microwave heating of spherical clusters of metal particles to the melting point are reported. In the simulation, the cluster is subjected to a plane electromagnetic wave. The cluster size is comparable to the wavelength; the perturbations of the field inside the cluster are accounted for within an effective medium approximation. It is shown that the time of heating in vacuum to the melting point does not exceed 1 s when the electric field strength in the incident wave is about 2 kV/cm at a frequency of 24 GHz or 5 kV/cm at a frequency of 2.45 GHz. The obtained results demonstrate the feasibility of using rapid microwave heating for the spheroidization of metal particles with the objective of producing high-quality powders for additive manufacturing technologies.

  1. Optimal solar sail planetocentric trajectories

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.

    1977-01-01

    The analysis of the solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square sail model is included, and some consideration is given to a heliogyro sail model. Transfers from orbit to a subescape point and from orbit to orbit are considered. Trajectories about the four inner planets can be calculated, and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem which arises from the application of optimization theory is accomplished with a Newton procedure. Time optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.

  2. A new 4-dimensional imaging system for jaw tracking.

    PubMed

    Lauren, Mark

    2014-01-01

    A non-invasive 4D imaging system that produces high resolution time-based 3D surface data has been developed to capture jaw motion. Fluorescent microspheres are brushed onto both tooth and soft-tissue areas of the upper and lower arches to be imaged. An extraoral hand-held imaging device, operated about 12 cm from the mouth, captures a time-based set of perspective image triplets of the patch areas. Each triplet, containing both upper and lower arch data, is converted to a high-resolution 3D point mesh using photogrammetry, providing the instantaneous relative jaw position. Eight 3D positions per second are captured. Using one of the 3D frames as a reference, a 4D model can be constructed to describe the incremental free body motion of the mandible. The surface data produced by this system can be registered to conventional 3D models of the dentition, allowing them to be animated. Applications include integration into prosthetic CAD and CBCT data.

  3. Micromachined Thermoelectric Sensors and Arrays and Process for Producing

    NASA Technical Reports Server (NTRS)

    Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)

    2000-01-01

    Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric films. At room temperature and under vacuum these detectors exhibit response times of 99 ms, zero frequency D* values of 1.4 x 10(exp 9) cmHz(exp 1/2)/W and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 x 10(exp 9) cm Hz(exp 1/2)/W for 83 ms response times.

  4. Wollaston prism phase-stepping point diffraction interferometer and method

    DOEpatents

    Rushford, Michael C.

    2004-10-12

    A Wollaston prism phase-stepping point diffraction interferometer for testing a test optic. The Wollaston prism shears light into reference and signal beams, and provides phase stepping at increased accuracy by translating the Wollaston prism in a lateral direction with respect to the optical path. The reference beam produced by the Wollaston prism is directed through a pinhole of a diaphragm to produce a perfect spherical reference wave. The spherical reference wave is recombined with the signal beam to produce an interference fringe pattern of greater accuracy.

  5. Biological signatures of dynamic river networks from a coupled landscape evolution and neutral community model

    NASA Astrophysics Data System (ADS)

    Stokes, M.; Perron, J. T.

    2017-12-01

    Freshwater systems host exceptionally species-rich communities whose spatial structure is dictated by the topology of the river networks they inhabit. Over geologic time, river networks are dynamic; drainage basins shrink and grow, and river capture establishes new connections between previously separated regions. It has been hypothesized that these changes in river network structure influence the evolution of life by exchanging and isolating species, perhaps boosting biodiversity in the process. However, no general model exists to predict the evolutionary consequences of landscape change. We couple a neutral community model of freshwater organisms to a landscape evolution model in which the river network undergoes drainage divide migration and repeated river capture. Neutral community models are macro-ecological models that include stochastic speciation and dispersal to produce realistic patterns of biodiversity. We explore the consequences of three modes of speciation - point mutation, time-protracted, and vicariant (geographic) speciation - by tracking patterns of diversity in time and comparing the final result to an equilibrium solution of the neutral model on the final landscape. Under point mutation, a simple model of stochastic and instantaneous speciation, the results are identical to the equilibrium solution and indicate the dominance of the species-area relationship in forming patterns of diversity. The number of species in a basin is proportional to its area, and regional species richness reaches its maximum when drainage area is evenly distributed among sub-basins. Time-protracted speciation is also modeled as a stochastic process, but in order to produce more realistic rates of diversification, speciation is not assumed to be instantaneous. Rather, each new species must persist for a certain amount of time before it is considered to be established. 
When vicariance (geographic speciation) is included, there is a transient signature of increased regional diversity after river capture. The results indicate that the mode of speciation and the rate of speciation relative to the rate of divide migration determine the evolutionary signature of river capture.
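The point-mutation mode can be illustrated with a minimal neutral-community simulation, in the spirit of the model described above but with all parameter values chosen purely for illustration: at each step a random individual dies and is replaced either by a brand-new species (probability nu) or by the offspring of a random individual. Equilibrium richness then grows with community size, echoing the species-area relationship the authors report.

```python
import numpy as np

def neutral_richness(J, nu, steps, seed=0):
    """Species richness of a neutral community of J individuals under
    point-mutation speciation at rate nu. Illustrative sketch only; the
    study's coupled landscape model is far richer than this."""
    rng = np.random.default_rng(seed)
    community = np.zeros(J, dtype=int)  # start as a single species
    next_id = 1
    for _ in range(steps):
        dead = rng.integers(J)          # a random individual dies...
        if rng.random() < nu:           # ...replaced by a new species,
            community[dead] = next_id
            next_id += 1
        else:                           # ...or by a copy of another individual
            community[dead] = community[rng.integers(J)]
    return len(np.unique(community))
```

With the same speciation rate, the larger community (a stand-in for a larger drainage basin) supports many more species at equilibrium.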

  6. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three dimensional (3D) space. Locations of acoustic sources are indicated by the Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal to noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance and frequency on spatial resolution and accuracy of source localizations. Based on these results, a simple device that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera is fabricated. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it moves behind the measurement microphones. Practical limitations of this method are discussed.
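The triangulation step of such a hybrid scheme can be sketched as a brute-force fit of a source position to the measured time differences of arrival (TDOA). Everything below (microphone spacing, search volume, grid resolution) is an assumption for illustration, and the paper's de-noising stage is omitted.

```python
import numpy as np

C = 343.0  # nominal speed of sound in air, m/s

def locate_source(mics, tdoa, grid=np.arange(0.0, 3.01, 0.05)):
    """Locate a point source from the TDOA at four microphones by minimizing
    squared range-difference residuals over a coarse 3D grid. tdoa[i] is the
    arrival time at mic i+1 minus the arrival time at mic 0."""
    mics = np.asarray(mics, float)
    dd = C * np.asarray(tdoa, float)                    # range differences (m)
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)   # candidate positions
    r = np.linalg.norm(pts[:, None, :] - mics[None], axis=2)
    cost = (((r[:, 1:] - r[:, :1]) - dd) ** 2).sum(axis=1)
    return pts[np.argmin(cost)]
```

In practice one would refine the grid minimum with a local least-squares solver and weight the residuals by their estimated SNR.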

  7. Origin of acoustic emission produced during single point machining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heiple, C. R.; Carpenter, S. H.; Armentrout, D. L.

    1991-01-01

    Acoustic emission was monitored during single point, continuous machining of 4340 steel and Ti-6Al-4V as a function of heat treatment. Acoustic emission produced during tensile and compressive deformation of these alloys has been previously characterized as a function of heat treatment. Heat treatments which increase the strength of 4340 steel increase the amount of acoustic emission produced during deformation, while heat treatments which increase the strength of Ti-6Al-4V decrease the amount of acoustic emission produced during deformation. If chip deformation were the primary source of acoustic emission during single point machining, then opposite trends in the level of acoustic emission produced during machining as a function of material strength would be expected for these two alloys. Trends in rms acoustic emission level with increasing strength were similar for both alloys, demonstrating that chip deformation is not a major source of acoustic emission in single point machining. Acoustic emission has also been monitored as a function of machining parameters on 6061-T6 aluminum, 304 stainless steel, 17-4PH stainless steel, lead, and Teflon. The data suggest that sliding friction between the nose and/or flank of the tool and the newly machined surface is the primary source of acoustic emission. Changes in acoustic emission with tool wear were strongly material dependent. 21 refs., 19 figs., 4 tabs.

  8. Launching and controlling Gaussian beams from point sources via planar transformation media

    NASA Astrophysics Data System (ADS)

    Odabasi, Hayrettin; Sainath, Kamalesh; Teixeira, Fernando L.

    2018-02-01

    Based on operations prescribed under the paradigm of complex transformation optics (CTO) [F. Teixeira and W. Chew, J. Electromagn. Waves Appl. 13, 665 (1999), 10.1163/156939399X01104; F. L. Teixeira and W. C. Chew, Int. J. Numer. Model. 13, 441 (2000), 10.1002/1099-1204(200009/10)13:5%3C441::AID-JNM376%3E3.0.CO;2-J; H. Odabasi, F. L. Teixeira, and W. C. Chew, J. Opt. Soc. Am. B 28, 1317 (2011), 10.1364/JOSAB.28.001317; B.-I. Popa and S. A. Cummer, Phys. Rev. A 84, 063837 (2011), 10.1103/PhysRevA.84.063837], it was recently shown in [G. Castaldi, S. Savoia, V. Galdi, A. Alù, and N. Engheta, Phys. Rev. Lett. 110, 173901 (2013), 10.1103/PhysRevLett.110.173901] that a complex source point (CSP) can be mimicked by parity-time (PT) transformation media. Such a coordinate transformation has a mirror symmetry for the imaginary part, and results in a balanced loss/gain metamaterial slab. A CSP produces a Gaussian beam and, consequently, a point source placed at the center of such a metamaterial slab produces a Gaussian beam propagating away from the slab. Here, we extend the CTO analysis to nonsymmetric complex coordinate transformations as put forth in [S. Savoia, G. Castaldi, and V. Galdi, J. Opt. 18, 044027 (2016), 10.1088/2040-8978/18/4/044027] and verify that, by using simply a (homogeneous) doubly anisotropic gain-media metamaterial slab, one can still mimic a CSP and produce a Gaussian beam. In addition, we show that Gaussian-like beams can be produced by point sources placed outside the slab as well. By making use of the extra degrees of freedom (the real and imaginary parts of the coordinate transformation) provided by CTO, the near-zero requirement on the real part of the resulting constitutive parameters can be relaxed to facilitate potential realization of Gaussian-like beams. We illustrate how beam properties such as peak amplitude and waist location can be controlled by a proper choice of (complex-valued) CTO Jacobian elements.
In particular, the beam waist location may be moved bidirectionally by allowing for negative entries in the Jacobian (equivalent to inducing negative refraction effects). These results are then interpreted in light of the ensuing CSP location.

  9. Time-reversal symmetric resolution of unity without background integrals in open quantum systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatano, Naomichi, E-mail: hatano@iis.u-tokyo.ac.jp; Ordonez, Gonzalo, E-mail: gordonez@butler.edu

    2014-12-15

    We present a new complete set of states for a class of open quantum systems, to be used in expansion of the Green’s function and the time-evolution operator. A remarkable feature of the complete set is that it observes time-reversal symmetry in the sense that it contains decaying states (resonant states) and growing states (anti-resonant states) in parallel. We can thereby pinpoint the occurrence of the breaking of time-reversal symmetry at the choice of whether we solve the Schrödinger equation as an initial-condition problem or a terminal-condition problem. Another feature of the complete set is that in the subspace of the central scattering area of the system, it consists of contributions of all states with point spectra but does not contain any background integrals. In computing the time evolution, we can clearly see which point spectrum's contribution produces which time dependence. In the whole infinite state space, the complete set does contain an integral, but it is over unperturbed eigenstates of the environmental area of the system and hence can be calculated analytically. We demonstrate the usefulness of the complete set by computing explicitly the survival probability and the escaping probability as well as the dynamics of wave packets. The origin of each term of the matrix elements is clear in our formulation; in particular, the exponential decays are due to the resonance poles.

  10. Pump-Probe Spectroscopy Using the Hadamard Transform.

    PubMed

    Beddard, Godfrey S; Yorke, Briony A

    2016-08-01

    A new method of performing pump-probe experiments is proposed and experimentally demonstrated by a proof of concept on the millisecond scale. The idea behind this method is to measure the total probe intensity arising from several time points as a group, instead of measuring each time point separately. These measurements are multiplexed signals that are then transformed into the true signal via multiplication with a binary Hadamard S matrix. Each group of probe pulses is determined by using the pattern of a row of the Hadamard S matrix, and the experiment is completed by rotating this pattern by one step for each sample excitation until the original pattern is again produced. Thus, to measure n time points, n excitation events are needed and n probe patterns, each taken from the n × n S matrix. The time resolution is determined by the shortest time between the probe pulses. In principle, this method could be used over all timescales, instead of the conventional pump-probe method which uses delay lines for picosecond and faster time resolution, or fast detectors and oscilloscopes on longer timescales. This new method is particularly suitable for situations where the probe intensity is weak and/or the detector is noisy. When the detector is noisy, there is in principle a signal to noise advantage over conventional pump-probe methods. © The Author(s) 2016.
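The grouping and recovery described above can be sketched with standard S-matrix algebra. The construction below builds the S matrix from a Sylvester Hadamard matrix and inverts it with the closed-form expression S^-1 = (2/(n+1))(2 S^T - J); this is a generic illustration, not the authors' experimental encoding.

```python
import numpy as np

def s_matrix(n_plus_1):
    """Binary Hadamard S matrix of order n = n_plus_1 - 1, built from a
    Sylvester Hadamard matrix (n_plus_1 must be a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n_plus_1:
        H = np.block([[H, H], [H, -H]])
    # Drop the first row and column, then map +1 -> 0 and -1 -> 1.
    return (1 - H[1:, 1:]) // 2

def demultiplex(y, S):
    """Recover the n individual time-point signals x from the n grouped
    (multiplexed) probe measurements y = S @ x."""
    n = S.shape[0]
    S_inv = (2.0 / (n + 1)) * (2 * S.T - np.ones((n, n)))
    return S_inv @ y
```

For n = 7, each multiplexed measurement sums (n+1)/2 = 4 time points, which is where the multiplexing advantage for weak signals and noisy detectors comes from.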

  11. Method of producing a diesel fuel blend having a pre-determined flash-point and pre-determined increase in cetane number

    DOEpatents

    Waller, Francis Joseph; Quinn, Robert

    2004-07-06

    The present invention relates to a method of producing a diesel fuel blend having a pre-determined flash-point and a pre-determined increase in cetane number over the stock diesel fuel. Upon establishing the desired flash-point and increase in cetane number, an amount of a first oxygenate with a flash-point less than the flash-point of the stock diesel fuel and a cetane number equal to or greater than the cetane number of the stock diesel fuel is added to the stock diesel fuel in an amount sufficient to achieve the pre-determined increase in cetane number. Thereafter, an amount of a second oxygenate with a flash-point equal to or greater than the flash-point of the stock diesel fuel and a cetane number greater than the cetane number of the stock diesel fuel is added to the stock diesel fuel in an amount sufficient to achieve the pre-determined flash-point.

  12. LANDSAT-4 MSS Geometric Correction: Methods and Results

    NASA Technical Reports Server (NTRS)

    Brooks, J.; Kimmer, E.; Su, J.

    1984-01-01

    An automated image registration system such as that developed for LANDSAT-4 can produce all of the information needed to verify and calibrate the software and to evaluate system performance. The on-line MSS archive generation process which upgrades systematic correction data to geodetic correction data is described as well as the control point library build subsystem which generates control point chips and support data for on-line upgrade of correction data. The system performance was evaluated for both temporal and geodetic registration. For temporal registration, 90% errors were computed to be .36 IFOV (instantaneous field of view; 1 IFOV = 82.7 meters) cross track, and .29 IFOV along track. Also, for actual production runs monitored, the 90% errors were .29 IFOV cross track and .25 IFOV along track. The system specification is .3 IFOV, 90% of the time, both cross and along track. For geodetic registration performance, the model bias was measured by designating control points in the geodetically corrected imagery.

  13. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  14. Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.

    1995-01-01

    Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible wavelength video camera. These data were processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full frame mean value versus time verify the effectiveness of the system.
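The cascade described above (highpass filter, moving average, absolute value, threshold) can be sketched as follows. The filter form, coefficients, and threshold are illustrative placeholders, not the study's calibrated settings.

```python
import numpy as np

def detect_leak(frames, alpha=0.9, window=5, threshold=10.0):
    """Frame-by-frame leak detection on a video sequence.

    frames: array of shape (T, H, W), one grayscale image per time step.
    Returns (binary leak/no-leak decisions, full-frame mean value per frame).
    """
    frames = np.asarray(frames, float)
    # First-order temporal highpass: y[t] = alpha * (y[t-1] + x[t] - x[t-1]).
    hp = np.zeros_like(frames)
    for t in range(1, len(frames)):
        hp[t] = alpha * (hp[t - 1] + frames[t] - frames[t - 1])
    # Moving average along time smooths transient flicker while keeping the
    # sudden-but-sustained intensity change that accompanies a leak.
    kernel = np.ones(window) / window
    ma = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 0, hp)
    mag = np.abs(ma)
    decisions = mag > threshold          # binary leak/no-leak at each point
    frame_mean = mag.mean(axis=(1, 2))   # single time-varying mean estimate
    return decisions, frame_mean
```

The per-pixel decisions localize the leak within the field of view, while the full-frame mean gives the single intensity-and-extent indicator mentioned above.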

  15. Proteomic Analysis of Human Adipose Derived Stem Cells during Small Molecule Chemical Stimulated Pre-neuronal Differentiation

    PubMed Central

    Santos, Jerran; Milthorpe, Bruce K; Herbert, Benjamin R; Padula, Matthew P

    2017-01-01

    Background Adipose derived stem cells (ADSCs) are acquired from abdominal liposuction yielding a thousand fold more stem cells per millilitre than those from bone marrow. A large research void exists as to whether ADSCs are capable of transdermal differentiation toward neuronal phenotypes. Previous studies have investigated the use of chemical cocktails with varying inconclusive results. Methods Human ADSCs were treated with a chemical stimulant, beta-mercaptoethanol, to direct them toward a neuronal-like lineage within 24 hours. Quantitative proteomics using iTRAQ was then performed to ascertain protein abundance differences between ADSCs, beta-mercaptoethanol treated ADSCs and a glioblastoma cell line. Results The soluble proteome of ADSCs differentiated for 12 hours and 24 hours was significantly different from basal ADSCs and control cells, expressing a number of remodeling, neuroprotective and neuroproliferative proteins. However, toward the later time point, stress- and shock-related proteins were observed to be upregulated, with a large downregulation of structural proteins. Cytokine profiles support a large cellular remodeling shift as well, indicating cellular distress. Conclusion The earlier time point indicates an initiation of differentiation. At the latter time point there is a vast loss of cell population during treatment. At 24 hours drastically decreased cytokine profiles and overexpression of stress proteins reveal that exposure to beta-mercaptoethanol beyond 24 hours may not be suitable for clinical application, as our results indicate that the cells are in trauma whilst producing neuronal-like morphologies. The shorter treatment time is promising, indicating a reducing agent has fast acting potential to initiate neuronal differentiation of ADSCs. PMID:28844130

  16. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  17. Proteomic Analysis of Human Adipose Derived Stem Cells during Small Molecule Chemical Stimulated Pre-neuronal Differentiation.

    PubMed

    Santos, Jerran; Milthorpe, Bruce K; Herbert, Benjamin R; Padula, Matthew P

    2017-11-30

    Adipose derived stem cells (ADSCs) are acquired from abdominal liposuction yielding a thousand fold more stem cells per millilitre than those from bone marrow. A large research void exists as to whether ADSCs are capable of transdermal differentiation toward neuronal phenotypes. Previous studies have investigated the use of chemical cocktails with varying inconclusive results. Human ADSCs were treated with a chemical stimulant, beta-mercaptoethanol, to direct them toward a neuronal-like lineage within 24 hours. Quantitative proteomics using iTRAQ was then performed to ascertain protein abundance differences between ADSCs, beta-mercaptoethanol treated ADSCs and a glioblastoma cell line. The soluble proteome of ADSCs differentiated for 12 hours and 24 hours was significantly different from basal ADSCs and control cells, expressing a number of remodeling, neuroprotective and neuroproliferative proteins. However, toward the later time point, stress- and shock-related proteins were observed to be upregulated, with a large downregulation of structural proteins. Cytokine profiles support a large cellular remodeling shift as well, indicating cellular distress. The earlier time point indicates an initiation of differentiation. At the latter time point there is a vast loss of cell population during treatment. At 24 hours drastically decreased cytokine profiles and overexpression of stress proteins reveal that exposure to beta-mercaptoethanol beyond 24 hours may not be suitable for clinical application, as our results indicate that the cells are in trauma whilst producing neuronal-like morphologies. The shorter treatment time is promising, indicating a reducing agent has fast acting potential to initiate neuronal differentiation of ADSCs.

  18. Changes in dynamic resting state network connectivity following aphasia therapy.

    PubMed

    Duncan, E Susan; Small, Steven L

    2017-10-24

    Resting state functional magnetic resonance imaging (rsfMRI) permits observation of intrinsic neural networks produced by task-independent correlations in low frequency brain activity. Various resting state networks have been described, with each thought to reflect common engagement in some shared function. There has been limited investigation of the plasticity in these network relationships after stroke or induced by therapy. Twelve individuals with language disorders after stroke (aphasia) were imaged at multiple time points before (baseline) and after an imitation-based aphasia therapy. Language assessment using a narrative production task was performed at the same time points. Group independent component analysis (ICA) was performed on the rsfMRI data to identify resting state networks. A sliding window approach was then applied to assess the dynamic nature of the correlations among these networks. Network correlations during each 30-second window were used to cluster the data into ten states for each window at each time point for each subject. Correlation was performed between changes in time spent in each state and therapeutic gains on the narrative task. The amount of time spent in a single one of the (ten overall) dynamic states was positively associated with behavioral improvement on the narrative task at the 6-week post-therapy maintenance interval, when compared with either baseline or assessment immediately following therapy. This particular state was characterized by minimal correlation among the task-independent resting state networks. Increased functional independence and segregation of resting state networks underlies improvement on a narrative production task following imitation-based aphasia treatment. This has important clinical implications for the targeting of noninvasive brain stimulation in post-stroke remediation.
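The sliding-window pipeline (windowed correlations among network time courses, then clustering of windows into states) can be sketched as below. The window length, step, number of states, and the use of plain k-means are all assumptions for illustration; the study's exact settings and clustering method are not reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means, used here as a stand-in clustering step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def window_states(ts, win=30, step=1, k=10):
    """Assign each sliding window of network time courses to a dynamic
    connectivity state. ts: array of shape (T, n_networks)."""
    T, n = ts.shape
    iu = np.triu_indices(n, 1)
    feats = []
    for s in range(0, T - win + 1, step):
        c = np.corrcoef(ts[s:s + win].T)   # network-by-network correlations
        feats.append(c[iu])                # vectorized upper triangle
    return kmeans(np.asarray(feats), k)
```

Time spent in a state is then simply the number of windows assigned to it, which is the quantity the study correlates with therapeutic gains.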

  19. Heuristic Modeling for TRMM Lifetime Predictions

    NASA Technical Reports Server (NTRS)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude constrained, Earth orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use by a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measurement Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
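The look-up-table-plus-engine-model idea lends itself to a very small sketch. All numbers below (table entries, delta-v per maneuver, specific impulse, spacecraft mass) are hypothetical placeholders, not TRMM values.

```python
import numpy as np

# Hypothetical look-up table: maneuvers per month, indexed by solar flux
# (rows) and spacecraft ballistic coefficient (columns). In the actual
# method each entry comes from a 1-month traditional mission-analysis run.
FLUX_LEVELS = np.array([70.0, 150.0, 230.0])        # F10.7 solar flux index
BALLISTIC_COEFFS = np.array([50.0, 100.0, 200.0])   # kg/m^2
MANEUVERS_PER_MONTH = np.array([
    [4.0, 2.0, 1.0],
    [8.0, 4.0, 2.0],
    [16.0, 8.0, 4.0],
])

def monthly_fuel_use(flux, bc, dv_per_maneuver=0.5, isp=220.0, mass=3000.0):
    """Fuel (kg) used in one month: interpolate the maneuver frequency from
    the look-up table, then apply the rocket equation as a simple engine
    model (dv in m/s, isp in s, mass in kg; all defaults hypothetical)."""
    # Bilinear interpolation on the table.
    per_flux = np.array([np.interp(bc, BALLISTIC_COEFFS, row)
                         for row in MANEUVERS_PER_MONTH])
    freq = np.interp(flux, FLUX_LEVELS, per_flux)
    g0 = 9.80665
    fuel_per_burn = mass * (1.0 - np.exp(-dv_per_maneuver / (isp * g0)))
    return freq * fuel_per_burn
```

Summing this monthly estimate over a predicted solar flux profile until the tank budget is exhausted yields the lifetime estimate, which is exactly the kind of calculation a spreadsheet handles well.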

  20. Segmentation of time series with long-range fractal correlations

    PubMed Central

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
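    The key idea above is to test candidate change-points against a correlated reference rather than an i.i.d. one. A cheap stand-in for the authors' fractional-noise reference is phase-randomised Fourier surrogates, which preserve the series' power spectrum (hence its linear correlations) while destroying any genuine nonstationarity; everything below is an illustrative sketch, not the published algorithm.

```python
import numpy as np

def surrogate_like(x, n_surr=50, rng=None):
    """Fourier surrogates: same power spectrum as x, randomised phases."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X = np.fft.rfft(x - x.mean())
    surr = []
    for _ in range(n_surr):
        ph = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, len(X)))
        ph[0] = 1.0
        surr.append(np.fft.irfft(np.abs(X) * ph, n=len(x)))
    return np.array(surr)

def max_tstat(x, margin=20):
    """Largest two-sample t statistic over all candidate split points."""
    best = 0.0
    for i in range(margin, len(x) - margin):
        a, b = x[:i], x[i:]
        s = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        best = max(best, abs(a.mean() - b.mean()) / s)
    return best

def significant_cut(x, alpha=0.05, n_surr=50):
    """Accept a cut only if its t statistic beats what the correlations
    alone produce in stationary surrogates of the same spectrum."""
    t_obs = max_tstat(x)
    t_surr = np.array([max_tstat(s) for s in surrogate_like(x, n_surr)])
    return t_obs, (t_surr >= t_obs).mean() < alpha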

  1. Fundamentals of continuum mechanics – classical approaches and new trends

    NASA Astrophysics Data System (ADS)

    Altenbach, H.

    2018-04-01

    Continuum mechanics is a branch of mechanics that deals with the analysis of the mechanical behavior of materials modeled as a continuous manifold. Continuum mechanics models mostly begin by introducing a three-dimensional Euclidean space. The points within this region are defined as material points with prescribed properties. Each material point is characterized by a position vector which is continuous in time. Thus, the body changes in a way which is realistic, globally invertible at all times, and orientation-preserving, so that the body cannot intersect itself, and because transformations which produce mirror reflections are not possible in nature. For the mathematical formulation of the model, the motion is also assumed to be twice continuously differentiable, so that differential equations describing it may be formulated. Finally, the kinematical relations, the balance equations, the constitutive and evolution equations, and the boundary and/or initial conditions should be defined. If the physical fields are non-smooth, jump conditions must be taken into account. The basic equations of continuum mechanics are presented following a short introduction. Additionally, some examples of solid deformable continua will be discussed within the presentation. Finally, advanced models of continuum mechanics will be introduced. The paper is dedicated to Alexander Manzhirov’s 60th birthday.

  2. Development of an accurate transmission line fault locator using the global positioning system satellites

    NASA Technical Reports Server (NTRS)

    Lee, Harry

    1994-01-01

    A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. This fault locator system consists of traveling wave detectors located at key substations which detect and time tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element to the success of this system. This fault locator system derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
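    The two-ended traveling-wave calculation behind such a system reduces to a one-line formula. A minimal sketch, assuming a propagation speed of 0.98c (a typical value for overhead lines, not stated in the record):

```python
C = 299_792_458.0                     # speed of light, m/s

def fault_location(t_a, t_b, line_len_m, v=0.98 * C):
    """Distance of the fault from line end A, from GPS-time-tagged arrivals.

    The wave travels x to detector A and (L - x) to detector B, so
    t_b - t_a = (L - 2x) / v  =>  x = (L - v * (t_b - t_a)) / 2.
    """
    x = (line_len_m - v * (t_b - t_a)) / 2.0
    if not 0.0 <= x <= line_len_m:
        raise ValueError("arrival times inconsistent with line length")
    return x
```

    A 1 microsecond error in either time tag moves the estimate by v/2, roughly 147 m, which is consistent with the +/-300 m design objective quoted in the record.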

  3. Time utilization, productivity and costs of solo and extended duty auxiliary dental practice.

    PubMed

    Tan, H H; van Gemert, H G

    1977-07-01

    A study was conducted to compare the time utilization of the dentist, and the productivity and costs, for solo (one dentist, one chairside assistant and one treatment room) and extended duty settings (one dentist, two extended duty dental hygienists, one chairside assistant and two treatment rooms). Only amalgam and composite restorations done in a general group practice were included. In the extended duty setting the dentist spent more time in managerial activities and less time in treatment than in the solo setting. Nevertheless, the dentist in the extended duty setting produced 53% more restorations than in solo practice. The cost ratio of solo to extended duty practice was computed to be 1:1.52. From the point of view of microeconomics, the extended duty setting was found to be no worse than the solo setting.

  4. Dem Generation from Close-Range Photogrammetry Using Extended Python Photogrammetry Toolbox

    NASA Astrophysics Data System (ADS)

    Belmonte, A. A.; Biong, M. M. P.; Macatulad, E. G.

    2017-10-01

    Digital elevation models (DEMs) are widely used raster data for different applications concerning terrain, such as flood modelling, viewshed analysis, mining, land development, and engineering design projects, to name a few. DEMs can be obtained through various methods, including topographic survey, LiDAR or photogrammetry, and internet sources. Terrestrial close-range photogrammetry is one of the alternative methods to produce DEMs through the processing of images using photogrammetry software. Powerful commercial photogrammetry software that can produce high-accuracy DEMs is already available; however, this entails a corresponding cost. Although some of these packages have free or demo trials, the trials limit the usable features and usage time. One alternative is the use of free and open-source software (FOSS), such as the Python Photogrammetry Toolbox (PPT), which provides an interface for performing photogrammetric processes implemented through Python scripts. For relatively small areas such as in mining or construction excavation, a relatively inexpensive, fast and accurate method would be advantageous. In this study, PPT was used to generate 3D point cloud data from images of an open pit excavation. The PPT was extended to add an algorithm converting the generated point cloud data into a usable DEM.
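    The final step the record mentions, converting a point cloud into a usable DEM, amounts to gridding scattered x/y/z points onto a raster. A minimal sketch of that conversion (mean-z binning; the record does not say which aggregation the extended PPT uses):

```python
import numpy as np

def point_cloud_to_dem(points, cell=0.5):
    """Grid an (N, 3) x/y/z point cloud into a DEM raster.

    Each cell takes the mean z of the points falling inside it;
    empty cells become NaN. Rows index y, columns index x.
    """
    pts = np.asarray(points, float)
    x0, y0 = pts[:, 0].min(), pts[:, 1].min()
    ix = ((pts[:, 0] - x0) // cell).astype(int)
    iy = ((pts[:, 1] - y0) // cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    cnt = np.zeros((ny, nx))
    zsum = np.zeros((ny, nx))
    np.add.at(cnt, (iy, ix), 1)           # unbuffered accumulation per cell
    np.add.at(zsum, (iy, ix), pts[:, 2])
    dem = np.full((ny, nx), np.nan)
    mask = cnt > 0
    dem[mask] = zsum[mask] / cnt[mask]
    return dem
```

    Production tools typically interpolate the empty cells and may take the minimum z (ground filtering) instead of the mean; those refinements are omitted here.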

  5. Robust Airfoil Optimization in High Resolution Design Space

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon L.

    2003-01-01

    The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet the resulting airfoil shape is fairly smooth, and (3) it allows the user to make a trade-off between the level of optimization and the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that our strategy produces reasonable airfoil shapes that are similar to the original airfoils, but these new shapes provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that our strategy produces airfoils better than or equal to any designs produced by traditional design methods.

  6. Temporal heating profile influence on the immediate bond strength following laser tissue soldering.

    PubMed

    Rabi, Yaron; Katzir, Abraham

    2010-07-01

    Bonding of tissues by laser heating is considered a future alternative to sutures and staples. Increasing the post-operative bond strength remains a challenging issue for laser tissue bonding, especially in organs that have to sustain considerable tension or pressure. In this study, we investigated the influence of different temporal heating profiles on the strength of soldered incisions. The thermal damage following each heating procedure was quantified in order to assess the effect of each heating profile on the thermal damage. Incisions in porcine bowel tissue strips (1 cm x 4 cm) were soldered using 44% liquid albumin mixed with indocyanine green and a temperature-controlled laser (830 nm) tissue bonding system. Heating was done with either a linear or a step temporal heating profile. The incisions were bonded by soldering at three points, separated by 2 mm. Set-point temperatures of T(set) = 60, 70, 80, 90, 100, 110, 150 degrees C and dwell times of t(d) = 10, 20, 30, 40 seconds were investigated. The bond strength was measured immediately following each soldering by applying a gradually increased tension on the tissue edges until the bond broke. Bonds formed by linear heating were stronger than the ones formed by step heating: at T(set) = 80 degrees C the bonds were 40% stronger, and at T(set) = 90 degrees C the bond strength was nearly doubled. The difference in bond strength between the heating methods grew larger as T(set) increased. Linear heating produced stronger bonds than step heating. The difference in bond strength was more pronounced at high set-point temperatures and short dwell times. The bond strength could be increased with either a higher set-point temperature or a longer dwell time.

  7. Two-dimensional Imaging Velocity Interferometry: Technique and Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erskine, D J; Smith, R F; Bolme, C

    2011-03-23

    We describe the data analysis procedures for an emerging interferometric technique for measuring motion across a two-dimensional image at a moment in time, i.e. a snapshot 2d-VISAR. Velocity interferometers (VISAR) measuring target motion to high precision have been an important diagnostic in shockwave physics for many years. Until recently, this diagnostic has been limited to measuring motion at points or lines across a target. If a sufficiently fast movie camera technology existed, it could be placed behind a traditional VISAR optical system and record a 2d image vs time. But since that technology is not yet available, we use a CCD detector to record a single 2d image, with the pulsed nature of the illumination providing the time resolution. Consequently, since we are using pulsed illumination having a coherence length shorter than the VISAR interferometer delay (~0.1 ns), we must use the white light velocimetry configuration to produce fringes with significant visibility. In this scheme, two interferometers (illuminating, detecting) having nearly identical delays are used in series, with one before the target and one after. This produces fringes with at most 50% visibility, but otherwise has the same fringe shift per target motion as a traditional VISAR. The 2d-VISAR observes a new world of information about shock behavior not readily accessible by traditional point or 1d-VISARs, simultaneously providing both a velocity map and an 'ordinary' snapshot photograph of the target. The 2d-VISAR has been used to observe nonuniformities in NIF-related targets (polycrystalline diamond, Be), and in Si and Al.

  8. Montblanc1: GPU accelerated radio interferometer measurement equations in support of Bayesian inference for radio observations

    NASA Astrophysics Data System (ADS)

    Perkins, S. J.; Marais, P. C.; Zwart, J. T. L.; Natarajan, I.; Tasse, C.; Smirnov, O.

    2015-09-01

    We present Montblanc, a GPU implementation of the Radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple model visibilities. χ2 values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model. As most of the elements of the RIME and χ2 calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple χ2 values. Modified model parameters are transferred to the GPU between each iteration. We implemented Montblanc as a Python package based upon NVIDIA's CUDA architecture. As such, it is easy to extend and implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc's RIME implementation is performant: On an NVIDIA K40, it is approximately 250 times faster than MEQTREES on a dual hexacore Intel E5-2620v2 CPU. Compared to the OSKAR simulator's GPU-implemented RIME components it is 7.7 and 12 times faster on the same K40 for single and double-precision floating point respectively. However, OSKAR's RIME implementation is more general than Montblanc's BIRO-tailored RIME. Theoretical analysis of Montblanc's dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in L1 and L2 caches.
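    The χ2/model-selection step described above is simple to state in code, even though Montblanc performs the heavy RIME evaluation on the GPU. A minimal CPU sketch, assuming precomputed complex model visibilities and a single per-visibility noise level:

```python
import numpy as np

def chi2(model_vis, obs_vis, sigma=1.0):
    """Chi-squared between complex model and observed visibilities,
    treating real and imaginary parts as independent Gaussian data."""
    r = (obs_vis - model_vis) / sigma
    return float(np.sum(r.real ** 2 + r.imag ** 2))

def pick_best(models, obs_vis, sigma=1.0):
    """Score each candidate sky model's visibilities and keep the lowest
    chi^2 (i.e. the highest Gaussian likelihood)."""
    scores = [chi2(m, obs_vis, sigma) for m in models]
    return int(np.argmin(scores)), scores
```

    In BIRO proper, these χ2 values feed a Bayesian sampler rather than a simple argmin, and every term of the sum can be computed independently, which is why the calculation parallelises so well on a GPU.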

  9. Ordinary differential equation for local accumulation time.

    PubMed

    Berezhkovskii, Alexander M

    2011-08-21

    Cell differentiation in a developing tissue is controlled by the concentration fields of signaling molecules called morphogens. Formation of these concentration fields can be described by the reaction-diffusion mechanism in which locally produced molecules diffuse through the patterned tissue and are degraded. The formation kinetics at a given point of the patterned tissue can be characterized by the local accumulation time, defined in terms of the local relaxation function. Here, we show that this time satisfies an ordinary differential equation. Using this equation one can straightforwardly determine the local accumulation time, i.e., without preliminary calculation of the relaxation function by solving the partial differential equation, as was done in previous studies. We derive this ordinary differential equation together with the accompanying boundary conditions and demonstrate that the earlier obtained results for the local accumulation time can be recovered by solving this equation. © 2011 American Institute of Physics
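    The local accumulation time is defined as τ(x) = ∫₀∞ [1 − c(x,t)/c_ss(x)] dt, where c_ss is the steady-state concentration. The sketch below brute-forces this integral by an explicit finite-difference solve of 1D diffusion plus first-order degradation, the simplest model of the class discussed above; it illustrates the quantity being computed, not the paper's ODE shortcut, and all parameter values are illustrative. For a fixed-concentration source at x = 0, a short Laplace-transform calculation gives τ(x) = x/(2√(Dk)), which the simulation should reproduce away from the far boundary.

```python
import numpy as np

def local_accumulation_time(D=1.0, k=1.0, L=6.0, dx=0.05, T=12.0):
    """tau(x) = int_0^inf [1 - c(x,t)/c_ss(x)] dt for 1D diffusion with
    first-order degradation and a fixed-concentration source c(0, t) = 1."""
    n = int(round(L / dx)) + 1
    x = np.linspace(0.0, L, n)
    lam = np.sqrt(D / k)                    # decay length of the gradient
    c_ss = np.exp(-x / lam)                 # steady state (semi-infinite)
    dt = 0.2 * dx * dx / D                  # well inside explicit stability
    c = np.zeros(n)
    c[0] = 1.0
    tau = np.zeros(n)
    for _ in range(int(round(T / dt))):
        lap = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / (dx * dx)
        c[1:-1] += dt * (D * lap - k * c[1:-1])
        c[0], c[-1] = 1.0, 0.0              # source / far boundary
        tau += dt * (1.0 - c / c_ss)        # accumulate the relaxation integral
    return x, tau
```

    Values very close to x = L are boundary artifacts of the finite domain; the interior shows τ growing linearly with distance from the source, i.e. points farther from the source take proportionally longer to approach their steady-state level.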

  10. Spectral reconstruction analysis for enhancing signal-to-noise in time-resolved spectroscopies

    NASA Astrophysics Data System (ADS)

    Wilhelm, Michael J.; Smith, Jonathan M.; Dai, Hai-Lung

    2015-09-01

    We demonstrate a new spectral analysis for the enhancement of the signal-to-noise ratio (SNR) in time-resolved spectroscopies. Unlike the simple linear average which produces a single representative spectrum with enhanced SNR, this Spectral Reconstruction analysis (SRa) improves the SNR (by a factor of ca. 0.6√n) for all n experimentally recorded time-resolved spectra. SRa operates by eliminating noise in the temporal domain, thereby attenuating noise in the spectral domain, as follows: Temporal profiles at each measured frequency are fit to a generic mathematical function that best represents the temporal evolution; spectra at each time are then reconstructed with data points from the fitted profiles. The SRa method is validated with simulated control spectral data sets. Finally, we apply SRa to two distinct experimentally measured sets of time-resolved IR emission spectra: (1) UV photolysis of carbonyl cyanide and (2) UV photolysis of vinyl cyanide.
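    The fit-then-reconstruct procedure described above is easy to sketch. Here a cubic polynomial stands in for the paper's "generic mathematical function that best represents the temporal evolution"; the function choice and all names are assumptions.

```python
import numpy as np

def spectral_reconstruction(spectra, times, deg=3):
    """SRa-style denoising sketch.

    spectra: (n_times, n_freqs) array of noisy time-resolved spectra.
    Fits each frequency channel's temporal profile to a smooth model,
    then rebuilds every spectrum from the fitted profiles.
    """
    t = np.asarray(times, float)
    S = np.asarray(spectra, float)
    out = np.empty_like(S)
    for j in range(S.shape[1]):                 # one fit per frequency channel
        coeffs = np.polyfit(t, S[:, j], deg)
        out[:, j] = np.polyval(coeffs, t)
    return out
```

    Because each channel's fit pools information from all n time points, every reconstructed spectrum benefits, which is the sense in which SRa improves the SNR of all n spectra rather than producing one averaged spectrum.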

  11. Demonstration of Johnson noise thermometry with all-superconducting quantum voltage noise source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Takahiro, E-mail: yamada-takahiro@aist.go.jp; Urano, Chiharu; Maezawa, Masaaki

    We present a Johnson noise thermometry (JNT) system based on an integrated quantum voltage noise source (IQVNS) that has been fully implemented using superconducting circuit technology. To enable precise measurement of Boltzmann's constant, an IQVNS chip was designed to produce intrinsically calculable pseudo-white noise to calibrate the JNT system. On-chip real-time generation of pseudo-random codes via simple circuits produced pseudo-voltage noise with a harmonic tone interval of less than 1 Hz, which was one order of magnitude finer than the harmonic tone interval of conventional quantum voltage noise sources. We estimated a value for Boltzmann's constant experimentally by performing JNT measurements at the temperature of the triple point of water using the IQVNS chip.

  12. Construction of Gene Regulatory Networks Using Recurrent Neural Networks and Swarm Intelligence.

    PubMed

    Khan, Abhinandan; Mandal, Sudip; Pal, Rajat Kumar; Saha, Goutam

    2016-01-01

    We have proposed a methodology for the reverse engineering of biologically plausible gene regulatory networks from temporal genetic expression data. We have used established information and the fundamental mathematical theory for this purpose. We have employed the Recurrent Neural Network formalism to extract the underlying dynamics present in the time series expression data accurately. We have introduced a new hybrid swarm intelligence framework for the accurate training of the model parameters. The proposed methodology has been first applied to a small artificial network, and the results obtained suggest that it can produce the best results available in the contemporary literature, to the best of our knowledge. Subsequently, we have implemented our proposed framework on experimental (in vivo) datasets. Finally, we have investigated two medium-sized genetic networks (in silico) extracted from GeneNetWeaver, to understand how the proposed algorithm scales up with network size. Additionally, we have implemented our proposed algorithm with half the number of time points. The results indicate that a 50% reduction in the number of time points does not significantly affect the accuracy of the proposed methodology, with a maximum of just over 15% deterioration in the worst case.
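    For readers unfamiliar with the RNN formalism for gene regulatory networks, a widely used form of the model is sketched below. This is a generic illustration of the dynamics only; the paper's exact equations and its hybrid swarm-intelligence training of the weights are not reproduced here.

```python
import numpy as np

def rnn_grn_step(x, W, beta, tau, dt=0.1):
    """One Euler step of the continuous RNN used for GRN modelling:
    tau_i * dx_i/dt = sigmoid(sum_j w_ij * x_j + beta_i) - x_i,
    where x_i is the expression level of gene i and w_ij encodes the
    regulatory influence of gene j on gene i."""
    s = 1.0 / (1.0 + np.exp(-(W @ x + beta)))
    return x + dt * (s - x) / tau

def simulate(x0, W, beta, tau, steps=200, dt=0.1):
    """Roll the model forward; rows of the result are time points."""
    xs = [np.asarray(x0, float)]
    for _ in range(steps):
        xs.append(rnn_grn_step(xs[-1], W, beta, tau, dt))
    return np.array(xs)
```

    Reverse engineering then means searching for the parameters (W, beta, tau) whose simulated trajectories best match the measured expression time series, which is the role of the swarm-intelligence optimiser in the record.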

  13. Uncoupling protein-3 lowers reactive oxygen species production in isolated mitochondria

    PubMed Central

    Toime, Laurence J.; Brand, Martin D.

    2010-01-01

    Mitochondria are the major cellular producers of reactive oxygen species (ROS), and mitochondrial ROS production increases steeply with increased protonmotive force. The uncoupling proteins (UCP1, UCP2 and UCP3) and adenine nucleotide translocase induce proton leak in response to exogenously added fatty acids, superoxide or lipid peroxidation products. “Mild uncoupling” by these proteins may provide a negative feedback loop to decrease protonmotive force and attenuate ROS production. Using wild type and Ucp3−/− mice, we found that native UCP3 actively lowers the rate of ROS production in isolated energized skeletal muscle mitochondria, in the absence of exogenous activators. The estimated specific activity of UCP3 in lowering ROS production was 90 to 500 times higher than that of the adenine nucleotide translocase. The mild uncoupling hypothesis was tested by measuring whether the effect of UCP3 on ROS production could be mimicked by chemical uncoupling. A chemical uncoupler mimicked the effect of UCP3 at early time points after mitochondrial energization, in support of the mild uncoupling hypothesis. However, at later time points the uncoupler did not mimic UCP3, suggesting that UCP3 can also affect ROS production through a membrane potential-independent mechanism. PMID:20493945

  14. Antagonism of detomidine sedation in the horse using intravenous tolazoline or atipamezole.

    PubMed

    Hubbell, J A E; Muir, W W

    2006-05-01

    The ability to shorten the duration of sedation would potentially improve the safety and utility of detomidine. The objective was to determine the effects of tolazoline and atipamezole after detomidine sedation, under the hypothesis that administration of tolazoline or atipamezole would not affect detomidine sedation. In a randomised, placebo-controlled, double-blind, descriptive study, detomidine (0.02 mg/kg bwt i.v.) was administered to 6 mature horses on 4 separate occasions. Twenty-five mins later, each horse received one of 4 treatments: Group 1 saline (0.9% i.v.) as a placebo control; Group 2 atipamezole (0.05 mg/kg bwt i.v.); Group 3 atipamezole (0.1 mg/kg bwt i.v.); and Group 4 tolazoline (4.0 mg/kg bwt i.v.). Sedation, muscle relaxation and ataxia were scored by 3 independent observers at 9 time points. Horses were led through an obstacle course at 7 time points. Course completion time was recorded and the ability of the horse to traverse the course was scored by 3 independent observers. Horses were videotaped before, during and after each trip through the obstacle course. Atipamezole and tolazoline administration incompletely antagonised the effects of detomidine, but the time course to recovery was shortened. Single bolus administration of atipamezole or tolazoline produced partial reversal of detomidine sedation and may be useful for minimising detomidine sedation.

  15. Laser heterodyne surface profiler

    DOEpatents

    Sommargren, G.E.

    1980-06-16

    A method and apparatus are disclosed for testing the deviation of the face of an object from a flat smooth surface using a beam of coherent light of two plane-polarized components, one of a frequency constantly greater than the other by a fixed amount to produce a difference frequency with a constant phase to be used as a reference, and splitting the beam into its two components. The separate components are directed onto spaced apart points on the face of the object to be tested for smoothness while the face of the object is rotated on an axis normal to one point, thereby passing the other component over a circular track on the face of the object. The two components are recombined after reflection to produce a reflected frequency difference of a phase proportional to the difference in path length of one component reflected from one point to the other component reflected from the other point. The phase of the reflected frequency difference is compared with the reference phase to produce a signal proportional to the deviation of the height of the surface along the circular track with respect to the fixed point at the center, thereby to produce a signal that is plotted as a profile of the surface along the circular track. The phase detector includes a quarter-wave plate to convert the components of the reference beam into circularly polarized components, a half-wave plate to shift the phase of the circularly polarized components, and a polarizer to produce a signal of a shifted phase for comparison with the phase of the frequency difference of the reflected components detected through a second polarizer. Rotation of the half-wave plate can be used for phase adjustment over a full 360° range.
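    The conversion from the measured heterodyne phase to surface height is a one-line formula. A minimal sketch, assuming a HeNe wavelength of 632.8 nm (the patent record does not specify the laser):

```python
import math

def height_from_phase(phase_rad, wavelength_m=632.8e-9):
    """Surface-height deviation from the measured phase shift of the
    heterodyne difference frequency. Reflection doubles the optical
    path, so a height step h shifts the phase by 4*pi*h/lambda."""
    return phase_rad * wavelength_m / (4.0 * math.pi)
```

    A full 2π phase excursion thus corresponds to a half-wavelength height change along the circular track, which sets the unambiguous measurement range of the profiler.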

  16. Utility of the T-SPOT®.TB test's borderline category to increase test resolution for results around the cut-off point.

    PubMed

    Rego, Karen; Pereira, Kristen; MacDougall, James; Cruikshank, William

    2018-01-01

    Accurate identification of individuals with TB infection is required to achieve the WHO's End TB Strategy goals. While there is general acceptance that the T-SPOT.TB test borderline category provides an opportunity to increase test resolution of results around the test cut-off point, this has not been investigated. 645,947 tests were analyzed to determine the frequency of borderline results, the effect of age and time between tests, and associations between subjects' clinical risk factors and retest results. The 645,947 tests produced 93.5% negatives, 4% positives, 0.6% invalids, and 1.8% borderlines. Within the borderline results, 5044 were repeated, with 59.2%, 20.0% and 20.2% resolving to negative, positive and borderline, respectively. Age of subject did not affect retest results; however, time between tests indicated that retest resolution occurred with greatest frequency after 90 days. TB risk factors were provided for 2640 subjects, and 17% of low risk subjects with a high initial borderline resolved to negative, while 27.6% of subjects with high risk and an initial low borderline resolved to positive, suggesting that these subjects could have been inappropriately classified if using a single cut-off point test with no borderline category. This study demonstrates the utility of the T-SPOT.TB test's borderline category to increase test resolution around the test cut-off point. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnison, Shaughn; Livers-Douglas, Amanda; Barajas-Olalde, Cesar

    The scalable, automated, semipermanent seismic array (SASSA) project led and managed by the Energy & Environmental Research Center (EERC) was designed as a 3-year proof-of-concept study to evaluate and demonstrate an innovative application of the seismic method. The concept was to use a sparse surface array of 96 nodal seismic sensors paired with a single, remotely operated active seismic source at a fixed location to monitor for CO2 saturation changes in a subsurface reservoir by processing the data for time-lapse changes at individual, strategically chosen reservoir reflection points. The combination of autonomous equipment and modern processing algorithms was used to apply the seismic method in a manner different from the normal paradigm of collecting a spatially dense data set to produce an image. It was used instead to monitor individual, strategically chosen reservoir reflection points for detectable signal character changes that could be attributed to the passing of a CO2 saturation front or, possibly, changes in reservoir pressure. Data collection occurred over the course of 1 year at an oil field undergoing CO2 injection for enhanced oil recovery (EOR) and focused on four overlapping “five-spot” EOR injector–producer patterns. Selection, procurement, configuration, installation, and testing of project equipment and collection of five baseline data sets were completed in advance of CO2 injection within the study area. Weekly remote data collection produced 41 incremental time-lapse records for each of the 96 nodes. Validation was provided by two methods: 1) a conventional 2-D seismic line acquired through the center of the study area before injection started and again after the project ended, processed in a time-lapse manner, and 2) CO2 saturation maps created from reservoir simulations based on injection and production history matching.
Interpreted results were encouraging but mixed, with indications of changes likely due to the presence of CO2 at some node reflection points where and when effects would be expected, and no effects where no CO2 was expected, while results at some locations where simulation outputs suggested CO2 should be present were ambiguous. Acquisition noise impacted interpretation of data at several locations. Many lessons learned were generated by the study to inform and improve results on a follow-up study. The ultimate aim of the project was to evaluate whether deployment of SASSA technology can provide a useful and cost-effective monitoring solution for future CO2 injection projects. The answer appears to be affirmative, with the expectation that lessons learned applied to future iterations, together with technology advances, will likely result in significant improvements.

  18. Diagnostics of underwater electrical wire explosion through a time- and space-resolved hard x-ray source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheftman, D.; Shafer, D.; Efimov, S.

    2012-10-15

    A time- and space-resolved hard x-ray source was developed as a diagnostic tool for imaging underwater exploding wires. A {approx}4 ns width pulse of hard x-rays with energies of up to 100 keV was obtained from the discharge in a vacuum diode consisting of point-shaped tungsten electrodes. To improve contrast and image quality, an external pulsed magnetic field produced by Helmholtz coils was used. High resolution x-ray images of an underwater exploding wire were obtained using a sensitive x-ray CCD detector, and were compared to optical fast framing images. Future developments and application of this diagnostic technique are discussed.

  19. The Creep of Laminated Synthetic Resin Plastics

    NASA Technical Reports Server (NTRS)

    Perkuhn, H

    1941-01-01

    The long-time loading strength of a number of laminated synthetic resin plastics was ascertained and the effect of molding pressure and resin content determined. The best value was observed with a 30 to 40 percent resin content. The long-time loading strength also increases with increasing molding pressure up to 250 kg/cm²; a further rise in pressure affords no further substantial improvement. The creep strength is defined as the load which in the hundredth hour of loading produces a rate of elongation of 5 × 10⁻⁴ percent per hour. The creep strength values of different materials were determined and tabulated. The effect of humidity during long-term tests is pointed out.

  20. The Stanford-Ames portable echocardioscope - A case study in technology transfer

    NASA Technical Reports Server (NTRS)

    Schmidt, G.; Miller, H.

    1975-01-01

    The paper describes a lightweight portable battery-powered echocardioscope fabricated largely from readily available components. The transducer contains a piezoelectric crystal which acts as both an ultrasound pulse emitter and echo receiver, and the oscilloscope is of modular construction. The oscilloscope display can be produced in any of three different modes: A-mode, B-mode, and M-mode (time-motion) by sweeping the intensified points of light of the B-mode display vertically along the oscilloscope face. The resulting display can be photographed in a time exposure, thus providing a hardcopy record for the patient's chart or physician's records. The device is clinically validated on both normal subjects and patients by experienced echocardiographers.

  1. Diagnostics of underwater electrical wire explosion through a time- and space-resolved hard x-ray source.

    PubMed

    Sheftman, D; Shafer, D; Efimov, S; Gruzinsky, K; Gleizer, S; Krasik, Ya E

    2012-10-01

    A time- and space-resolved hard x-ray source was developed as a diagnostic tool for imaging underwater exploding wires. A ~4 ns width pulse of hard x-rays with energies of up to 100 keV was obtained from the discharge in a vacuum diode consisting of point-shaped tungsten electrodes. To improve contrast and image quality, an external pulsed magnetic field produced by Helmholtz coils was used. High resolution x-ray images of an underwater exploding wire were obtained using a sensitive x-ray CCD detector, and were compared to optical fast framing images. Future developments and application of this diagnostic technique are discussed.

  2. FAMILIARITY TRANSFER AS AN EXPLANATION OF THE DÉJÀ VU EFFECT.

    PubMed

    Małecki, M

    2015-06-01

    Déjà vu is often explained in terms of an unconscious transfer of familiarity between a familiar object or objects and accompanying new objects. However, empirical research tests priming effectiveness more than such a transfer. This paper reviews the main explanations of déjà vu, proposes a cognitive model of the phenomenon, and tests its six major assumptions. The model states that a sense of familiarity can be felt toward an objectively new stimulus (point 1) and that it can be transferred from a known stimulus to a novel one (point 2) in a situation where the person is unaware of such a transfer (point 3). The criteria for déjà vu are that the known and the novel stimuli may have graphical or semantic similarity, but differences exclude priming explanations (point 4); the familiarity measure should be of a non-rational nature (sense of familiarity rather than recognition; point 5); and that the feeling of familiarity toward a novel stimulus produces a conflict, which could be measured by means of increased reaction times (point 6). 119 participants were tested in three experiments. The participants were to assess the novel stimuli in terms of their sense of familiarity. The novel stimuli were primed or were not primed by the known stimulus (Exp. 1) or primed by the known vs a novel stimulus (Exps. 2 and 3). The priming was subliminal in all the experiments. Reaction times were measured in Exps. 2 and 3. The participants assessed the novel stimuli as more familiar when they were preceded by a known stimulus than when they were not (Exp. 1) or when they were preceded by a novel stimulus (Exps. 2 and 3). Reaction times were longer for assessments preceded by a known stimulus than for assessments preceded by a novel stimulus, which contradicts the priming explanations. The results seem to support all six points of the proposed model of the mechanisms underlying the déjà vu experience.

  3. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Philipsen, Iwan

    2007-01-01

    This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) applied MDOE methods in the calibration of a balance with an automated calibration machine in order to evaluate them. The data were sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
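
    The quality-per-point comparison at the heart of this result, a small designed load schedule fitted with a regression model versus a dense OFAT sweep, can be illustrated with a toy single-component calibration. Everything below (the quadratic response, load range, noise level, and point counts) is a hypothetical stand-in, not the actual balance model or the DNW schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-component balance: the true response is a mildly
# nonlinear (quadratic) function of applied load. Illustrative only.
def true_response(load):
    return 2.0 * load + 0.01 * load ** 2

def calibrate(loads, noise_sd=0.5):
    """Fit a quadratic calibration model by least squares to noisy
    readings taken at the given load points."""
    y = true_response(loads) + rng.normal(0.0, noise_sd, loads.size)
    X = np.column_stack([np.ones_like(loads), loads, loads ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

ofat_loads = np.linspace(0.0, 100.0, 700)   # dense OFAT-style schedule
mdoe_loads = rng.uniform(0.0, 100.0, 100)   # smaller randomized design

check = np.linspace(0.0, 100.0, 50)
Xc = np.column_stack([np.ones_like(check), check, check ** 2])

def rmse(coef):
    """Prediction error of a fitted calibration over the check points."""
    return float(np.sqrt(np.mean((Xc @ coef - true_response(check)) ** 2)))
```

    With an adequate regression model, the 100-point design yields a prediction error comparable in magnitude to the 700-point sweep, which is the sense in which a designed calibration can match an OFAT schedule at a fraction of the cost.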

  4. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

    Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers formed by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. The fractional order (FO) rate of the error signal and the FO integral of the control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with a real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance of the different FO fuzzy PID controller structures has been compared in terms of optimal set-point tracking, load disturbance rejection, and minimal variation of the manipulated variable (i.e., a smaller actuator requirement). In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between set-point tracking and the control signal, and between set-point tracking and load disturbance performance, for each controller structure handling the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Hydrodynamic interaction of two particles in confined linear shear flow at finite Reynolds number

    NASA Astrophysics Data System (ADS)

    Yan, Yiguang; Morris, Jeffrey F.; Koplik, Joel

    2007-11-01

    We discuss the hydrodynamic interactions of two solid bodies placed in linear shear flow between parallel plane walls in a periodic geometry at finite Reynolds number. The computations are based on the lattice Boltzmann method for particulate flow, validated here by comparison to previous results for a single particle. Most of our results pertain to cylinders in two dimensions but some examples are given for spheres in three dimensions. Either one mobile and one fixed particle or else two mobile particles are studied. The motion of a mobile particle is qualitatively similar in both cases at early times, exhibiting either trajectory reversal or bypass, depending upon the initial vector separation of the pair. At longer times, if a mobile particle does not approach a periodic image of the second, its trajectory tends to a stable limit point on the symmetry axis. The effect of interactions with periodic images is to produce nonconstant asymptotic long-time trajectories. For one free particle interacting with a fixed second particle within the unit cell, the free particle may either move to a fixed point or take up a limit cycle. Pairs of mobile particles starting from symmetric initial conditions are shown to asymptotically reach either fixed points, or mirror image limit cycles within the unit cell, or to bypass one another (and periodic images) indefinitely on a streamwise periodic trajectory. The limit cycle possibility requires finite Reynolds number and arises as a consequence of streamwise periodicity when the system length is sufficiently short.

  6. A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments

    NASA Astrophysics Data System (ADS)

    Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.

    2017-09-01

    We propose a real time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching the vanishing directions of consecutive video frames on the Gaussian sphere. Using single-image indoor layout features to initialize the system permitted the proposed method to perform real time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features which are invariant under scale, translation, and rotation. We proposed a new feature matching cost function which considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the available RAWSEEDS dataset. The results show that the proposed method performs robustly, producing very limited position and orientation errors.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basu, Banasri; Bandyopadhyay, Pratul; Majumdar, Priyadarshi

    We have studied quantum phase transitions induced by a quench in different one-dimensional spin systems. Our analysis is based on the dynamical mechanism which envisages nonadiabaticity in the vicinity of the critical point. This causes spin fluctuation which leads to the random fluctuation of the Berry phase factor acquired by a spin state when the ground state of the system evolves in a closed path. The two-point correlation of this phase factor is associated with the probability of the formation of defects. In this framework, we have estimated the density of defects produced in several one-dimensional spin chains. At the critical region, the entanglement entropy of a block of L spins with the rest of the system is also estimated, which is found to increase logarithmically with L. The dependence on the quench time puts a constraint on the block size L. It is also pointed out that the Lipkin-Meshkov-Glick model in point-splitting regularized form appears as a combination of the XXX model and the Ising model with magnetic field in the negative z axis. This unveils the underlying conformal symmetry at criticality which is lost in the sharp point limit. Our analysis shows that the density of defects as well as the scaling behavior of the entanglement entropy follows a universal behavior in all these systems.

  8. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and by muscle fatigue thresholds derived from electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nonetheless exercised regularly volunteered to participate in this study. The incremental exercise protocol applied a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with those values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.
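
    The threshold-detection step described above, smoothing the EMG amplitude over a filtering interval and locating the point where its trend changes, can be sketched as a two-segment linear fit (the same breakpoint idea the V-slope method applies to gas exchange data). This is a simplified illustration, not the study's Matlab procedure; the window lengths and sampling rate here are assumptions.

```python
import numpy as np

def smooth(signal, interval_s, dt=1.0):
    """Moving-average filter over a window of `interval_s` seconds
    (the study compared 9, 15, 20, 25 and 30 s intervals), assuming
    one sample every `dt` seconds."""
    w = max(1, int(interval_s / dt))
    return np.convolve(signal, np.ones(w) / w, mode="same")

def breakpoint_time(t, y):
    """Estimate a threshold point as the breakpoint of a two-segment
    linear fit to the (smoothed) amplitude curve: try every split and
    keep the one with the lowest total squared error."""
    best_t, best_sse = t[0], np.inf
    for i in range(2, len(t) - 2):
        sse = 0.0
        for seg_t, seg_y in ((t[:i], y[:i]), (t[i:], y[i:])):
            A = np.column_stack([np.ones_like(seg_t), seg_t])
            coef, *_ = np.linalg.lstsq(A, seg_y, rcond=None)
            sse += float(np.sum((seg_y - A @ coef) ** 2))
        if sse < best_sse:
            best_t, best_sse = t[i], sse
    return best_t
```

    On a signal whose slope changes at the threshold, the estimated breakpoint time is largely insensitive to the smoothing window, which is consistent with the EMG estimates being stable across filtering intervals.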

  9. Using 50 years of soil radiocarbon data to identify optimal approaches for estimating soil carbon residence times

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.; Canessa, S.

    2013-01-01

    In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times, by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm this approach is appropriate, and emphasise that residence times can be calculated routinely with two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (‘passive fraction’), (2) the lag time between photosynthesis and C entering the modelled pool, (3) changes in the rates of C input. When carrying out approaches using robust assumptions on time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can develop useful understanding for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
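
    The two-time-point approach can be sketched with a steady-state one-pool turnover model: the soil's fraction modern is an exponentially weighted average of past atmospheric values, and the residence time is the value that reproduces both observations. The atmospheric curve below is a toy stand-in for the real bomb-spike record, and the single-pool assumption deliberately ignores the passive fraction, lag time, and input-rate issues the abstract warns about.

```python
import numpy as np

# Toy atmospheric 14C record (fraction modern, annual steps). These are
# illustrative values shaped like a bomb pulse, NOT the measured curve.
years = np.arange(1950, 2011)
atm_fm = 1.0 + 0.8 * np.exp(-(((years - 1964) / 12.0) ** 2))

def soil_fm(tau, year):
    """Steady-state one-pool model: soil fraction modern is an
    exponentially weighted average of past atmospheric inputs with
    mean residence time tau (years). Radioactive decay is negligible
    over this window and is ignored."""
    mask = years <= year
    w = np.exp(-(year - years[mask]) / tau)
    return np.sum(w * atm_fm[mask]) / np.sum(w)

def estimate_tau(obs1, obs2, taus=np.arange(1, 201)):
    """Grid-search the residence time that best reproduces two
    (year, fraction modern) observations roughly a decade apart."""
    errs = [(soil_fm(t, obs1[0]) - obs1[1]) ** 2
            + (soil_fm(t, obs2[0]) - obs2[1]) ** 2 for t in taus]
    return taus[int(np.argmin(errs))]
```

    Two observations pin down the turnover rate because the bomb-spike makes the soil curve time-dependent; a single time point, by contrast, is confounded by the passive fraction and lag assumptions discussed above.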

  10. A cluster merging method for time series microarray with production values.

    PubMed

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully by combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and to merge them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated from the mean value of the time series using the same shape-based algorithm.
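
    The merging step, grouping genes by the frequency with which the replicate-level clusterings co-assign them, can be sketched as consensus clustering over a co-assignment matrix. This is a minimal single-linkage sketch of the general idea, not the paper's exact procedure; the 0.5 agreement threshold is an assumption.

```python
import numpy as np

def coassignment_matrix(labelings):
    """Fraction of replicate clusterings in which each pair of genes
    is assembled into the same group."""
    n = len(labelings[0])
    C = np.zeros((n, n))
    for labels in labelings:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(labelings)

def merge_clusters(labelings, threshold=0.5):
    """Union-find merge: genes co-assigned in more than `threshold`
    of the replicate clusterings end up in the same merged group
    (single linkage over the co-assignment graph)."""
    C = coassignment_matrix(labelings)
    n = C.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if C[i, j] > threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

    Merged groups can then be ranked by their mean co-assignment frequency, mirroring the paper's ranking of clusters by the level of agreement among individual time series.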

  11. unWISE: Unblurred Coadds of the WISE Imaging

    NASA Astrophysics Data System (ADS)

    Lang, Dustin

    2014-05-01

    The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.

  12. Botulinum toxin a in the treatment of chronic tension-type headache with cervical myofascial trigger points: a randomized, double-blind, placebo-controlled pilot study.

    PubMed

    Harden, R Norman; Cottrill, Jerod; Gagnon, Christine M; Smitherman, Todd A; Weinland, Stephan R; Tann, Beverley; Joseph, Petra; Lee, Thomas S; Houle, Timothy T

    2009-05-01

    To evaluate the efficacy of botulinum toxin A (BT-A) as a prophylactic treatment for chronic tension-type headache (CTTH) with myofascial trigger points (MTPs) producing referred head pain. Although BT-A has received mixed support for the treatment of TTH, deliberate injection directly into the cervical MTPs very often found in this population has not been formally evaluated. Patients with CTTH and specific MTPs producing referred head pain were assigned randomly to receive intramuscular injections of BT-A or isotonic saline (placebo) in a double-blind design. Daily headache diaries, pill counts, trigger point pressure algometry, range of motion assessment, and responses to standardized pain and psychological questionnaires were used as outcome measures; patients returned for follow-up assessment at 2 weeks, 1 month, 2 months, and 3 months post injection. After 3 months, all patients were offered participation in an open-label extension of the study. Effect sizes were calculated to index treatment effects among the intent-to-treat population; individual time series models were computed for average pain intensity. The 23 participants reported experiencing headache on a near-daily basis (average of 27 days/month). Compared with placebo, patients in the BT-A group reported greater reductions in headache frequency during the first part of the study (P = .013), but these effects dissipated by week 12. Reductions in headache intensity over time did not differ significantly between groups (P = .80; maximum d = 0.13), although a larger proportion of BT-A patients showed evidence of statistically significant improvements in headache intensity in the time series analyses (62.5% for BT-A vs 30% for placebo). There were no differences between the groups on any of the secondary outcome measures. The evidence for BT-A in headache is mixed, and even more so in CTTH. 
However, the putative technique of injecting BT-A directly into the ubiquitous MTPs in CTTH is partially supported in this pilot study. Definitive trials with larger samples are needed to test this hypothesis further.

  13. From Invention to Innovation: Risk Analysis to Integrate One Health Technology in the Dairy Farm.

    PubMed

    Lombardo, Andrea; Boselli, Carlo; Amatiste, Simonetta; Ninci, Simone; Frazzoli, Chiara; Dragone, Roberto; De Rossi, Alberto; Grasso, Gerardo; Mantovani, Alberto; Brajon, Giovanni

    2017-01-01

    Current Hazard Analysis Critical Control Points (HACCP) approaches mainly fit the food industry, while their application in primary food production is still rudimentary. The European food safety framework calls for science-based support for the primary producers' mandate of legal, scientific, and ethical responsibility in the food supply. The multidisciplinary and interdisciplinary project ALERT pivots on the development of the technological invention (the BEST platform) and the application of its measurable (bio)markers, as well as scientific advances in risk analysis, at strategic points of the milk chain for time- and cost-effective early identification of unwanted and/or unexpected events of both microbiological and toxicological nature. Health-oriented innovation is complex and subject to multiple variables. Through field activities in a dairy farm in central Italy, we explored individual components of the dairy farm system to overcome concrete challenges for the application of translational science in real life and in (veterinary) public health. Based on an HACCP-like approach in animal production, the farm characterization focused on points of particular attention (POPAs) and critical control points to draw a farm management decision tree under the One Health view (environment, animal health, food safety). The analysis was based on the integrated use of checklists (environment; agricultural and zootechnical practices; animal health and welfare) and laboratory analyses of well water, feed and silage, individual fecal samples, and bulk milk. The understanding of complex systems is a condition for accomplishing true innovation through new technologies. BEST is a detection and monitoring system in support of production security, quality and safety: a grid of its (bio)markers can find direct application at critical points for early identification of potential hazards or anomalies.
The HACCP-like self-monitoring in primary production is feasible, as well as the biomonitoring of live food producing animals as sentinel population for One Health.

  14. The Odyssey of Episodic Memories: Identifying the Paths and Processes Through Which They Contribute to Well-Being.

    PubMed

    Philippe, Frederick L; Bernard-Desrosiers, Léa

    2017-08-01

    This research highlights the processes through which lasting episodic memories and their characteristic level of need satisfaction (autonomy, competence, and relatedness) can impact well-being, both at the situational level and over time. Study 1 (N = 92, mean age = 42.07 years, 72% female) investigated the effect of the unconscious activation of a personal episodic memory on situational well-being using a subliminal priming procedure. Study 2 (N = 275, mean age = 22.45 years, 84% female) followed the odyssey of an episodic memory by examining at various points over time its abstraction into perceptions of general need satisfaction and its long-term effect on well-being. Study 1 revealed that the activation of a need-satisfying memory produced an immediate increase in well-being, whereas the triggering of a need-thwarting memory led to an immediate decrease in well-being compared to controls. Study 2 revealed little influence of individual differences, but need satisfaction in episodic memories had a significant cumulative impact on well-being at different points in time over months and was abstracted into greater perceptions of general need satisfaction over time. The results provide convincing evidence for the directive function of memories on well-being, both at the situational level and over time. © 2016 Wiley Periodicals, Inc.

  15. The roles of NMDA receptor activation and nucleus reticularis gigantocellularis in the time-dependent changes in descending inhibition after inflammation.

    PubMed

    Terayama, R; Dubner, R; Ren, K

    2002-05-01

    Previous studies indicate that descending modulation of nociception is progressively increased following persistent inflammation. The present study was designed to further examine the role of supraspinal neurons in descending modulation following persistent inflammation. Constant levels of paw withdrawal (PW) and tail flick (TF) latencies to noxious heat stimuli were achieved in lightly anesthetized rats (pentobarbital sodium 3-10 mg/kg/h, i.v.). Electrical stimulation (ES, 0.1 ms, 100 Hz, 20-200 µA) was delivered to the rostral ventromedial medulla (RVM), mainly the nucleus raphe magnus (NRM). ES produced intensity-dependent inhibition of PW and TF. Following a unilateral hindpaw inflammation produced by injection of complete Freund's adjuvant (CFA), ES-produced inhibition underwent time-dependent changes. There was an initial decrease at 3 h after inflammation and a subsequent increase thereafter in the excitability of RVM neurons and the inhibition of nocifensive responses. These changes were most robust after stimulation of the inflamed paw, although similar findings were seen on the non-inflamed paw and tail. The inflammation-induced dynamic changes in descending modulation appeared to be correlated with changes in the activation of the N-methyl-D-aspartate (NMDA) excitatory amino acid receptor. Microinjection of an NMDA receptor antagonist, AP5 (1 pmol), resulted in an increase in the current intensity required for inhibition of the PW and TF. The effect of AP5 was smaller at 3 h after inflammation and significantly greater at 11-24 h after inflammation. In a subsequent experiment, ES-produced inhibition of nocifensive responses after inflammation was examined following selective chemical lesions of the nucleus reticularis gigantocellularis (NGC). Compared to vehicle-injected animals, microinjection of a soma-selective excitotoxin, ibotenic acid, enhanced ES-produced inhibition at 3 h but not at 24 h after inflammation. 
We propose that these time course changes reflect dynamic alterations in concomitant descending facilitation and inhibition. At early time points, NMDA receptor and NGC activation enhance descending facilitation; as time progresses, the dose-response curve of NMDA shifts to the left and descending inhibition dominates and masks any descending facilitation.

  16. Spatiotemporal models of global soil organic carbon stock to support land degradation assessments at regional and global scales: limitations, challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Hengl, Tomislav; Heuvelink, Gerard; Sanderman, Jonathan; MacMillan, Robert

    2017-04-01

    There is an increasing interest in fitting and applying spatiotemporal models that can be used to assess and monitor soil organic carbon stocks (SOCS), for example, in support of the '4 pour mille' initiative aiming at soil carbon sequestration for climate change adaptation and mitigation, and of the UN's Land Degradation Neutrality indicators and similar degradation assessment projects at regional and global scales. The land cover mapping community has already produced several spatiotemporal data sets with global coverage and at relatively fine resolution, e.g. USGS MODIS land cover annual maps for the period 2000-2014; European Space Agency land cover maps at 300 m resolution for the years 2000, 2005 and 2010; the Chinese GlobeLand30 dataset available for the years 2000 and 2010; and Columbia University's WRI GlobalForestWatch with deforestation maps at 30 m resolution for the period 2000-2016 (Hansen et al. 2013). These data sets can be used for land degradation assessment and scenario testing at global and regional scales (Wei et al. 2014). Currently, however, no compatible global spatiotemporal data sets exist on the status of soil quality and/or soil health (Powlson et al. 2013). This paper describes an initial effort to devise and evaluate a procedure for mapping spatiotemporal changes in SOC stocks using a complete stack of soil forming factors (climate, relief, land cover, land use, lithology and living organisms) represented mainly through remote sensing based time series of Earth images. For model building we used some 75,000 geo-referenced soil profiles and stacks of space-time covariates (land cover, land use, biomass, climate) at two standard resolutions: (1) 10 km resolution with data available for the period 1920-2014 and (2) 1000 m resolution with data available for the period 2000-2014. 
The initial results show that, although it is technically feasible to produce space-time estimates of SOCS that demonstrate the procedure, the estimates are relatively uncertain (<45% of variation explained) and lead to obvious artifacts, especially in areas that are not represented in the time dimension (temporal extrapolation). For some regions that possess somewhat adequate amounts of point data in space and time (e.g. the USA), relatively credible space-time estimates can be produced. By adding more training data (both legacy and newly collected points) these models can be gradually improved until they become operational for decision making and scenario testing.

  17. Formation kinetics of furfuryl alcohol in a coffee model system.

    PubMed

    Albouchi, Abdullatif; Murkovic, Michael

    2018-03-15

    The production of furfuryl alcohol from green coffee during roasting and the effect of multiple parameters on its formation were studied employing HPLC-DAD. Results show that coffee produces furfuryl alcohol in larger quantities (418 µg/g) compared to other beans or seeds (up to 132 µg/g) roasted under the same conditions. The kinetics of furfuryl alcohol production resemble those of other process contaminants (e.g., HMF, acrylamide) produced in coffee roasting, with the temperature and time of roasting playing significant roles in the quantities formed. Different coffee species yielded different amounts of furfuryl alcohol. The data point out that the amounts of furfuryl alcohol found in roasted coffee do not reflect the total amounts produced during roasting, because considerable amounts of furfuryl alcohol (up to 57%) evaporate and are released to the atmosphere during roasting. Finally, the moisture content was found to have little impact on furfuryl alcohol formation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Portable method of measuring gaseous acetone concentrations.

    PubMed

    Worrall, Adam D; Bernstein, Jonathan A; Angelopoulos, Anastasios P

    2013-08-15

    Measurement of acetone in human breath samples has previously been shown to provide significant non-invasive diagnostic insight into the control of a patient's diabetic condition. In patients with diabetes mellitus, the body produces excess amounts of ketones such as acetone, which are then exhaled during respiration. Various breath analysis methods have allowed for the accurate determination of acetone concentrations in exhaled breath. However, many of these methods require instrumentation and pre-concentration steps not suitable for point-of-care use. We have found that by immobilizing resorcinol reagent in a perfluorosulfonic acid polymer membrane, a controlled organic synthesis reaction occurs with acetone in a dry carrier gas. The immobilized, highly selective product of this reaction (a flavan) produces a visible-spectrum color change which can measure acetone concentrations down to sub-ppm levels. We demonstrate here how this approach can be used to produce a portable optical sensing device for real-time, non-invasive acetone analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Cosmological density fluctuations produced by vacuum strings

    NASA Astrophysics Data System (ADS)

    Vilenkin, A.

    1981-04-01

    Consideration is given to the possible role of vacuum domain strings produced in the grand unification phase transition in the early universe in the generation of the density fluctuations giving rise to galaxies. The cosmological evolution of the strings formed in the grand unification phase transition is analyzed, with attention given to possible mechanisms for the damping out of oscillations produced by tension in convoluted strings and closed loops. The cosmological density fluctuations introduced by infinite strings and closed loops smaller than the horizon are then shown to be capable of giving rise to mass condensations on a scale of approximately 10 to the 9th solar masses at the time of the decoupling of radiation from matter, around which the galaxies condense. Differences between the present theory and that suggested by Zel'dovich (1980) are pointed out, and it is noted that string formation at the grand unification phase transition is possible only if the manifold of the degenerate vacua of the gauge theory is not simply connected.

  20. Point sensitive NMR imaging system using a magnetic field configuration with a spatial minimum

    DOEpatents

    Eberhard, P.H.

    A point-sensitive NMR imaging system in which a main solenoid coil produces a relatively strong and substantially uniform magnetic field, and a pair of perturbing coils powered by current in the same direction superimposes a pair of relatively weak perturbing fields on the main field to produce a resultant point of minimum field strength at a desired location along the Z-axis. Two other pairs of perturbing coils superimpose relatively weak field gradients on the main field in directions along the X- and Y-axes to locate the minimum-field point at a desired location in a plane normal to the Z-axis. An rf generator irradiates a tissue specimen in the field with radio-frequency energy so that desired nuclei in a small volume at the point of minimum field strength will resonate.

  1. Changes in Protein Structure and Distribution Observed at Pre-Clinical Stages of Scrapie Pathogenesis

    PubMed Central

    Kretlow, Ariane; Wang, Qi; Beekes, Michael; Naumann, Dieter; Miller, Lisa M.

    2011-01-01

    Scrapie is a neurodegenerative disorder that involves the misfolding, aggregation and accumulation of the prion protein (PrP). The normal cellular PrP (PrPC) is rich in α-helical secondary structure, whereas the disease-associated pathogenic form of the protein (PrPSc) has an anomalously high β-sheet content. In this study, protein structural changes were examined in situ in the dorsal root ganglia from perorally 263K scrapie-infected and mock-infected hamsters using synchrotron Fourier Transform InfraRed Microspectroscopy (FTIRM) at four time points over the course of the disease (preclinical, 100 & 130 days post-infection (dpi); first clinical signs (~145 dpi); and terminal (~170 dpi)). Results showed clear changes in the total protein content, structure, and distribution as the disease progressed. At pre-clinical time points, the scrapie-infected animals exhibited a significant increase in protein expression, but the β-sheet protein content was significantly lower than controls. Based on these findings, we suggest that the pre-clinical stages of scrapie are characterized by an overexpression of proteins low in β-sheet content. As the disease progressed, the β-sheet content increased significantly. Immunostaining with a PrP-specific antibody, 3F4, confirmed that this increase was partly – but not solely – due to the formation of PrPSc in the tissue and indicated that other proteins high in β-sheet were produced, either by overexpression or misfolding. Elevated β-sheet was observed near the cell membrane at pre-clinical time points and also in the cytoplasm of infected neurons at later stages of infection. At the terminal stage of the disease, the protein expression declined significantly, likely due to degeneration and death of neurons. 
These dramatic changes in protein content and structure, especially at pre-clinical time points, emphasize the possibility for identifying other proteins involved in early pathogenesis, which are important for further understanding the disease. PMID:18625306

  2. Health Benefits of an Innovative Exercise Program for Mitochondrial Disorders.

    PubMed

    Fiuza-Luces, Carmen; Díez-Bermejo, Jorge; Fernández-DE LA Torre, Miguel; Rodríguez-Romo, Gabriel; Sanz-Ayán, Paz; Delmiro, Aitor; Munguía-Izquierdo, Diego; Rodríguez-Gómez, Irene; Ara, Ignacio; Domínguez-González, Cristina; Arenas, Joaquín; Martín, Miguel A; Lucia, Alejandro; Morán, María

    2018-06-01

    We determined the effects of an innovative 8-wk exercise intervention (aerobic, resistance, and inspiratory muscle training) for patients with mitochondrial disease. Several end points were assessed in 12 patients (19-59 yr, 4 women) at pretraining, posttraining, and after 4-wk detraining: aerobic power, muscle strength/power and maximal inspiratory pressure (main end points), ability to perform activities of daily living, body composition, quality of life, and blood myokines (secondary end points). The program was safe, with patients' adherence being 94% ± 5%. A significant time effect was found for virtually all main end points (P ≤ 0.004), indicating a training improvement. Similar findings (P ≤ 0.003) were found for activities of daily living tests, total/trunk/leg lean mass, total fat mass, femoral fracture risk, and general health perception. No differences were found for blood myokines, except for an acute exertional increase in interleukin 8 at posttraining/detraining (P = 0.002) and in fatty acid binding protein 3 at detraining (P = 0.002). An intervention including novel exercises for mitochondrial disease patients (e.g., inspiratory muscle training) produced benefits in numerous indicators of physical capacity and induced a previously unreported shift toward a healthier body composition phenotype.

  3. Laser Surveying

    NASA Technical Reports Server (NTRS)

    1978-01-01

    NASA technology has produced a laser-aided system for surveying land boundaries in difficult terrain. It does the job more accurately than conventional methods, takes only one-third the time normally required, and is considerably less expensive. In surveying to mark property boundaries, the objective is to establish an accurate heading between two "corner" points. This is conventionally accomplished by erecting a "range pole" at one point and sighting it from the other point through an instrument called a theodolite. But how do you take a heading between two points which are not visible to each other, for instance, when tall trees, hills or other obstacles obstruct the line of sight? That was the problem confronting the U.S. Department of Agriculture's Forest Service. The Forest Service manages 187 million acres of land in 44 states and Puerto Rico. Unfortunately, National Forest System lands are not contiguous but intermingled in complex patterns with privately-owned land. In recent years much of the private land has been undergoing development for purposes ranging from timber harvesting to vacation resorts. There is a need for precise boundary definition so that both private owners and the Forest Service can manage their properties with confidence that they are not trespassing on the other's land.

  4. A smart market for nutrient credit trading to incentivize wetland construction

    NASA Astrophysics Data System (ADS)

    Raffensperger, John F.; Prabodanie, R. A. Ranga; Kostel, Jill A.

    2017-03-01

    Nutrient trading and constructed wetlands are widely discussed solutions for reducing nutrient pollution. Nutrient markets usually include agricultural nonpoint sources and municipal and industrial point sources, but these markets rarely include investors who construct wetlands to sell nutrient reduction credits. We propose a new market design for trading nutrient credits, with both point-source and nonpoint-source traders, explicitly incorporating the option for landowners to build nutrient removal wetlands. The proposed trading program is designed as a smart market with centralized clearing, performed with an optimization. The market design addresses the varying impacts of runoff over space and time, and the lumpiness of wetland investments. We simulated the market for the Big Bureau Creek watershed in north-central Illinois. We found that the proposed smart market would incentivize wetland construction by assuring reasonable payments for the ecosystem services provided. The proposed market mechanism selects wetland locations strategically, taking into account both cost and nutrient removal efficiencies. The centralized market produces locational prices that would incentivize farmers, whose participation is voluntary, to reduce nutrients. As we illustrate, wetland builders' participation in nutrient trading would enable point sources and environmental organizations to buy low-cost nutrient credits.
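    The paper's actual clearing step is a spatially explicit optimization; as a much-simplified sketch of the core idea, a single-constraint market can be cleared by merit order, accepting the cheapest reduction offers until the watershed target is met. All sellers, quantities, and prices below are invented for illustration.

```python
# Hypothetical reduction offers: (seller, capacity in kg N removed, asking price $/kg).
offers = [
    ("wetland_A", 400, 6.0),
    ("farm_B",    150, 9.0),
    ("wetland_C", 300, 4.5),
    ("farm_D",    200, 12.0),
]

def clear_market(offers, target_kg):
    """Accept the cheapest offers until the watershed reduction target is met.
    Returns (accepted quantities by seller, uniform clearing price)."""
    accepted, remaining, price = {}, target_kg, 0.0
    for seller, cap, ask in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(cap, remaining)
        accepted[seller] = take
        remaining -= take
        price = ask  # the marginal accepted offer sets the uniform price
    return accepted, price

accepted, price = clear_market(offers, target_kg=700)
```

    In the real smart market, the single target would be replaced by many locational constraints reflecting how runoff from each site propagates through the watershed, which is what produces locational prices rather than one uniform price.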

  5. A New Era in Geodesy and Cartography: Implications for Landing Site Operations

    NASA Technical Reports Server (NTRS)

    Duxbury, T. C.

    2001-01-01

    The Mars Global Surveyor (MGS) Mars Orbiter Laser Altimeter (MOLA) global dataset has ushered in a new era for Mars local and global geodesy and cartography. These data include the global digital terrain model (DTM radii), the global digital elevation model (DEM elevation with respect to the geoid), and the higher-spatial-resolution individual MOLA ground tracks. Currently there are about 500,000,000 MOLA points, and this number continues to grow as MOLA continues successful operations in orbit about Mars. The combined processing of radiometric X-band Doppler and ranging tracking of MGS together with millions of MOLA orbital crossover points has produced global geodetic and cartographic control having a spatial (latitude/longitude) accuracy of a few meters and a topographic accuracy of less than 1 meter. This means that the position of an individual MOLA point with respect to the center of mass of Mars is known to an absolute accuracy of a few meters. The positional accuracy of this point in inertial space over time is controlled by the spin-rate uncertainty of Mars, which amounts to less than 1 km over 10 years and will be improved significantly with the next landed mission.

  6. A telemedicine model for integrating point-of-care testing into a distributed health-care environment.

    PubMed

    Villalar, J L; Arredondo, M T; Meneu, T; Traver, V; Cabrera, M F; Guillen, S; Del Pozo, F

    2002-01-01

    Centralized testing demands costly laboratories, which are inefficient and may provide poor services. Recent advances make it feasible to move clinical testing nearer to patients and the requesting physicians, thus reducing the time to treatment. Internet technologies can be used to create a virtual laboratory information system in a distributed health-care environment. This allows clinical testing to be transferred to a cooperative scheme of several point-of-care testing (POCT) nodes. Two pilot virtual laboratories were established, one in Italy (AUSL Modena) and one in Greece (Athens Medical Centre). They were constructed on a three-layer model to allow both technical and clinical verification. Different POCT devices were connected. The pilot sites produced good preliminary results in relation to user acceptance, efficiency, convenience and costs. Decentralized laboratories can be expected to become cost-effective.

  7. Digital ac monitor

    DOEpatents

    Hart, George W.; Kern, Jr., Edward C.

    1987-06-09

    An apparatus and method is provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveform in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized current and voltage samples and using the stored digitized current and voltage samples to calculate a variety of electrical parameters; some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform using the upward zero crossover point as a starting point for a digital timer.
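    The frequency-measurement step described in this record (timing three cycles between upward zero crossings of the voltage waveform) can be sketched in a few lines. The sampling rate and the 60 Hz test waveform below are assumptions, not values from the patent.

```python
import numpy as np

fs = 10_000                       # sampling rate, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)     # 0.2 s of samples
v = np.sin(2 * np.pi * 60.0 * t)  # hypothetical 60 Hz voltage waveform

# Upward zero crossings: a sample <= 0 immediately followed by a sample > 0.
up = np.flatnonzero((v[:-1] <= 0) & (v[1:] > 0))

# Time three full cycles between the first and fourth upward crossing,
# then frequency = cycles / elapsed time.
elapsed = (up[3] - up[0]) / fs
freq = 3.0 / elapsed
```

    Timing several cycles rather than one reduces the quantization error from the discrete sample grid, which is presumably why the patent times three cycles with a digital timer.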

  8. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1996-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
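    The signal chain in this record (integrate acceleration to velocity, calibrate, compare against a trip point) can be mimicked digitally. The sample rate, vibration amplitude, and trip threshold below are invented for illustration; the patent implements this chain in analog circuitry, not software.

```python
import numpy as np

fs = 5_000                            # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Hypothetical broadband accelerometer signal: a 120 Hz vibration, in m/s^2.
accel = 3.0 * np.sin(2 * np.pi * 120.0 * t)

# Integrate acceleration to velocity (trapezoidal rule), then remove the
# DC offset left behind by the integration constant.
vel = np.concatenate(([0.0], np.cumsum((accel[1:] + accel[:-1]) / 2) / fs))
vel -= vel.mean()

rms_velocity = np.sqrt(np.mean(vel ** 2))   # m/s, the value fed to the display
TRIP_POINT = 0.002                          # assumed alert threshold, m/s
alert = rms_velocity > TRIP_POINT
```

    For a pure tone the velocity amplitude is a/(2*pi*f), so a 3 m/s^2 vibration at 120 Hz gives an RMS velocity near 2.8 mm/s, above the assumed trip point.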

  9. Coalbed-methane pilots - timing, design, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roadifer, R.D.; Moore, T.R.

    2009-10-15

    Four distinct sequential phases form a recommended process for coalbed-methane (CBM) prospect assessment: initial screening, reconnaissance, pilot testing, and final appraisal. Stepping through these four phases provides a program of progressively ramping work and cost, while creating a series of discrete decision points at which results and risks can be assessed. While discussing each of these phases to some degree, this paper focuses on the third, the critically important pilot-testing phase. This paper contains roughly 30 specific recommendations and the fundamental rationale behind each, to help ensure that a CBM pilot will fulfill its primary objectives of (1) demonstrating whether the subject coal reservoir will desorb and produce consequential gas and (2) gathering the data critical to evaluate and risk the prospect at the next, often most critical, decision point.

  10. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1998-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.

  11. Digital ac monitor

    DOEpatents

    Hart, G.W.; Kern, E.C. Jr.

    1987-06-09

    An apparatus and method is provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveform in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized current and voltage samples and using the stored digitized current and voltage samples to calculate a variety of electrical parameters; some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform using the upward zero crossover point as a starting point for a digital timer. 24 figs.

  12. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    PubMed Central

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe

    2017-01-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
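    The paper's pipeline (FAST features, IMU-guided template matching, resampling on an FPGA) is far richer than can be shown here; as a deliberately minimal stand-in, the sketch below registers frames by a brute-force integer translation search and averages them, which captures only the stack-after-align idea. The window size, search radius, and test images are all invented.

```python
import numpy as np

def estimate_shift(ref, img, max_shift=3):
    """Brute-force search for the integer (dy, dx) translation that best
    aligns img onto ref, by maximizing overlap correlation."""
    best, best_score = (0, 0), -np.inf
    m = max_shift
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = np.sum(ref[m:-m, m:-m] * shifted[m:-m, m:-m])
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def stack(images):
    """Register each frame to the first, then average: the stacked result
    mimics a long exposure without the motion blur."""
    ref = images[0].astype(float)
    acc = ref.copy()
    for img in images[1:]:
        dy, dx = estimate_shift(ref, img.astype(float))
        acc += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    return acc / len(images)

rng = np.random.default_rng(0)
base = rng.random((32, 32))
frames = [base,
          np.roll(base, (1, -2), axis=(0, 1)),
          np.roll(base, (-1, 1), axis=(0, 1))]
stacked = stack(frames)
```

    In the real system the IMU seeds the search with a predicted position, shrinking the search window dramatically, and the transformation estimated is a full geometric one rather than a pure translation.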

  13. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    PubMed

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  14. Effects of 100MeV protons delivered at 0.5 or 1cGy/min on the in vivo induction of early and delayed chromosomal damage.

    PubMed

    Rithidech, Kanokporn Noy; Honikel, Louise M; Reungpatthanaphong, Paiboon; Tungjai, Montree; Golightly, Marc; Whorton, Elbert B

    2013-08-30

    Little is known about the in vivo cytogenetic effects of protons delivered at the doses and dose rates encountered in space. We determined the effects of 100MeV protons, one of the most abundant types of protons produced during solar particle events (SPE), on the induction of chromosome aberrations (CAs) in bone marrow (BM) cells collected at early (3 and 24h) and late (6 months) time-points from groups of BALB/cJ mice (a known radiosensitive strain) exposed whole-body to 0 (sham-controls), 0.5, or 1.0Gy of 100MeV protons, delivered at 0.5 or 1.0cGy/min. These doses and dose rates are comparable to those produced during SPEs. Additionally, groups of mice were exposed to 0 or 1Gy of (137)Cs γ rays (delivered at 1cGy/min) as a reference radiation. The kinetics of formation/reduction of gamma-histone 2-AX (γH2AX) were determined in BM cells collected at 1.5, 3, and 24h post-irradiation to assess the early response. There were five mice per treatment group per harvest time. Our data indicated that the kinetics of γH2AX formation/reduction differed, depending on the dose and dose rate of protons. Highly significant numbers of abnormal cells and chromatid breaks (p<0.01), relative to those in sham-control groups, were detected in BM cells collected at each time-point, regardless of dose or dose rate. The finding of significant increases in the frequencies of delayed non-clonal and clonal CAs in BM cells collected at a late time-point from exposed mice suggested that 0.5 or 1Gy of 100MeV protons is capable of inducing genomic instability in BM cells. However, the extent of effects induced by these two low dose rates was comparable. Further, the results showed that the in vivo cytogenetic effects induced by 1Gy of 100MeV protons or (137)Cs γ rays (delivered at 1cGy/min) were similar. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. ERTS-1 imagery as an aid to the understanding of the regional setting of base metal deposits in the North West Cape Province, South Africa. [mineral exploration

    NASA Technical Reports Server (NTRS)

    Viljoen, R. P.

    1974-01-01

    A number of base metal finds have recently focussed attention on the North Western Cape Province of South Africa as an area of great potential mineral wealth. From the point of view of competitive mineral exploration it was essential that an insight into the regional geological controls of the base metal mineralization of the area be obtained as rapidly as possible. Conventional methods of producing a suitable regional geological map were considered to be too time-consuming and ERTS-1 imagery was consequently examined. This imagery has made a significant contribution in the compilation of a suitable map on which to base further mineral exploration programmes. The time involved in the compilation of maps of this nature was found to be only a fraction of the time necessary for the production of similar maps using other methods. ERTS imagery is therefore considered to be valuable in producing accurate regional maps in areas where little or no geological data are available, or in areas of poor access. Furthermore, these images have great potential for rapidly defining the regional extent of metallogenic provinces.

  16. Influence of Cements Containing Calcareous Fly Ash as a Main Component on Properties of Fresh Cement Mixtures

    NASA Astrophysics Data System (ADS)

    Gołaszewski, Jacek; Kostrzanowska-Siedlarz, Aleksandra; Ponikiewski, Tomasz; Miera, Patrycja

    2017-10-01

    The main goal of the presented research was to examine the usability of cements containing calcareous fly ash (W) from a technological point of view. In the paper, the results of tests concerning the influence of CEM II and CEM IV cements containing fly ash (W) on rheological properties, air content, setting times and plastic shrinkage of mortars are presented and discussed. Moreover, the compatibility of plasticizers with cements containing fly ash (W) was also studied. Additionally, the setting time and hydration heat of cements containing calcareous fly ash (W) were determined. In a broader aspect, the research contributes to promoting the use of calcareous fly ash (W) in cement and concrete technology, which greatly benefits environmental protection (utilization of waste fly ash). Calcareous fly ash can be used successfully as the main component of cement. Cements produced by blending with processed fly ash, or by intergrinding, are characterized by acceptable technological properties. Compared with CEM I cements, cements containing calcareous fly ash worsen workability, decrease air content, and delay the setting time of mixtures. Cements with calcareous fly ash show good compatibility with plasticizers.

  17. Mapping Cryo-volcanic Activity from Enceladus’ South Polar Region

    NASA Astrophysics Data System (ADS)

    Tigges, Mattie; Spitale, Joseph N.

    2017-10-01

    Using Cassini images of Enceladus’ south polar plumes taken at various times and orbital locations, we are producing maps of eruptive activity over the orbit. The purpose of this work is to understand the mechanism that controls the cryo-volcanic eruptions. The current hypothesis is that Tiger Stripe activity is modulated by tidal forcing, which would predict a correlation between orbital phase and the amount and distribution of eruptive activity. The precise nature of those correlations depends on how the crust is failing and how the plumbing system is organized. We use simulated curtains of ejected material that are superimposed over Cassini images obtained during thirteen different flybys between mid-2009 and mid-2012. Each set represents a different time and location in Enceladus’ orbit about Saturn, and contains images of the plumes from various angles. Shadows cast onto the backlit ejected material by the terminator of the moon are used to determine which fractures were active at that point in the orbit. Maps of the spatial distribution of eruptive activity at various orbital phases can be used to evaluate hypotheses about the failure modes that produce the eruptions.

  18. The molecular physics of photolytic fractionation of sulfur and oxygen isotopes in planetary atmospheres (Invited)

    NASA Astrophysics Data System (ADS)

    Johnson, M. S.; Schmidt, J. A.; Hattori, S.; Danielache, S.; Meusinger, C.; Schinke, R.; Ueno, Y.; Nanbu, S.; Kjaergaard, H. G.; Yoshida, N.

    2013-12-01

    Atmospheric photochemistry is able to produce large mass independent anomalies in atmospheric trace gases that can be found in geological and cryospheric records. This talk will present theoretical and experimental investigations of the molecular mechanisms producing photolytic fractionation of isotopes with special attention to sulfur and oxygen. The zero point vibrational energy (ZPE) shift and reflection principle theories are starting points for estimating isotopic fractionation, but these models ignore effects arising from isotope-dependent changes in couplings between surfaces, excited state dynamics, line densities and hot band populations. The isotope-dependent absorption spectra of the isotopologues of HCl, N2O, OCS, CO2 and SO2 have been examined in a series of papers and these results are compared with experiment and ZPE/reflection principle models. Isotopic fractionation in planetary atmospheres has many interesting applications. The UV absorption of CO2 is the basis of photochemistry in the CO2-rich atmospheres of the ancient Earth, and of Mars and Venus. For the first time we present accurate temperature and isotope dependent CO2 absorption cross sections with important implications for photolysis rates of SO2 and H2O, and the production of a mass independent anomaly in the Ox reservoir. Experimental and theoretical results for OCS have implications for the modern stratospheric sulfur budget. The absorption bands of SO2 are complex with rich structure producing isotopic fractionation in photolysis and photoexcitation.

  19. Optical EVPA rotations in blazars: testing a stochastic variability model with RoboPol data

    NASA Astrophysics Data System (ADS)

    Kiehlmann, S.; Blinov, D.; Pearson, T. J.; Liodakis, I.

    2017-12-01

    We identify rotations of the polarization angle in a sample of blazars observed for three seasons with the RoboPol instrument. A simplistic stochastic variability model is tested against this sample of rotation events. The model is capable of producing samples of rotations with parameters similar to the observed ones, but fails to reproduce the polarization fraction at the same time. Even though we can neither accept nor conclusively reject the model, we point out various aspects of the observations that are fully consistent with a random walk process.
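    One common form of such a stochastic model treats the jet as many independently polarized cells whose orientations renew at random, so the summed Stokes parameters Q and U perform a random-walk-like motion and the net polarization angle occasionally executes long apparent rotations. The cell count, renewal rate, and unwrapping convention below are generic assumptions, not the specific RoboPol model parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
N_CELLS, N_STEPS = 10, 200

# Each cell carries a unit polarization vector with a random EVPA; at every
# step a random subset of cells re-randomizes, so the summed Q, U wander.
angles = rng.uniform(0, np.pi, N_CELLS)
evpa = np.empty(N_STEPS)
for step in range(N_STEPS):
    flip = rng.random(N_CELLS) < 0.3          # assumed per-step renewal rate
    angles[flip] = rng.uniform(0, np.pi, flip.sum())
    Q, U = np.cos(2 * angles).sum(), np.sin(2 * angles).sum()
    evpa[step] = 0.5 * np.arctan2(U, Q)       # net polarization angle

# Unwrap the n*pi ambiguity so long smooth rotations become visible.
evpa_unwrapped = np.unwrap(2 * evpa) / 2
total_rotation_deg = np.degrees(evpa_unwrapped[-1] - evpa_unwrapped[0])
```

    Scanning `evpa_unwrapped` for monotonic excursions larger than 90 degrees is one way to count "rotation events" in such a simulation; the tension noted in the record is that parameters which reproduce the rotations tend to overpredict or underpredict the polarization fraction |(Q, U)| / N_CELLS.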

  20. Evaluation of Reaction Cross Section Data Used for Thin Layer Activation Technique

    NASA Astrophysics Data System (ADS)

    Ditrói, F.; Takács, S.; Tárkányi, F.

    2005-05-01

    Thin layer activation (TLA) is a widely used nuclear method to investigate and control the loss of material during wear, corrosion and erosion processes. The process requires knowledge of depth profiles of the investigated radioisotopes produced by charged particle bombardment. The depth distribution of the activity can be determined with direct, very time-consuming step by step measurement or by calculation from reliable cross section, stopping power and sample composition data. These data were checked experimentally at several points performing only a couple of measurements.
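    The calculated alternative to the step-by-step measurement can be sketched directly: the beam energy falls with depth according to the stopping power, and the relative activity at each depth follows the excitation function evaluated at the local energy. The incident energy, the constant stopping power, and the Gaussian-shaped cross section below are schematic assumptions, not evaluated nuclear data.

```python
import numpy as np

E0 = 15.0                 # incident proton energy, MeV (assumed)
dEdx = 40.0               # stopping power, MeV/mm (assumed constant)

def sigma(E):
    """Schematic excitation function peaking near 10 MeV (arbitrary units)."""
    return np.exp(-((E - 10.0) ** 2) / 8.0)

depth = np.linspace(0.0, E0 / dEdx, 200)   # mm, down to the particle range
E_at_depth = E0 - dEdx * depth             # beam energy vs. depth
activity = sigma(E_at_depth)               # relative activity depth profile
activity /= activity.max()                 # normalize to the peak
```

    With real cross-section and stopping-power data the same mapping gives an absolute activity profile, and the handful of direct measurements mentioned in the record serve to validate it at selected depths.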

  2. End Point of Black Ring Instabilities and the Weak Cosmic Censorship Conjecture.

    PubMed

    Figueras, Pau; Kunesch, Markus; Tunyasuvunakool, Saran

    2016-02-19

    We produce the first concrete evidence that violation of the weak cosmic censorship conjecture can occur in asymptotically flat spaces of five dimensions by numerically evolving perturbed black rings. For certain thin rings, we identify a new, elastic-type instability dominating the evolution, causing the system to settle to a spherical black hole. However, for sufficiently thin rings the Gregory-Laflamme mode is dominant, and the instability unfolds similarly to that of black strings, where the horizon develops a structure of bulges connected by necks which become ever thinner over time.

  3. Computer Vision for the Solar Dynamics Observatory (SDO)

    NASA Astrophysics Data System (ADS)

    Martens, P. C. H.; Attrill, G. D. R.; Davey, A. R.; Engell, A.; Farid, S.; Grigis, P. C.; Kasper, J.; Korreck, K.; Saar, S. H.; Savcheva, A.; Su, Y.; Testa, P.; Wills-Davey, M.; Bernasconi, P. N.; Raouafi, N.-E.; Delouille, V. A.; Hochedez, J. F.; Cirtain, J. W.; Deforest, C. E.; Angryk, R. A.; de Moortel, I.; Wiegelmann, T.; Georgoulis, M. K.; McAteer, R. T. J.; Timmons, R. P.

    2012-01-01

    In Fall 2008 NASA selected a large international consortium to produce a comprehensive automated feature-recognition system for the Solar Dynamics Observatory (SDO). The SDO data that we consider are all of the Atmospheric Imaging Assembly (AIA) images plus surface magnetic-field images from the Helioseismic and Magnetic Imager (HMI). We produce robust, very efficient, professionally coded software modules that can keep up with the SDO data stream and detect, trace, and analyze numerous phenomena, including flares, sigmoids, filaments, coronal dimmings, polarity inversion lines, sunspots, X-ray bright points, active regions, coronal holes, EIT waves, coronal mass ejections (CMEs), coronal oscillations, and jets. We also track the emergence and evolution of magnetic elements down to the smallest detectable features and will provide at least four full-disk, nonlinear, force-free magnetic field extrapolations per day. The detection of CMEs and filaments is accomplished with Solar and Heliospheric Observatory (SOHO)/Large Angle and Spectrometric Coronagraph (LASCO) and ground-based Hα data, respectively. A completely new software element is a trainable feature-detection module based on a generalized image-classification algorithm. Such a trainable module can be used to find features that have not yet been discovered (as, for example, sigmoids were in the pre-Yohkoh era). Our codes will produce entries in the Heliophysics Events Knowledgebase (HEK) as well as produce complete catalogs for results that are too numerous for inclusion in the HEK, such as the X-ray bright-point metadata. This will permit users to locate data on individual events as well as carry out statistical studies on large numbers of events, using the interface provided by the Virtual Solar Observatory.
The operations concept for our computer vision system is that the data will be analyzed in near real time as soon as they arrive at the SDO Joint Science Operations Center and have undergone basic processing. This will allow the system to produce timely space-weather alerts and to guide the selection and production of quicklook images and movies, in addition to its prime mission of enabling solar science. We briefly describe the complex and unique data-processing pipeline, consisting of the hardware and control software required to handle the SDO data stream and accommodate the computer-vision modules, which has been set up at the Lockheed-Martin Space Astrophysics Laboratory (LMSAL), with an identical copy at the Smithsonian Astrophysical Observatory (SAO).

  4. Characteristics of VLF/LF Sferics from Elve-producing Lightning Discharges

    NASA Astrophysics Data System (ADS)

    Blaes, P.; Zoghzoghy, F. G.; Marshall, R. A.

    2013-12-01

    Lightning return strokes radiate an electromagnetic pulse (EMP) which interacts with the D-region ionosphere; the largest EMPs produce new ionization, heating, and optical emissions known as elves. Elves are at least six times more common than sprites and other transient luminous events. Though the probability that a lightning return stroke will produce an elve is correlated with the return stroke peak current, many large peak current strokes do not produce visible elves. Apart from the lightning peak current, elve production may depend on the return stroke speed, lightning altitude, and ionospheric conditions. In this work we investigate the detailed structure of lightning that gives rise to elves by analyzing the characteristics of VLF/LF lightning sferics in conjunction with optical elve observations. Lightning sferics were observed using an array of six VLF/LF receivers (1 MHz sample-rate) in Oklahoma, and elves were observed using two high-speed photometers pointed over the Oklahoma region: one located at Langmuir Laboratory, NM and the other at McDonald Observatory, TX. Hundreds of elves with coincident LF sferics were observed during the summer months of 2013. We present data comparing the characteristics of elve-producing and non-elve producing lightning as measured by LF sferics. In addition, we compare these sferic and elve observations with FDTD simulations to determine key properties of elve-producing lightning.

  5. Increased productivity in poultry birds by sub-lethal dose of antibiotics is arbitrated by selective enrichment of gut microbiota, particularly short-chain fatty acid producers.

    PubMed

    Banerjee, Sohini; Sar, Abhijit; Misra, Arijit; Pal, Srikanta; Chakraborty, Arindom; Dam, Bomba

    2018-02-01

    Antibiotics are widely used at sub-lethal concentrations as a feed supplement to enhance poultry productivity. To understand antibiotic-induced temporal changes in the structure and function of the gut microbiota of chicken, two flocks were maintained for six weeks on a carbohydrate- and protein-rich diet. The feed in the conventional diet (CD) group was supplemented with sub-lethal doses of chlortetracycline, virginiamycin and amoxicillin, while the organic diet (OD) had no such addition. Antibiotic-fed birds were more productive, with a lower feed conversion ratio (FCR). Their faecal samples also had a higher total heterotrophic bacterial load and antibiotic resistance capability. Deep sequencing of 16S rDNA V1-V2 amplicons revealed Firmicutes as the most dominant phylum at all time points, with the predominant presence of Lactobacillales members in the OD group. The productivity indicator, i.e. a higher Firmicutes:Bacteroidetes ratio, particularly in the late growth phase, was more marked in CD amplicon sequences, which was supported by culture-based enumerations on selective media. CD datasets also showed the prevalence of known butyrate-producing genera such as Faecalibacterium, Ruminococcus, Blautia, Coprococcus and Bacteroides, which correlates closely with their higher PICRUSt-based in silico predicted 'glycan biosynthesis and metabolism'-related Kyoto Encyclopedia of Genes and Genomes (KEGG) orthologues. Semi-quantitative end-point PCR targeting the butyryl-CoA:acetate CoA-transferase gene also confirmed butyrate producers as being late colonizers, particularly in antibiotic-fed birds in both the CD flocks and commercial rearing farms. Thus, antibiotics preferentially enrich bacterial populations, particularly short-chain fatty acid producers that can efficiently metabolize hitherto indigestible feed material such as glycans, thereby increasing the energy budget of the host and its productivity.

  6. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.

  7. Modelling short time series in metabolomics: a functional data analysis approach.

    PubMed

    Montana, Giovanni; Berk, Maurice; Ebbels, Tim

    2011-01-01

    Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
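
    A minimal sketch of the functional-data idea, assuming a smoothing-spline representation of one observed series (the paper's approach is a proper statistical model with inference across repeated measurements and groups; the toy metabolite time course below, including its missing value, is invented for illustration):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy metabolite time course: few time points and one missing value (NaN),
# exactly the situation that defeats classical time series models.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 24.0])
y = np.array([1.00, 1.35, np.nan, 1.80, 1.45, 1.05])

mask = ~np.isnan(y)                       # drop missing values before fitting
spline = UnivariateSpline(t[mask], y[mask], k=3, s=0.01)

t_fine = np.linspace(0.0, 24.0, 200)      # the inferred smooth curve
curve = spline(t_fine)
peak_time = float(t_fine[np.argmax(curve)])
```

    Fitting one such smooth curve per experimental unit, and then comparing the group-level curves, is the backbone of the test statistic the paper proposes.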

  8. Dike emplacement and the birth of the Yellowstone hotspot, western USA

    NASA Astrophysics Data System (ADS)

    Glen, J. M.; Ponce, D. A.; Nomade, S.; John, D. A.

    2003-04-01

    The birth of the Yellowstone hotspot in middle Miocene time was marked by extensive flood basalt volcanism. Prominent aeromagnetic anomalies (referred to collectively as the Northern Nevada rifts), extending hundreds of kilometers across Nevada, are thought to represent dike swarms injected at the time of flood volcanism. Until now, however, dikes from only one of these anomalies (the eastern one) have been documented, sampled, and dated (40Ar/39Ar ages range from 15.4 +/- 0.2 to 16.7 +/- 0.5 Ma; John et al., 2000, ages recalculated using the FCS standard age of 28.02 +/- 0.28 Ma). We present new paleomagnetic data and a 40Ar/39Ar age of 16.6 +/- 0.3 Ma for a mafic dike, suggesting that all the anomalies likely originate from the same mid-Miocene fracturing event. The magnetic anomalies, together with the trends of dike swarms, faults, and fold axes, produce a radiating pattern that converges on a point near the Oregon-Idaho border. We speculate that this pattern formed by stresses imposed by the impact of the Yellowstone hotspot. Glen and Ponce (2002) propose a simple stress model to account for this fracture pattern, consisting of a point source of stress at the base of the crust and a regional stress field aligned with the presumed middle Miocene stress direction. Overlapping point and regional stresses result in stress trajectories that form a radiating pattern near the point source (i.e., the hotspot). Far from the influence of the point stress, however, stress trajectories verge towards the NNW-trending regional stress direction (i.e., plate boundary stresses), similar to the pattern of dike swarm traces. Glen and Ponce, 2002, Geology, 30, 7, 647-650; John et al., 2000, Geol. Soc. Nev. Sym. Proc., May 15-18, 2000, 127-154.

  9. Toward an integrated ice core chronology using relative and orbital tie-points

    NASA Astrophysics Data System (ADS)

    Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Toyé Mahamadou Kele, H.; Blunier, T.; Capron, E.; Chappellaz, J.; Fischer, H.; Leuenberger, M.; Lipenkov, V.; Loutre, M.-F.; Martinerie, P.; Parrenin, F.; Prié, F.; Raynaud, D.; Veres, D.; Wolff, E.

    2012-04-01

    Precise ice core chronologies are essential to better understand the mechanisms linking climate change to orbital and greenhouse gas concentration forcing. The DATICE ice core dating tool (developed by Lemieux-Dudon et al., 2010) generates a common time-scale integrating relative and absolute dating constraints on different ice cores, using an inverse method. Nevertheless, this method has so far only been applied to a four-ice-core scenario and to the 0-50 kyr time period. Here, we present the basis for an extension of this work back to 800 ka using (1) a compilation of published and new relative and orbital tie-points obtained from measurements of air trapped in ice cores and (2) an adaptation of the DATICE inputs to 5 ice cores for the last 800 ka. We first present new measurements of δ18Oatm and δO2/N2 on the Talos Dome and EPICA Dome C (EDC) ice cores, with a particular focus on Marine Isotopic Stages (MIS) 5 and 11. Then, we show two tie-point compilations. The first is based on new and published CH4 and δ18Oatm measurements on 5 ice cores (NorthGRIP, EPICA Dronning Maud Land, EDC, Talos Dome and Vostok), producing a table of relative gas tie-points over the last 400 ka. The second is based on new and published records of δO2/N2, δ18Oatm and air content, providing a table of orbital tie-points over the last 800 ka. Finally, we integrate the different dating constraints presented above in the DATICE tool adapted to 5 ice cores to cover the last 800 ka, and show how these constraints compare with the established gas chronologies of each ice core.

  10. Revealing turning points in ecosystem functioning over the Northern Eurasian agricultural frontier.

    PubMed

    Horion, Stéphanie; Prishchepov, Alexander V; Verbesselt, Jan; de Beurs, Kirsten; Tagesson, Torbern; Fensholt, Rasmus

    2016-08-01

    The collapse of the Soviet Union in 1991 was a turning point in world history that left a unique footprint on Northern Eurasian ecosystems. Conducting large-scale mapping of environmental change and separating naturogenic from anthropogenic drivers is a difficult endeavor in such highly complex systems. In this research a piece-wise linear regression method was used for breakpoint detection in Rain-Use Efficiency (RUE) time series, and a classification of ecosystem response types was produced. Supported by earth observation data, field data, and expert knowledge, this study provides empirical evidence regarding the occurrence of drastic changes in RUE (assessing the timing, direction and significance of these changes) in Northern Eurasian ecosystems between 1982 and 2011. About 36% of the study area (3.4 million km²) showed significant (P < 0.05) trends and/or turning points in RUE during the observation period. A large proportion of the detected turning points in RUE occurred around the fall of the Soviet Union in 1991 and in the following years, and were attributed to widespread agricultural land abandonment. Our study also showed that recurrent droughts deeply affected vegetation productivity throughout the observation period, with a general worsening of drought conditions in recent years. Moreover, recent human-induced turning points in ecosystem functioning were detected and attributed to ongoing recultivation and changes in irrigation practices in the Volgograd region, and to increased salinization and grazing intensity around Lake Balkhash. The ecosystem-state assessment method introduced here proved to be a valuable tool that highlighted hotspots of potentially altered ecosystems and allowed for disentangling human from climatic disturbances. © 2016 John Wiley & Sons Ltd.
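
    The breakpoint detection described above can be illustrated with a minimal single-breakpoint sketch; the study's piece-wise linear regression handles trends and significance testing far more carefully, and the synthetic RUE-like series below is invented for the example.

```python
import numpy as np

def detect_turning_point(t, y):
    """Pick the breakpoint minimising the summed squared error of two
    independent linear fits: a minimal piece-wise linear regression."""
    best_sse, best_k = np.inf, None
    for k in range(2, len(t) - 2):            # at least two points per segment
        sse = 0.0
        for seg_t, seg_y in ((t[:k], y[:k]), (t[k:], y[k:])):
            coef = np.polyfit(seg_t, seg_y, 1)
            sse += float(np.sum((np.polyval(coef, seg_t) - seg_y) ** 2))
        if sse < best_sse:
            best_sse, best_k = sse, k
    return t[best_k], best_sse

# Synthetic series: stable until the early 1990s, then a steady decline.
years = np.arange(1982, 2012)
rue = np.where(years < 1991, 1.0, 1.0 - 0.03 * (years - 1991))
rue = rue + 0.005 * np.sin(years)             # small deterministic wiggle
break_year, _ = detect_turning_point(years, rue)
```

    On real RUE data one would additionally test each candidate break for statistical significance before calling it a turning point.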

  11. Merged Real Time GNSS Solutions for the READI System

    NASA Astrophysics Data System (ADS)

    Santillan, V. M.; Geng, J.

    2014-12-01

    Real-time measurements from increasingly dense Global Navigation Satellite System (GNSS) networks located throughout the western US offer a substantial, albeit largely untapped, contribution towards the mitigation of seismic and other natural hazards. Analyzed continuously in real time, over 600 instruments currently blanket the San Andreas and Cascadia fault systems of the North American plate boundary and can provide on-the-fly characterization of transient ground displacements highly complementary to traditional seismic strong-motion monitoring. However, the utility of GNSS systems depends on their resolution, and merged solutions from two or more independent estimation strategies have been shown to offer lower scatter and higher resolution. Towards this end, independent real-time GNSS solutions produced by the Scripps Institution of Oceanography (SIO) and Central Washington University (PANGA) are now being formally combined in pursuit of NASA's Real-Time Earthquake Analysis for Disaster Mitigation (READI) positioning goals. CWU produces precise point positioning (PPP) solutions while SIO produces ambiguity-resolved PPP solutions (PPP-AR). The PPP-AR solutions have ~5 mm RMS scatter in the horizontal and ~10 mm in the vertical; however, PPP-AR solutions can take tens of minutes to re-converge after data gaps. The PPP solutions produced by CWU use pre-cleaned data in which biases are estimated as non-integer ambiguities prior to formal positioning with GIPSY 6.2, using a real-time stream editor developed at CWU. These solutions show ~20 mm RMS scatter in the horizontal and ~50 mm in the vertical but re-converge within 2 min or less following cycle slips or data outages. We have implemented the formal combination of the CWU and SIO ENU displacements using the independent solutions as input measurements to a simple 3-element-state Kalman filter plus white noise. We are now merging solutions from 90 stations, including 30 in Cascadia, 39 in the Bay Area, and 21 from S. California. Six months of merged time series demonstrate that the combined solution is more reliable and can take advantage of the strengths of the individual solutions while mitigating their weaknesses. The merging can easily be extended to three or more independent analysis strategies, which may be considered in the future.
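
    A 3-element-state Kalman merge of two displacement solutions lends itself to a compact sketch. Everything below is a minimal illustration, not the READI implementation: it assumes identity observation matrices, a random-walk state model, noise levels loosely matching the RMS scatter quoted above, and an invented process-noise value and static displacement.

```python
import numpy as np

# Measurement noise covariances (mm^2), loosely following the quoted scatter.
R_ppp_ar = np.diag([5.0, 5.0, 10.0]) ** 2     # ambiguity-resolved PPP
R_ppp    = np.diag([20.0, 20.0, 50.0]) ** 2   # float PPP
Q = np.eye(3) * 1.0                           # process noise (assumed value)

x = np.zeros(3)                               # state: E, N, U displacement (mm)
P = np.eye(3) * 1e4                           # diffuse initial covariance

def update(x, P, z, R):
    """Standard Kalman measurement update; H = I for direct observations."""
    K = P @ np.linalg.inv(P + R)
    return x + K @ (z - x), (np.eye(3) - K) @ P

rng = np.random.default_rng(0)
truth = np.array([3.0, -2.0, 5.0])            # synthetic static displacement
for _ in range(200):
    P = P + Q                                 # predict (random-walk model)
    x, P = update(x, P, truth + rng.normal(0.0, [5.0, 5.0, 10.0]), R_ppp_ar)
    x, P = update(x, P, truth + rng.normal(0.0, [20.0, 20.0, 50.0]), R_ppp)

merged_error = np.abs(x - truth)
```

    Weighting each solution by its own covariance is what lets the merge inherit the low scatter of PPP-AR while the noisier but fast-reconverging PPP stream bridges its outages.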

  12. Emergence of gravity, fermion, gauge and Chern-Simons fields during formation of N-dimensional manifolds from joining point-like ones

    NASA Astrophysics Data System (ADS)

    Sepehri, Alireza; Shoorvazi, Somayyeh

    In this paper, we consider the birth and evolution of fields during the formation of N-dimensional manifolds from the joining of point-like ones. We show that at the beginning there are only point-like manifolds, to which some strings are attached. By joining these manifolds, 1-dimensional manifolds appear and gravity, fermion, and gauge fields emerge. By coupling these manifolds, higher-dimensional manifolds are produced and higher orders of fermion, gauge and gravity fields emerge. When an N-dimensional manifold decays, two child manifolds and a Chern-Simons one are born and an anomaly emerges. The Chern-Simons manifold connects the two child manifolds and leads to energy transmission from the bulk to the manifolds and to their expansion. We show that F-gravity, which includes both fermionic and bosonic gravity, can emerge during the formation of an N-dimensional manifold from point-like manifolds. G-fields, and also C-fields which are produced by fermionic strings, produce extra energy and change the gravity.

  13. Sediment Dynamics Over a Stable Point bar of the San Pedro River, Southeastern Arizona

    NASA Astrophysics Data System (ADS)

    Hamblen, J. M.; Conklin, M. H.

    2002-12-01

    Streams of the Southwest receive enormous inputs of sediment during storm events in the monsoon season due to the high intensity rainfall and large percentages of exposed soil in the semi-arid landscape. In the Upper San Pedro River, with a watershed area of approximately 3600 square kilometers, particle size ranges from clays to boulders with large fractions of sand and gravel. This study focuses on the mechanics of scour and fill on a stable point bar. An innovative technique using seven co-located scour chains and liquid-filled, load-cell scour sensors characterized sediment dynamics over the point bar during the monsoon season of July to September 2002. The sensors were set in two transects to document sediment dynamics near the head and toe of the bar. Scour sensors record area-averaged sediment depths while scour chains measure scour and fill at a point. The average area covered by each scour sensor is 11.1 square meters. Because scour sensors have never been used in a system similar to the San Pedro, one goal of the study was to test their ability to detect changes in sediment load with time in order to determine the extent of scour and fill during monsoonal storms. Because of the predominantly unconsolidated nature of the substrate it was hypothesized that dune bedforms would develop in events less than the 1-year flood. The weak 2002 monsoon season produced only two storms that completely inundated the point bar, both less than the 1-year flood event. The first event, 34 cms, produced net deposition in areas where Johnson grass had been present and was now buried. The scour sensor at the lowest elevation, in a depression which serves as a secondary channel during storm events, recorded scour during the rising limb of the hydrograph followed by pulses we interpret to be the passage of dunes. The second event, although smaller at 28 cms, resulted from rain more than 50 km upstream and had a much longer peak and a slowly declining falling limb. 
During the second flood, several areas with buried vegetation were scoured back to their original bed elevations. Pulses of sediment passed over the sensor in the secondary channel and the sensor in the vegetated zone. Scour sensor measurements agree with data from scour chains (error +/- 3 cm) and surveys (error +/- 0.6 cm) performed before and after the two storm events, within the range of error of each method. All load sensor data were recorded at five minute intervals. Use of a smaller interval could give more details about the shapes of sediment waves and aid in bedform determination. Results suggest that dune migration is the dominant mechanism for scour and backfill in the point bar setting. Scour sensors, when coupled with surveying and/or scour chains, are a tremendous addition to the geomorphologist's toolbox, allowing unattended real-time measurements of sediment depth with time.

  14. Data-driven nonlinear optimisation of a simple air pollution dispersion model generating high resolution spatiotemporal exposure

    NASA Astrophysics Data System (ADS)

    Yuval; Bekhor, Shlomo; Broday, David M.

    2013-11-01

    Spatially detailed estimation of exposure to air pollutants in the urban environment is needed for many air pollution epidemiological studies. To benefit studies of acute effects of air pollution, such exposure maps are required at high temporal resolution. This study introduces a nonlinear optimisation framework that produces high-resolution spatiotemporal exposure maps. An extensive traffic model output, serving as a proxy for traffic emissions, is fitted, via a nonlinear model embodying basic dispersion properties, to high temporal resolution routine observations of a traffic-related air pollutant. An optimisation problem is formulated and solved at each time point to recover the unknown model parameters. These parameters are then used to produce a detailed concentration map of the pollutant for the whole area covered by the traffic model. Repeating the process for multiple time points yields the spatiotemporal concentration field. The exposure at any location and for any span of time can then be computed by temporal integration of the concentration time series at selected receptor locations for the durations of the desired periods. The methodology is demonstrated for NO2 exposure using the output of a traffic model for the greater Tel Aviv area, Israel, and the half-hourly monitoring and meteorological data from the local air quality network. A leave-one-out cross-validation resulted in simulated half-hourly concentrations that are almost unbiased compared to the observations, with a mean error (ME) of 5.2 ppb, a normalised mean error (NME) of 32%, 78% of the simulated values within a factor of two (FAC2) of the observations, and a coefficient of determination (R2) of 0.6. The whole-study-period integrated exposure estimates are also unbiased compared with their corresponding observations, with an ME of 2.5 ppb, an NME of 18%, a FAC2 of 100% and an R2 of 0.62.
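
    The per-time-point parameter recovery can be illustrated with a toy fit. Everything here is an assumption for illustration only: a simple power-law distance-decay model standing in for the paper's dispersion model, synthetic traffic and concentration data, and scipy's curve_fit as the nonlinear solver.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy dispersion-like model: concentration at a receptor scales with a
# traffic-emission proxy q and decays with source-receptor distance d.
def model(X, a, b, c0):
    q, d = X
    return a * q / d ** b + c0

rng = np.random.default_rng(1)
q = rng.uniform(100.0, 1000.0, 40)      # traffic-flow proxy at 40 receptors
d = rng.uniform(50.0, 500.0, 40)        # distance to dominant road (m)
true = (0.8, 1.2, 4.0)                  # "unknown" parameters at this time point
c = model((q, d), *true) + rng.normal(0.0, 0.3, 40)   # observed concentrations

params, _ = curve_fit(model, (q, d), c, p0=(1.0, 1.0, 1.0))
```

    Once the parameters are recovered for a time point, evaluating the model over the full traffic-model grid gives the detailed concentration map for that half hour.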

  15. Validation of VIIRS Land Surface Phenology using Field Observations, PhenoCam Imagery, and Landsat data

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Jayavelu, S.; Wang, J.; Henebry, G. M.; Gray, J. M.; Friedl, M. A.; Liu, Y.; Schaaf, C.; Shuai, A.

    2016-12-01

    A large number of land surface phenology (LSP) products have been produced from various detection algorithms applied to coarse resolution satellite datasets across regional to global scales. However, validation of the resulting LSP products is very challenging because in-situ observations at comparable spatiotemporal scales are generally not available. This research focuses on efforts to evaluate and validate the global 500 m LSP product produced from Visible Infrared Imaging Radiometer Suite (VIIRS) NBAR time series for 2013 and 2014. Specifically, we used three different datasets to evaluate six VIIRS LSP metrics: greenup onset, mid-point of the greenup phase, maturity onset, senescence onset, mid-point of the senescence phase, and dormancy onset. First, we obtained field observations from the USA National Phenology Network, which has gathered extensive phenological data on individual species. Although it is inappropriate to compare these data directly with the LSP footprints, this large and spatially distributed dataset allows us to evaluate the overall quality of VIIRS LSP results. Second, we gathered PhenoCam imagery from 164 sites, which was used to extract daily green chromatic coordinate (GCC) and vegetation contrast index (VCI) values. Utilizing these PhenoCam time series, the phenological events were quantified using a hybrid piecewise logistic model for each site. Third, we detected the phenological timing at the landscape scale (30 m) from surface reflectance simulated by fusing MODIS data and Landsat 8 OLI observations in an agricultural area (in the central USA) and from overlap zones of OLI scenes in semiarid areas (California and Tibetan Plateau). The phenological timing from these three datasets was used for comparison with the VIIRS LSP data.
Preliminary results show that the VIIRS LSP are generally comparable with phenological data from the USA-NPN, PhenoCam, and Landsat data, with differences arising in specific phenological events and land cover types.
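
    The logistic-model timing extraction can be sketched as follows, assuming a single logistic fit to a synthetic spring GCC rise (the actual product uses hybrid piecewise logistic models over full seasonal trajectories); the mid-point of the greenup phase is read off the fitted inflection point.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic: background GCC, seasonal amplitude, rate, inflection.
def logistic(t, base, amp, k, t_mid):
    return base + amp / (1.0 + np.exp(-k * (t - t_mid)))

doy = np.arange(60, 181, 8)                           # spring acquisition dates
gcc = logistic(doy, 0.33, 0.10, 0.15, 120.0)          # synthetic greenup curve
gcc = gcc + np.random.default_rng(2).normal(0.0, 0.003, doy.size)

p, _ = curve_fit(logistic, doy, gcc, p0=(0.3, 0.1, 0.1, 130.0), maxfev=5000)
mid_greenup = p[3]                                    # day of year of inflection
```

    The other metrics (greenup onset, maturity onset, and their autumn counterparts) fall out of the curvature extrema of the same fitted curve.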

  16. Point-of-Purchase Advertising. Learning Activity.

    ERIC Educational Resources Information Center

    Shackelford, Ray

    1998-01-01

    In this technology education activity, students learn the importance of advertising, conduct a day-long survey of advertising strategies, and design and produce a tabletop point-of-purchase advertisement. (JOW)

  17. Effects of Orbit and Pointing Geometry of a Spaceborne Formation for Monostatic-Bistatic Radargrammetry on Terrain Elevation Measurement Accuracy

    PubMed Central

    Renga, Alfredo; Moccia, Antonio

    2009-01-01

    During the last decade a methodology for the reconstruction of surface relief by Synthetic Aperture Radar (SAR) measurements – SAR interferometry – has become a standard. Different techniques developed before, such as stereo-radargrammetry, have been experienced from space only in very limited geometries and time series and hence were branded as less accurate. However, novel formation flying configurations achievable by modern spacecraft allow fulfillment of SAR missions able to produce pairs of monostatic-bistatic images gathered simultaneously, with programmed looking angles. Hence it is possible to achieve large antenna separations, adequate for exploiting the stereoscopic effect to the utmost, and to make time decorrelation – a strong limiting factor for repeat-pass stereo-radargrammetric techniques – negligible. This paper reports on the design of a monostatic-bistatic mission, in terms of orbit and pointing geometry, taking into account present-generation SAR and technology for accurate relative navigation. Performances of different methods for monostatic-bistatic stereo-radargrammetry are then evaluated, showing the possibility of determining the local surface relief with metric accuracy over a wide range of Earth latitudes. PMID:22389594

  18. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
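
    The image-topology idea can be sketched with a distance filter on flight-control positions: only image pairs whose camera positions are close enough to plausibly overlap are passed to feature matching, instead of trying all N*(N-1)/2 combinations. The grid layout and baseline threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from itertools import combinations

# Assumed flight plan: a 5-row x 8-column survey grid of camera positions,
# 30 m along-track and 25 m cross-track spacing (illustrative numbers).
positions = np.array([[x * 30.0, y * 25.0] for y in range(5) for x in range(8)])
max_baseline = 45.0                   # metres; beyond this, assume no overlap

candidate_pairs = [
    (i, j) for i, j in combinations(range(len(positions)), 2)
    if np.linalg.norm(positions[i] - positions[j]) <= max_baseline
]

all_pairs = len(positions) * (len(positions) - 1) // 2
reduction = 1.0 - len(candidate_pairs) / all_pairs    # fraction of pairs skipped
```

    On this toy grid only immediate neighbours (including diagonals) survive the filter, so the expensive pairwise feature matching runs on a small fraction of all combinations.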

  19. Strategies for single-point diamond machining a large format germanium blazed immersion grating

    NASA Astrophysics Data System (ADS)

    Montesanti, R. C.; Little, S. L.; Kuzmenko, P. J.; Bixler, J. V.; Jackson, J. L.; Lown, J. G.; Priest, R. E.; Yoxall, B. E.

    2016-07-01

    A large format germanium immersion grating was flycut with a single-point diamond tool on the Precision Engineering Research Lathe (PERL) at the Lawrence Livermore National Laboratory (LLNL) in November-December 2015. The grating, referred to as 002u, has an area of 59 mm x 67 mm (along-groove and cross-groove directions), a line pitch of 88 lines/mm, and a blaze angle of 32 degrees. Based on total groove length, the 002u grating is five times larger than the previous largest grating (ZnSe) cut on PERL, and forty-five times larger than the previous largest germanium grating cut on PERL. The key risks associated with cutting the 002u grating were tool wear and keeping the PERL machine running uninterrupted in a stable machining environment. This paper presents the strategies employed to mitigate these risks, introduces pre-machining of the as-etched grating substrate to produce a smooth, flat, damage-free surface into which the grooves are cut, and reports on trade-offs that drove decisions and experimental results.

  20. Simulation of the Francis-99 Hydro Turbine During Steady and Transient Operation

    NASA Astrophysics Data System (ADS)

    Dewan, Yuvraj; Custer, Chad; Ivashchenko, Artem

    2017-01-01

    Numerical simulations of the Francis-99 hydroturbine, with correlation to experimental measurements, are presented. Steady operation of the hydroturbine is analyzed at three operating conditions: the best efficiency point (BEP), high load (HL), and part load (PL). It is shown that global quantities such as net head, discharge and efficiency are well predicted. Additionally, time-averaged velocity predictions compare well with PIV measurements obtained in the draft tube immediately downstream of the runner. Differences in vortex rope structure between operating points are discussed. Unsteady operation of the hydroturbine from BEP to HL and from BEP to PL is modeled. It is shown that the simulation methods used to model steady operation produce predictions that correlate well with experiment for transient operation. Time-domain unsteady simulation is used for both steady and unsteady operation. The full-fidelity geometry including all components is meshed using an unstructured polyhedral mesh with body-fitted prism layers. Guide vane rotation for transient operation is imposed using fully-conservative, computationally efficient mesh morphing. The commercial solver STAR-CCM+ is used for all portions of the analysis including meshing, solving and post-processing.

  1. TeO$$_2$$ bolometers with Cherenkov signal tagging: towards next-generation neutrinoless double-beta decay experiments

    DOE PAGES

    Casali, N.; Vignati, Marco; Beeman, J. W.; ...

    2015-01-14

    CUORE, an array of 988 TeO$$_2$$ bolometers, is about to become one of the most sensitive experiments searching for neutrinoless double-beta decay. Its sensitivity could be further improved by removing the background from α radioactivity. A few years ago it was pointed out that the signal from β particles can be tagged by detecting the emitted Cherenkov light, which is not produced by α particles. In this paper we confirm this possibility. For the first time we measured the Cherenkov light emitted by a CUORE crystal, and found it to be 100 eV at the Q-value of the decay. To completely reject the α background, we compute that one needs light detectors with baseline noise below 20 eV RMS, a value 3–4 times smaller than the average noise of the bolometric light detectors we are using. We point out that an improved light detector technology must be developed to obtain TeO$$_2$$ bolometric experiments able to probe the inverted hierarchy of neutrino masses.

  2. Photochemical Ignition Studies. I. Laser Ignition of Flowing Premixed Gases

    DTIC Science & Technology

    1985-02-01

    A.W. Miziolek, R.C. Sausa, and A.J. Alfano, "Efficient Detection of Carbon Atoms Produced by Argon...," Army Science Conference, West Point, 1984. (Only this reference-list citation is recoverable from the OCR fragment; the remainder is distribution-list residue.)

  3. Direct production of fractionated and upgraded hydrocarbon fuels from biomass

    DOEpatents

    Felix, Larry G.; Linck, Martin B.; Marker, Terry L.; Roberts, Michael J.

    2014-08-26

    Multistage processing of biomass to produce at least two separate fungible fuel streams, one dominated by gasoline boiling-point range liquids and the other by diesel boiling-point range liquids. The processing involves hydrotreating the biomass to produce a hydrotreatment product including a deoxygenated hydrocarbon product of gasoline and diesel boiling materials, followed by separating each of the gasoline and diesel boiling materials from the hydrotreatment product and each other.

  4. Taking the Easy Way Out: How the GED Testing Program Induces Students to Drop Out.

    PubMed

    Heckman, James J; Humphries, John Eric; Lafontaine, Paul A; Rodríguez, Pedro L

    2012-07-01

    The option to obtain a General Education Development (GED) certificate changes the incentives facing high school students. This paper evaluates the effect of three different GED policy innovations on high school graduation rates. A six point decrease in the GED pass rate due to an increase in passing standards produced a 1.3 point decline in overall dropout rates. The introduction of a GED certification program in high schools in Oregon produced a four percent decrease in graduation rates. Introduction of GED certificates in California increased dropout rates by 3 points. The GED program induces high school students to drop out.

  5. Taking the Easy Way Out: How the GED Testing Program Induces Students to Drop Out

    PubMed Central

    Heckman, James J.; Humphries, John Eric; LaFontaine, Paul A.; Rodríguez, Pedro L.

    2011-01-01

    The option to obtain a General Education Development (GED) certificate changes the incentives facing high school students. This paper evaluates the effect of three different GED policy innovations on high school graduation rates. A six point decrease in the GED pass rate due to an increase in passing standards produced a 1.3 point decline in overall dropout rates. The introduction of a GED certification program in high schools in Oregon produced a four percent decrease in graduation rates. Introduction of GED certificates in California increased dropout rates by 3 points. The GED program induces high school students to drop out. PMID:24634564

  6. Automated coregistration of MTI spectral bands

    NASA Astrophysics Data System (ADS)

    Theiler, James P.; Galbraith, Amy E.; Pope, Paul A.; Ramsey, Keri A.; Szymanski, John J.

    2002-08-01

    In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. To do this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to tweak the precision of the band-to-band registration using correlations in the imagery itself.
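    The closing remark, refining band-to-band registration using correlations in the imagery itself, can be illustrated with a minimal 1-D sketch. This is a generic correlation-peak search, not the MTI software: the residual offset between two band profiles is taken as the integer shift that maximises their correlation.

```python
def best_shift(ref, img, max_shift=3):
    """Return the integer shift s that best aligns img to ref,
    i.e. the s maximising sum(ref[i] * img[i + s])."""
    best_s, best_c = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        c = 0
        for i in range(len(ref)):
            j = i + s
            if 0 <= j < len(img):
                c += ref[i] * img[j]
        if c > best_c:
            best_s, best_c = s, c
    return best_s

band_a = [0, 0, 1, 5, 1, 0, 0, 0]
band_b = [0, 0, 0, 0, 1, 5, 1, 0]   # same feature, 2 pixels to the right
print(best_shift(band_a, band_b))   # 2
```

    In practice the same idea is applied in 2-D over image tiles, with sub-pixel refinement around the correlation peak; the dead-reckoning solution provides the starting point and the correlation supplies the residual tweak.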

  7. Dynamic stretching and golf swing performance.

    PubMed

    Moran, K A; McGrath, T; Marshall, B M; Wallace, E S

    2009-02-01

    The aim of the present study was to examine the effect of dynamic stretching, static stretching and no stretching, as part of a general warm-up, on golf swing performance with a five-iron. Measures of performance were taken 0 min, 5 min, 15 min and 30 min after stretching. Dynamic stretching produced significantly greater club head speeds than both static stretching (Δ = 1.9 m·s⁻¹; p = 0.000) and no stretching (Δ = 1.7 m·s⁻¹; p = 0.000), and greater ball speeds than both static stretching (Δ = 3.5 m·s⁻¹; p = 0.003) and no stretching (Δ = 3.3 m·s⁻¹; p = 0.001). Dynamic stretching produced significantly straighter swing paths than both static stretching (Δ = -0.61°; p = 0.000) and no stretching (Δ = -0.72°; p = 0.01). Dynamic stretching also produced more central impact points than static stretching (Δ = 0.7 cm; p = 0.001). For the club face angle, there was no effect of either stretch or time. For all of the variables measured, there was no significant difference between the static stretch and no stretch conditions. All of the results were unaffected by the time of measurement after stretching. The results indicate that dynamic stretching should be used as part of a general warm-up in golf.

  8. Timing at peak force may be the hidden target controlled in continuation and synchronization tapping.

    PubMed

    Du, Yue; Clark, Jane E; Whitall, Jill

    2017-05-01

    Timing control, such as producing movements at a given rate or synchronizing movements to an external event, has been studied through a finger-tapping task where timing is measured at the initial contact between finger and tapping surface or the point when a key is pressed. However, the point of peak force is after the time registered at the tapping surface and thus is a less obvious but still an important event during finger tapping. Here, we compared the time at initial contact with the time at peak force as participants tapped their finger on a force sensor at a given rate after the metronome was turned off (continuation task) or in synchrony with the metronome (sensorimotor synchronization task). We found that, in the continuation task, timing was comparably accurate between initial contact and peak force. These two timing events also exhibited similar trial-by-trial statistical dependence (i.e., lag-one autocorrelation). However, the central clock variability was lower at the peak force than the initial contact. In the synchronization task, timing control at peak force appeared to be less variable and more accurate than that at initial contact. In addition to lower central clock variability, the mean SE magnitude at peak force (SEP) was around zero while SE at initial contact (SEC) was negative. Although SEC and SEP demonstrated the same trial-by-trial statistical dependence, we found that participants adjusted the time of tapping to correct SEP, but not SEC, toward zero. These results suggest that timing at peak force is a meaningful target of timing control, particularly in synchronization tapping. This result may explain the fact that SE at initial contact is typically negative as widely observed in the preexisting literature.
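    The trial-by-trial statistical dependence discussed above is the lag-one autocorrelation of successive inter-tap intervals. A minimal sketch follows (illustrative, not the authors' analysis code); the comment notes the classic Wing-Kristofferson decomposition that motivates separating "central clock" from motor variability.

```python
def lag1_autocorr(intervals):
    """Lag-one autocorrelation of a series of inter-tap intervals."""
    n = len(intervals)
    m = sum(intervals) / n
    num = sum((intervals[i] - m) * (intervals[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in intervals)
    return num / den

# In the Wing-Kristofferson two-level timing model, the lag-one
# autocovariance estimates minus the motor-delay variance, and
# clock variance = total interval variance + 2 * lag-one autocovariance.
taps = [500, 510, 495, 505, 498, 507, 496]  # hypothetical intervals in ms
print(round(lag1_autocorr(taps), 3))
```

    Alternating long-short interval sequences, as produced by independent motor delays on each tap, give the negative lag-one values typically reported in continuation tapping.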

  9. Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners.

    PubMed

    Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J

    2011-05-21

    Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). 
These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.

  10. Flow convergence caused by a salinity minimum in a tidal channel

    USGS Publications Warehouse

    Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey

    2006-01-01

    Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. 
(3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.

  11. Harmonic Fluxes and Electromagnetic Forces of Concentric Winding Brushless Permanent Magnet Motor

    NASA Astrophysics Data System (ADS)

    Ishibashi, Fuminori; Takemasa, Ryo; Matsushita, Makoto; Nishizawa, Takashi; Noda, Shinichi

    Brushless permanent magnet motors are widely used in home appliances and industrial fields. High-efficiency, low-noise motors are now demanded from an environmental viewpoint. Electromagnetic noise and iron loss in these motors are produced by harmonic fluxes and electromagnetic forces, yet the order and spatial pattern of these have not been discussed in detail. In this paper, the fluxes, electromagnetic forces and magnetomotive forces of brushless permanent magnet motors with concentric windings were analyzed analytically, experimentally and numerically. Time-harmonic fluxes and electromagnetic forces in the air gap were measured by search coils on the inner surface of the stator teeth and analyzed by FEM. The spatial patterns of the time-harmonic fluxes and electromagnetic forces were determined from the experiments and FEM. The magnetomotive forces due to the concentric winding were derived analytically and checked by FEM.

  12. Economic Efficiency and Investment Timing for Dual Water Systems

    NASA Astrophysics Data System (ADS)

    Leconte, Robert; Hughes, Trevor C.; Narayanan, Rangesan

    1987-10-01

    A general methodology to evaluate the economic feasibility of dual water systems is presented. In a first step, a static analysis (evaluation at a single point in time) is developed. The analysis requires the evaluation of consumers' and producer's surpluses from water use and the capital cost of the dual (outdoor) system. The analysis is then extended to a dynamic approach where the water demand increases with time (as a result of a population increase) and where the dual system is allowed to expand. The model determines whether construction of a dual system represents a net benefit, and if so, what is the best time to initiate the system (corresponding to maximization of social welfare). Conditions under which an analytic solution is possible are discussed and results of an application are summarized (including sensitivity to different parameters). The analysis allows identification of key parameters influencing attractiveness of dual water systems.

  13. Brewer spectrometer total ozone column measurements in Sodankylä

    NASA Astrophysics Data System (ADS)

    Karppinen, Tomi; Lakkala, Kaisa; Karhu, Juha M.; Heikkinen, Pauli; Kivi, Rigel; Kyrö, Esko

    2016-06-01

    Brewer total ozone column measurements started in Sodankylä in May 1988, nine months after the signing of the Montreal Protocol. The Brewer instrument has been well maintained and frequently calibrated since then, producing a high-quality ozone time series now spanning more than 25 years. The data between 1988 and 2014 have been uniformly reprocessed. Data quality has been assured by automatic rejection rules as well as by manual checking. Daily mean values calculated from the highest-quality direct-sun measurements are available 77 % of the time, with up to 75 measurements per day on clear days. Zenith-sky measurements fill another 14 % of the time series, and the winter months are sparsely covered by moon measurements. The time series provides information for surveying the evolution of the Arctic ozone layer and can be used as a reference point for assessing other total ozone column measurement practices.
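    The reprocessing step described above, automatic rejection followed by a daily mean of the best scans, reduces to something like the sketch below. The thresholds are illustrative placeholders, not the station's actual rejection rules.

```python
def daily_mean(scans_du, lo=100, hi=600):
    """Average a day's direct-sun total-ozone scans after discarding
    physically implausible values (illustrative limits in Dobson units).
    Returns None when no scan survives the quality filter."""
    good = [v for v in scans_du if lo <= v <= hi]
    return sum(good) / len(good) if good else None

# One bad retrieval (9999 DU) is rejected before averaging.
print(daily_mean([310, 305, 9999, 298]))
```

    A manual-checking pass would then flag days whose surviving scan count or spread looks suspicious, mirroring the two-stage quality assurance in the abstract.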

  14. Atypical Cities

    ERIC Educational Resources Information Center

    DiJulio, Betsy

    2011-01-01

    In this creative challenge, Surrealism and one-point perspective combine to produce images that not only go "beyond the real" but also beyond the ubiquitous "imaginary city" assignment often used to teach one-point perspective. Perhaps the difference is that in the "atypical cities challenge," an understanding of one-point perspective is a means…

  15. A Telemetry Browser Built with Java Components

    NASA Astrophysics Data System (ADS)

    Poupart, E.

    In the context of CNES balloon scientific campaigns and telemetry monitoring, a generic telemetry-processing product, called TelemetryBrowser in the following, was developed by reusing COTS, mostly Java components. The connection between these components relies on a software architecture based on parameter producers and parameter consumers: a producer transmits parameter values to each consumer that has registered with it. Producers and consumers can be spread over the network thanks to Corba, and run on any kind of workstation thanks to Java. This gives a very powerful means of adapting to constraints such as network bandwidth or workstation processing power and memory. It is also very useful for displaying and correlating, at the same time, information coming from multiple and various sources. An important point of this architecture is that the coupling between parameter producers and parameter consumers is reduced to a minimum and that transmission of information over the network is asynchronous. So if a parameter consumer goes down or runs slowly, there is no consequence for the other consumers, because producers do not wait for their consumers to finish processing before sending data to other consumers. Another interesting point is that parameter producers, also called TelemetryServers in the following, are generated nearly automatically from a telemetry description using the Flavor(i) component. Keywords: Java components, Corba, distributed application, OpenORB(ii), software reuse, COTS, Internet, Flavor. (i) Flavor (Formal Language for Audio-Visual Object Representation) is an object-oriented media representation language being developed at Columbia University. It is designed as an extension of Java and C++ and simplifies the development of applications that involve a significant media-processing component (encoding, decoding, editing, manipulation, etc.) by providing bitstream representation semantics.
(flavor.sourceforge.net) (ii) OpenORB provides a Java implementation of the OMG Corba 2.4.2 specification (openorb.sourceforge.net)

  16. Double-pulse THz radiation bursts from laser-plasma acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosch, R. A.

    2006-11-15

    A model is presented for coherent THz radiation produced when an electron bunch undergoes laser-plasma acceleration and then exits axially from a plasma column. Radiation produced when the bunch is accelerated is superimposed with transition radiation from the bunch exiting the plasma. Computations give a double-pulse burst of radiation comparable to recent observations. The duration of each pulse very nearly equals the electron bunch length, while the time separation between pulses is proportional to the distance between the points where the bunch is accelerated and where it exits the plasma. The relative magnitude of the two pulses depends upon the radius of the plasma column. Thus, the radiation bursts may be useful in diagnosing the electron bunch length, the location of the bunch's acceleration, and the plasma radius.

  17. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
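    The core recommendation, accepting a slower acquisition rate in exchange for a set-point order that does not confound slow systematic drift with variable effects, can be sketched generically (this is an illustration of randomised run ordering, not the paper's specific procedure):

```python
import random

def randomized_run_order(set_points, seed=None):
    """Shuffle the order in which independent-variable levels are visited.
    Sequential ordering maximises data-acquisition rate but lets slow
    facility drift masquerade as a variable effect; a randomised order
    converts that drift into noise that averages out across the design."""
    order = list(set_points)
    random.Random(seed).shuffle(order)
    return order

# Hypothetical wind-tunnel grid: Mach number crossed with angle of attack.
levels = [(m, a) for m in (0.3, 0.5, 0.7) for a in (0, 2, 4)]
print(randomized_run_order(levels, seed=42))
```

    Seeding the generator keeps the run schedule reproducible for test planning while still breaking the monotone sweep that systematic error exploits.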

  18. Methods for Detecting Microbial Methane Production and Consumption by Gas Chromatography.

    PubMed

    Aldridge, Jared T; Catlett, Jennie L; Smith, Megan L; Buan, Nicole R

    2016-04-05

    Methane is an energy-dense fuel but is also a greenhouse gas 25 times more detrimental to the environment than CO2. Methane can be produced abiotically by serpentinization, chemically by Sabatier or Fischer-Tropsch chemistry, or biotically by microbes (Berndt et al., 1996; Horita and Berndt, 1999; Dry, 2002; Wolfe, 1982; Thauer, 1998; Metcalf et al., 2002). Methanogens are anaerobic archaea that grow by producing methane gas as a metabolic byproduct (Wolfe, 1982; Thauer, 1998). Our lab has developed and optimized three different gas chromatography assays to characterize methanogen metabolism (Catlett et al., 2015). Here we describe the end-point and kinetic assays that can be used to measure methane production by methanogens or methane consumption by methanotrophic microbes. The protocols can be used for measuring methane production or consumption by microbial pure cultures or by enrichment cultures.
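    In a kinetic assay of this kind, the quantity of interest is the methane production (or consumption) rate, i.e. the slope of headspace methane versus time across repeated GC samples. A minimal least-squares sketch, with hypothetical units and values rather than the protocol's calibration:

```python
def production_rate(times_h, methane_umol):
    """Least-squares slope of methane amount vs. time: the production
    rate in µmol per hour for a kinetic gas-chromatography series."""
    n = len(times_h)
    mt = sum(times_h) / n
    my = sum(methane_umol) / n
    num = sum((t - mt) * (y - my) for t, y in zip(times_h, methane_umol))
    den = sum((t - mt) ** 2 for t in times_h)
    return num / den

# Hypothetical hourly headspace samples from a methanogen culture.
print(round(production_rate([0, 1, 2, 3], [0.0, 2.1, 3.9, 6.0]), 2))  # 1.98
```

    An end-point assay is the degenerate two-sample case of the same calculation; the kinetic series additionally reveals whether the culture is still in its linear growth phase.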

  19. The DES Science Verification Weak Lensing Shear Catalogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarvis, M.

    We present weak lensing shear catalogs for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogs of 2.12 million and 3.44 million galaxies respectively. We also detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. Furthermore, we discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogs for the full 5-year DES, which is expected to cover 5000 square degrees.

  20. The DES Science Verification Weak Lensing Shear Catalogs

    DOE PAGES

    Jarvis, M.

    2016-05-01

    We present weak lensing shear catalogs for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogs of 2.12 million and 3.44 million galaxies respectively. We also detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. Furthermore, we discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogs for the full 5-year DES, which is expected to cover 5000 square degrees.

  1. The lead time tradeoff: the case of health states better than dead.

    PubMed

    Pinto-Prades, José Luis; Rodríguez-Míguez, Eva

    2015-04-01

    Lead time tradeoff (L-TTO) is a variant of the time tradeoff (TTO). L-TTO introduces a lead period in full health before illness onset, avoiding the need to use 2 different procedures for states better and worse than dead. To estimate utilities, additive separability is assumed. We tested to what extent violations of this assumption can bias utilities estimated with L-TTO. A sample of 500 members of the Spanish general population evaluated 24 health states, using face-to-face interviews. A total of 188 subjects were interviewed with L-TTO and the rest with TTO. Both samples evaluated the same set of 24 health states, divided into 4 groups with 6 health states per set. Each subject evaluated 1 of the sets. A random effects regression model was fitted to our data. Only health states better than dead were included in the regression since it is in this subset where additive separability can be tested clearly. Utilities were higher in L-TTO in relation to TTO (on average L-TTO adds about 0.2 points to the utility of health states), suggesting that additive separability is violated. The difference between methods increased with the severity of the health state. Thus, L-TTO adds about 0.14 points to the average utility of the less severe states, 0.23 to the intermediate states, and 0.28 points to the more severe states. L-TTO produced higher utilities than TTO. Health problems are perceived as less severe if a lead period in full health is added upfront, implying that there are interactions between disjointed time periods. The advantages of this method have to be compared with the cost of modeling the interaction between periods. © The Author(s) 2014.

  2. The electrolytic inferior vena cava model (EIM) to study thrombogenesis and thrombus resolution with continuous blood flow in the mouse

    PubMed Central

    Diaz, Jose A.; Alvarado, Christine M.; Wrobleski, Shirley K.; Slack, Dallas W.; Hawley, Angela E.; Farris, Diana M.; Henke, Peter K.; Wakefield, Thomas W.; Myers, Daniel D.

    2016-01-01

    Summary Previously, we presented the electrolytic inferior vena cava (IVC) model (EIM) during acute venous thrombosis (VT). Here, we present our evaluation of the EIM for chronic VT time points in order to determine whether this model allows for the study of thrombus resolution. C57BL/6 mice (n=191) were utilised. In this model a copper wire, inserted into a 25-gauge needle, is placed in the distal IVC and another subcutaneously. An electrical current (250 µA for 15 minutes) activates the endothelial cells, inducing thrombogenesis. Ultrasound, thrombus weight (TW), vein wall leukocyte counts, vein wall thickness/fibrosis scoring, thrombus area and soluble P-selectin (sP-sel) measurements were performed at baseline and at days 1, 2, 4, 6, 9, 11 and 14 post EIM. A correlation between TW and sP-sel was also determined. A thrombus formed in each mouse undergoing EIM. Blood flow was documented by ultrasound at all time points. IVC thrombus size increased up to day 2 and then decreased over time, as shown by ultrasound, TW, and sP-sel levels. TW and sP-sel showed a strong positive correlation (r=0.48, p<0.0002). Vein wall neutrophils were the most common cell type present in acute VT (up to day 2), with monocytes becoming the most prevalent in chronic VT (from day 6 to day 14). Thrombus resolution was demonstrated by ultrasound, TW and thrombus area. In conclusion, the EIM produces a non-occlusive and consistent IVC thrombus, in the presence of constant blood flow, allowing for the study of VT at both acute and chronic time points. Thrombus resolution was demonstrated by all modalities utilised in this study. PMID:23571406

  3. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology comprises dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
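A minimal sketch of the out-of-sample accuracy measure used in the study, the mean absolute percentage error (MAPE); the weekly counts and predictions below are hypothetical, for illustration only.

```python
def mape(observed, predicted):
    """MAPE (%) over paired observations; weeks with zero observed
    counts are skipped to avoid division by zero."""
    pairs = [(o, p) for o, p in zip(observed, predicted) if o != 0]
    return 100.0 * sum(abs(o - p) / o for o, p in pairs) / len(pairs)

observed = [52, 60, 48, 55]    # hypothetical weekly dengue counts
predicted = [50, 63, 45, 55]   # hypothetical one-week-ahead predictions
print(round(mape(observed, predicted), 2))  # → 3.77
```

Low values of this statistic at a prediction point correspond to the low-volatility periods the abstract mentions.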

  4. An investigation into the performance of real-time GPS+GLONASS Precise Point Positioning (PPP) in New Zealand

    NASA Astrophysics Data System (ADS)

    Harima, Ken; Choy, Suelynn; Rizos, Chris; Kogure, Satoshi

    2017-09-01

    This paper presents an investigation into the performance of real-time Global Navigation Satellite Systems (GNSS) Precise Point Positioning (PPP) in New Zealand. The motivation of the research is to evaluate the feasibility of using the PPP technique and a satellite-based augmentation system, such as the Japanese Quasi-Zenith Satellite System (QZSS), to deliver a real-time precise positioning solution in support of nation-wide high-accuracy GNSS positioning coverage in New Zealand. Two IGS real-time correction streams are evaluated alongside the PPP correction messages transmitted by the QZSS satellite, known as MDC1. The MDC1 correction stream is generated by the Japan Aerospace Exploration Agency (JAXA) using the Multi-GNSS Advanced Demonstration tool for Orbit and Clock Analysis (MADOCA) software and is currently transmitted in test mode by the QZSS satellite. The IGS real-time streams are the CLK9B correction stream generated by the French Centre National d'Études Spatiales (CNES) using the PPP-Wizard software, and the CLK81 correction stream produced by GMV using their MagicGNSS software. GNSS data were collected from six New Zealand CORS stations operated by Land Information New Zealand (LINZ) over a one-week period in 2015. GPS and GLONASS measurements were processed in real-time PPP mode using the satellite orbit and clock corrections from the real-time streams. The results show that positioning accuracies of 6 cm in the horizontal component and 15 cm in the vertical component can be achieved in real-time PPP. The real-time GPS+GLONASS PPP solution required 30 minutes to converge to within 10 cm horizontal positioning accuracy.
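One common way to quantify the 30-minute convergence quoted above is the first epoch after which the horizontal error stays below the threshold; a minimal sketch, with a hypothetical error series:

```python
# "Convergence time" as the first epoch of sustained sub-threshold
# horizontal error (10 cm here, matching the abstract).
def convergence_epoch(horizontal_errors_m, threshold=0.10):
    for i in range(len(horizontal_errors_m)):
        if all(e < threshold for e in horizontal_errors_m[i:]):
            return i  # first epoch of sustained sub-threshold accuracy
    return None       # never converged within the series

errs = [0.80, 0.40, 0.15, 0.09, 0.12, 0.08, 0.07, 0.06]
print(convergence_epoch(errs))  # → 5
```

Note that a single excursion above the threshold (epoch 4 above) resets the convergence point, which is why sustained accuracy is required.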

  5. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone-beam CT (CBCT), evaluating three-dimensional changes after treatment by superimposition became possible. Four-point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks under the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and had undergone CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference coordinate system. Another 15 reference cephalometric points were also determined three times in the same image. Reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark towards the reference axes and the locating error. The 4-point plane orientation system may produce an amount of reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
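The regression fit reported above (three explanatory factors, roughly 23% of variance explained) can be illustrated with ordinary least squares; all data and coefficients below are synthetic, not the study's values.

```python
import numpy as np

# Synthetic example: reorientation error regressed on three factors
# (distance to the x-axis, locating error, reference-axis shift).
rng = np.random.default_rng(0)
n = 60
X = rng.random((n, 3))                      # three explanatory factors
y = X @ np.array([0.8, 1.2, 0.5]) + 0.3 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), X])        # add an intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

resid = y - A @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(beta.round(2), round(float(r2), 2))   # coefficients and R^2
```

With noisier synthetic data the coefficient of determination drops toward the modest value the study reports.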

  6. Applications of UAS-SfM for coastal vulnerability assessment: Geomorphic feature extraction and land cover classification from fine-scale elevation and imagery data

    NASA Astrophysics Data System (ADS)

    Sturdivant, E. J.; Lentz, E. E.; Thieler, E. R.; Remsen, D.; Miner, S.

    2016-12-01

    Characterizing the vulnerability of coastal systems to storm events, chronic change and sea-level rise can be improved with high-resolution data that capture timely snapshots of biogeomorphology. Imagery acquired with unmanned aerial systems (UAS) coupled with structure from motion (SfM) photogrammetry can produce high-resolution topographic and visual reflectance datasets that rival or exceed lidar and orthoimagery. Here we compare SfM-derived data to lidar and visual imagery for their utility in a) geomorphic feature extraction and b) land cover classification for coastal habitat assessment. At a beach and wetland site on Cape Cod, Massachusetts, we used UAS to capture photographs over a 15-hectare coastal area with a resulting pixel resolution of 2.5 cm. We used standard SfM processing in Agisoft PhotoScan to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM). The SfM-derived products have a horizontal uncertainty of +/- 2.8 cm. Using the point cloud in an extraction routine developed for lidar data, we determined the position of shorelines, dune crests, and dune toes. We used the output imagery and DEM to map land cover with a pixel-based supervised classification. The dense and highly precise SfM point cloud enabled extraction of geomorphic features with greater detail than with lidar. The feature positions are reported with near-continuous coverage and sub-meter accuracy. The orthomosaic image produced with SfM provides visual reflectance with higher resolution than those available from aerial flight surveys, which enables visual identification of small features and thus aids the training and validation of the automated classification. 
We find that the high resolution and correspondingly high density of UAS data require some simple modifications to existing measurement techniques and processing workflows, and that the types of data and the quality provided are equivalent to, and in some cases surpass, those of data collected using other methods.

  7. Integration of ERS and ASAR Time Series for Differential Interferometric SAR Analysis

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.; Wiesmann, A.

    2005-12-01

    Time series SAR interferometric analysis requires SAR data with good temporal sampling covering the time period of interest. The ERS satellites operated by ESA have acquired a large global archive of C-band SAR data since 1991. The ASAR C-band instrument aboard the ENVISAT platform launched in 2002 operates in the same orbit as ERS-1 and ERS-2 and has largely replaced the remaining operational ERS-2 satellite. However, interferometry between data acquired by ERS and ASAR is complicated by a 31 MHz offset in the radar center frequency between the instruments, leading to decorrelation over distributed targets. Only in rare instances, when the baseline exceeds 1 km, can the spectral shift compensate for the difference in the frequencies of the SAR instruments to produce visible fringes. Conversely, point targets do not decorrelate due to the frequency offset, making it possible to incorporate the ERS-ASAR phase information and obtain improved temporal coverage. We present an algorithm for interferometric point target analysis that integrates ERS-ERS, ASAR-ASAR and ERS-ASAR data. Initial analysis using the ERS-ERS data is used to identify the phase-stable point-like scatterers within the scene. Height corrections relative to the initial DEM are derived by regression of the residual interferometric phases with respect to perpendicular baseline for a set of ERS-ERS interferograms. The ASAR images are coregistered with the ERS scenes and the point phase values are extracted. The different pixel spacing values of the ERS and ASAR systems require additional refinement in the offset estimation and resampling procedure. Calculation of the ERS-ASAR simulated phase used to derive the differential interferometric phase must take into account the slightly different carrier frequencies. Differential ERS-ASAR point phases contain an additional phase component related to the scatterer location within the resolution element. 
This additional phase varies over several cycles making the differential interferogram appear as uniform phase noise. We present how this point phase difference can be determined and used to correct the ERS-ASAR interferograms. Further processing proceeds as with standard ERS-ERS interferogram stacks utilizing the unwrapped point phases to obtain estimates of the deformation history, and path delay due to variations in tropospheric water vapor. We show and discuss examples demonstrating the success of this approach.

  8. Temperature and Species Measurements of Combustion Produced by a 9-Point Lean Direct Injector

    NASA Technical Reports Server (NTRS)

    Tedder, Sarah A.; Hicks, Yolanda R.; Locke, Randy J.

    2013-01-01

    This paper presents measurements of temperature and relative species concentrations in the combustion flowfield of a 9-point swirl venturi lean direct injector fueled with JP-8. The temperature and relative species concentrations of the flame produced by the injector were measured using spontaneous Raman scattering (SRS). Results of measurements taken at four flame conditions are presented. The species concentrations reported are measured relative to nitrogen and include oxygen, carbon dioxide, and water.

  9. On the upper part load vortex rope in Francis turbine: Experimental investigation

    NASA Astrophysics Data System (ADS)

    Nicolet, C.; Zobeiri, A.; Maruzewski, P.; Avellan, F.

    2010-08-01

    The swirling flow developing in the Francis turbine draft tube under part-load operation leads to pressure fluctuations, usually in the range of 0.2 to 0.4 times the runner rotational frequency, resulting from the so-called vortex breakdown. For low cavitation numbers, the flow features a cavitation vortex rope animated by a precession motion. Under given conditions, these pressure fluctuations may lead to undesirable pressure fluctuations in the entire hydraulic system and also produce active power oscillations. For the upper part-load range, between 0.7 and 0.85 times the best-efficiency discharge, pressure fluctuations may appear in a higher frequency range of 2 to 4 times the runner rotational speed and feature modulations with the vortex rope precession. It has been pointed out that for this particular operating point the vortex rope features an elliptical cross-section and is animated by a self-rotation. This paper presents an experimental investigation focusing on this peculiar phenomenon, defined as the upper part-load vortex rope. The experimental investigation is carried out on a high-specific-speed Francis turbine scale model installed on a test rig of the EPFL Laboratory for Hydraulic Machines. The selected operating point corresponds to a discharge of 0.83 times the best-efficiency discharge. Observations of the cavitation vortex carried out with a high-speed camera have been recorded and synchronized with pressure fluctuation measurements at the draft tube cone. First, the vortex rope self-rotation is evidenced and the related frequency is deduced. Then, the influence of the sigma cavitation number on the vortex rope shape and pressure fluctuations is presented. The waterfall diagram of the pressure fluctuations evidences resonance effects with the hydraulic circuit. The time evolution of the vortex rope volume is compared with the pressure fluctuation time evolution using image processing. 
Finally, the influence of the Froude number on the vortex rope shape and the associated pressure fluctuations is analyzed by varying the rotational speed.

  10. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, without offering complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic, high-quality images and can noticeably reduce the computation time owing to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the backward ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  11. Magnetospheric disturbance effects on the Equatorial Ionization Anomaly (EIA) : an overview

    NASA Astrophysics Data System (ADS)

    Abdu, M. A.; Sobral, J. H. A.; de Paula, E. R.; Batista, I. S.

    1992-12-01

    The Equatorial Ionization Anomaly (EIA) development can undergo drastic modification in the form of an anomalous occurrence at local times outside those of its quiet-time development, and/or inhibition or enhancement at the local times of its normal occurrence. This happens under disturbed electrodynamic conditions of the global ionosphere-thermosphere-magnetosphere system, consequent upon the triggering of a magnetospheric storm event. Direct penetration of magnetospheric electric fields to equatorial latitudes, and thermospheric disturbances involving winds, electric fields and composition changes, produce significant alterations in EIA morphology and dynamics. Results on statistical behaviour based on accumulated ground-based data sets, and those from recent theoretical modelling efforts and from satellite and ground-based observations, are reviewed. Some outstanding problems of the EIA response to magnetospheric disturbances that deserve attention in the coming years are pointed out.

  12. The Hot-Pressing of Hafnium Carbide (Melting Point, 7030 F)

    NASA Technical Reports Server (NTRS)

    Sanders, William A.; Grisaffe, Salvatore J.

    1960-01-01

    An investigation was undertaken to determine the effects of the hot-pressing variables (temperature, pressure, and time) on the density and grain size of hafnium carbide disks. The purpose was to provide information necessary for the production of high-density test shapes for the determination of physical and mechanical properties. Hot-pressing of -325 mesh hafnium carbide powder was accomplished with a hydraulic press and an inductively heated graphite die assembly. The ranges investigated for each variable were as follows: temperature, 3500 to 4870 F; pressure, 1000 to 6030 pounds per square inch; and time, 5 to 60 minutes. Hafnium carbide bodies of approximately 98 percent theoretical density can be produced under the following minimal conditions: 4230 F, 3500 pounds per square inch, and 15 minutes. Further increases in temperature and time resulted only in greater grain size.

  13. Relationship between seismic status of Earth and relative position of bodies in sun-earth-moon system

    NASA Astrophysics Data System (ADS)

    Kulanin, N. V.

    1985-03-01

    The time spectrum of variations in seismicity is quite broad: there are seismic seasons as well as multiannual variations. The range of characteristic times of variation from days to about one year is studied, examining seismic activity as a function of the position of the Moon relative to the Earth and the direction toward the Sun. The moments of strong earthquakes, over 5.8 on the Richter scale, between 1968 and June 1980 are plotted in time coordinates relating them to the relative positions of the three bodies in the Sun-Earth-Moon system. Methods of mathematical statistics applied to the points produced indicate at least 99% probability that the distribution is not random. A periodicity of the Earth's seismic state of 413 days is observed.

  14. Repliscan: a tool for classifying replication timing regions.

    PubMed

    Zynda, Gregory J; Song, Jawon; Concia, Lorenzo; Wear, Emily E; Hanley-Bowdoin, Linda; Thompson, William F; Vaughn, Matthew W

    2017-08-07

    Replication timing experiments that use label incorporation and high throughput sequencing produce peaked data similar to ChIP-Seq experiments. However, the differences in experimental design, coverage density, and possible results make traditional ChIP-Seq analysis methods inappropriate for use with replication timing. To accurately detect and classify regions of replication across the genome, we present Repliscan. Repliscan robustly normalizes, automatically removes outlying and uninformative data points, and classifies Repli-seq signals into discrete combinations of replication signatures. The quality control steps and self-fitting methods make Repliscan generally applicable and more robust than previous methods that classify regions based on thresholds. Repliscan is simple and effective to use on organisms with different genome sizes. Even with analysis window sizes as small as 1 kilobase, reliable profiles can be generated with as little as 2.4x coverage.
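A minimal sketch, not Repliscan's actual implementation, of the idea of classifying a genomic window into a discrete combination of replication signatures: every timing label whose normalized signal clears a cutoff contributes to the window's class.

```python
# Toy classifier for one genomic window; labels and cutoff are
# illustrative, not Repliscan's defaults.
def classify_window(signals, cutoff=0.5):
    """signals: dict mapping a timing label (e.g. 'early') to a
    normalized replication signal for one window."""
    kept = [label for label, v in sorted(signals.items()) if v >= cutoff]
    return "-".join(kept) if kept else "unclassified"

print(classify_window({"early": 0.9, "middle": 0.6, "late": 0.1}))  # → early-middle
```

Repliscan's contribution, per the abstract, is making this step robust: normalizing the signals, discarding outlying and uninformative windows, and fitting the cutoff automatically rather than using fixed thresholds.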

  15. Phototherapy for Improvement of Performance and Exercise Recovery: Comparison of 3 Commercially Available Devices.

    PubMed

    De Marchi, Thiago; Schmitt, Vinicius Mazzochi; Danúbia da Silva Fabro, Carla; da Silva, Larissa Lopes; Sene, Juliane; Tairova, Olga; Salvador, Mirian

    2017-05-01

    Context: Recent studies suggest the prophylactic use of low-powered laser/light has ergogenic effects on athletic performance and postactivity recovery. Manufacturers of high-powered laser/light devices claim that these can produce the same clinical benefits with increased power and decreased irradiation time; however, research with high-powered lasers is lacking. Objective: To evaluate the magnitude of observed phototherapeutic effects with 3 commercially available devices. Design: Randomized double-blind placebo-controlled study. Setting: Laboratory. Participants: Forty healthy untrained male participants. Interventions: Participants were randomized into 4 groups: placebo, high-powered continuous laser/light, low-powered continuous laser/light, or low-powered pulsed laser/light (comprising both lasers and light-emitting diodes). A single dose of 180 J or placebo was applied to the quadriceps. Main Outcome Measures: Maximum voluntary contraction, delayed-onset muscle soreness (DOMS), and creatine kinase (CK) activity from baseline to 96 hours after the eccentric exercise protocol. Results: Maximum voluntary contraction was maintained in the low-powered pulsed laser/light group compared with the placebo and high-powered continuous laser/light groups at all time points (P < .05). Low-powered pulsed laser/light demonstrated less DOMS than all groups at all time points (P < .05). High-powered continuous laser/light did not demonstrate any positive effects on maximum voluntary contraction, CK activity, or DOMS compared with any group at any time point. Creatine kinase activity was decreased in low-powered pulsed laser/light compared with placebo (P < .05) and high-powered continuous laser/light (P < .05) at all time points. High-powered continuous laser/light resulted in increased CK activity compared with placebo from 1 to 24 hours (P < .05). Conclusions: Low-powered pulsed laser/light demonstrated better results than either low-powered continuous laser/light or high-powered continuous laser/light in all outcome measures when compared with placebo. 
The increase in CK activity using the high-powered continuous laser/light compared with placebo warrants further research to investigate its effect on other factors related to muscle damage.

  16. Systemic Approach to Elevation Data Acquisition for Geophysical Survey Alignments in Hilly Terrains Using UAVs

    NASA Astrophysics Data System (ADS)

    Ismail, M. A. M.; Kumar, N. S.; Abidin, M. H. Z.; Madun, A.

    2018-04-01

    This study presents a systematic approach to photogrammetric surveying applicable to the extraction of elevation data for geophysical surveys in hilly terrains using Unmanned Aerial Vehicles (UAVs). The aim is to acquire high-quality geophysical data from areas of varying elevation by locating the best survey lines. The study area is located at the proposed construction site for the development of a water reservoir and related infrastructure in Kampus Pauh Putra, Universiti Malaysia Perlis. Seismic refraction surveys were carried out for the modelling of the subsurface for detailed site investigations. A study was carried out to identify the accuracy of the digital elevation model (DEM) produced from a UAV. At 100 m altitude (flying height), over 135 overlapping images were acquired using a DJI Phantom 3 quadcopter. All acquired images were processed for automatic 3D photo-reconstruction using Agisoft PhotoScan digital photogrammetric software, which was applied to all photogrammetric stages. The products generated included a 3D model, a dense point cloud, a mesh surface, a digital orthophoto, and a DEM. To validate the accuracy of the produced DEM, the coordinates of the selected ground control points (GCPs) of the survey line in the imaging area were extracted from the generated DEM with the aid of Global Mapper software. These coordinates were compared with the GCPs obtained using a real-time kinematic global positioning system. The maximum difference between the GCPs and the photogrammetric survey is 13.3%. UAVs are suitable for acquiring elevation data for geophysical surveys and can save time and cost.

  17. The effects of material loading and flow rate on the disinfection of pathogenic microorganisms using cation resin-silver nanoparticle filter system

    NASA Astrophysics Data System (ADS)

    Mpenyana-Monyatsi, L.; Mthombeni, N. H.; Onyango, M. S.; Momba, M. N. B.

    2017-08-01

    Waterborne diseases have a negative impact on public health in instances where the available drinking water is of poor quality. Decentralised systems are needed to provide safe drinking water to rural communities. Therefore, the present study aimed to develop and investigate a point-of-use (POU) water treatment filter packed with resin-coated silver nanoparticles. The filter performance was evaluated by investigating the effects of various bed masses (10 g, 15 g, 20 g) and flow rates (2 mL/min, 5 mL/min, 10 mL/min) by means of breakthrough curves for the removal efficiency of presumptive Escherichia coli, Shigella dysenteriae, Salmonella typhimurium and Vibrio cholerae from spiked groundwater samples. The results revealed that as the bed mass increases, the breakthrough time also increases for all targeted microorganisms; however, when the flow rate increases, the breakthrough time decreases. These tests demonstrated that resin-coated silver nanoparticles can be an effective material for removing all targeted microorganisms at 100% removal efficiency before breakthrough points are reached. Moreover, the filter system is capable of producing 15 L/day of treated water at an operating condition of a 10 mL/min flow rate and a 15 g bed mass, which is sufficient for seven individuals in a household consuming 2 L/person/day for drinking purposes. The bed mass of the filter system should therefore be increased in order for it to produce sufficient water to conform to the daily needs of the household.
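The quoted daily output can be checked with simple arithmetic: a continuous 10 mL/min flow corresponds to about 14.4 L/day, consistent with the roughly 15 L/day reported, and covers about seven people at 2 L/person/day.

```python
# Convert a continuous filter flow rate (mL/min) to litres per day.
def litres_per_day(flow_ml_per_min):
    return flow_ml_per_min * 60 * 24 / 1000.0  # mL/min → L/day

daily = litres_per_day(10)
print(daily, daily / 2.0)  # 14.4 L/day; 7.2 persons' drinking water
```
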

  18. Synergistic microbial consortium for bioenergy generation from complex natural energy sources.

    PubMed

    Wang, Victor Bochuan; Yam, Joey Kuok Hoong; Chua, Song-Lin; Zhang, Qichun; Cao, Bin; Chye, Joachim Loo Say; Yang, Liang

    2014-01-01

    Microbial species have evolved diverse mechanisms for utilization of complex carbon sources. Proper combination of targeted species can affect bioenergy production from natural waste products. Here, we established a stable microbial consortium with Escherichia coli and Shewanella oneidensis in microbial fuel cells (MFCs) to produce bioenergy from an abundant natural energy source, in the form of the sarcocarp harvested from coconuts. This component is mostly discarded as waste. However, through its usage as a feedstock for MFCs to produce useful energy in this study, the sarcocarp can be utilized meaningfully. The monospecies S. oneidensis system was able to generate bioenergy in a short experimental time frame while the monospecies E. coli system generated significantly less bioenergy. A combination of E. coli and S. oneidensis in the ratio of 1:9 (v:v) significantly enhanced the experimental time frame and magnitude of bioenergy generation. The synergistic effect is suggested to arise from E. coli and S. oneidensis utilizing different nutrients as electron donors and from the effect of flavins secreted by S. oneidensis. Confocal images confirmed the presence of biofilms and pointed towards their importance in generating bioenergy in MFCs.

  19. Composition pulse time-of-flight mass flow sensor

    DOEpatents

    Harnett, Cindy K [Livermore, CA; Crocker, Robert W [Fremont, CA; Mosier, Bruce P [San Francisco, CA; Caton, Pamela F [Berkeley, CA; Stamps, James F [Livermore, CA

    2007-06-05

    A device for measuring fluid flow rates over a wide range of flow rates (<1 nL/min to >10 µL/min) and at pressures at least as great as 2,000 psi. The invention is particularly adapted for use in microfluidic systems. The device operates by producing compositional variations in the fluid, or pulses, that are subsequently detected downstream from the point of creation to derive a flow rate. Each pulse, comprising a small fluid volume whose composition is different from the mean composition of the fluid, can be created by electrochemical means, such as by electrolysis of a solvent, electrolysis of a dissolved species, or electrodialysis of a dissolved ionic species. Measurements of the conductivity of the fluid can be used to detect the arrival time of the pulses, from which the fluid flow rate can be determined. A pair of spaced-apart electrodes can be used to produce the electrochemical pulse. In those instances where it is desired to measure a wide range of fluid flow rates, a three-electrode configuration in which the electrodes are spaced at unequal distances has been found to be desirable.
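The measurement principle, deriving the volumetric flow rate from the transit time of a conductivity pulse between its creation point and a downstream detector, can be sketched as follows; the channel dimensions are illustrative, not taken from the patent.

```python
# Time-of-flight flow estimate: velocity = distance / transit time,
# volumetric rate = velocity x channel cross-section.
def flow_rate_nl_per_min(distance_um, transit_s, area_um2):
    velocity_um_s = distance_um / transit_s   # pulse velocity
    q_um3_s = velocity_um_s * area_um2        # volumetric rate, um^3/s
    return q_um3_s * 60 / 1e6                 # 1 nL = 1e6 um^3

# e.g. a pulse travels 500 um in 3 s through a 50 um x 20 um channel
print(round(flow_rate_nl_per_min(500, 3.0, 50 * 20), 2))  # → 10.0 nL/min
```

The unequal spacing of the three-electrode configuration extends the dynamic range: a short electrode-to-detector gap resolves fast flows, a long gap resolves slow ones.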

  20. AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis

    PubMed Central

    Boyle, Thomas J; Bao, Zhirong; Murray, John I; Araya, Carlos L; Waterston, Robert H

    2006-01-01

    Background: The invariant lineage of the nematode Caenorhabditis elegans has potential as a powerful tool for the description of mutant phenotypes and gene expression patterns. We previously described procedures for the imaging and automatic extraction of the cell lineage from C. elegans embryos. That method uses time-lapse confocal imaging of a strain expressing histone-GFP fusions; a software package, StarryNite, processes the thousands of images and produces output files that describe the location and lineage relationship of each nucleus at each time point. Results: We have developed a companion software package, AceTree, which links the images and the annotations using tree representations of the lineage. This facilitates curation and editing of the lineage. AceTree also contains powerful visualization and interpretive tools, such as space-filling models and tree-based expression patterning, that can be used to extract biological significance from the data. Conclusion: By pairing a fast lineaging program written in C with a user interface program written in Java we have produced a powerful software suite for exploring embryonic development. PMID:16740163

  1. AceTree: a tool for visual analysis of Caenorhabditis elegans embryogenesis.

    PubMed

    Boyle, Thomas J; Bao, Zhirong; Murray, John I; Araya, Carlos L; Waterston, Robert H

    2006-06-01

    The invariant lineage of the nematode Caenorhabditis elegans has potential as a powerful tool for the description of mutant phenotypes and gene expression patterns. We previously described procedures for the imaging and automatic extraction of the cell lineage from C. elegans embryos. That method uses time-lapse confocal imaging of a strain expressing histone-GFP fusions; a software package, StarryNite, processes the thousands of images and produces output files that describe the location and lineage relationship of each nucleus at each time point. We have developed a companion software package, AceTree, which links the images and the annotations using tree representations of the lineage. This facilitates curation and editing of the lineage. AceTree also contains powerful visualization and interpretive tools, such as space-filling models and tree-based expression patterning, that can be used to extract biological significance from the data. By pairing a fast lineaging program written in C with a user interface program written in Java we have produced a powerful software suite for exploring embryonic development.

  2. [Industrial social diseases in 19th century].

    PubMed

    Leoni, F

    1991-01-01

    The author illustrates the relations in Italy between industry and the medical-hygienic situation in the XIX century. Italy started industrial processes rather late, about 1840, and between 1840 and 1870, for the first time, a remarkable quantity of publications about working-class life conditions appeared. Special attention was given to spinning-mill workers, who - as Tonini, Ripa and Bonomi describe in their treatises - suffered very hard living and working conditions: cold, damp, and a very poor diet based on stale bread; furthermore, women had dangerous pregnancies and their babies were extremely undernourished because of bottle-feeding, made necessary by mothers being unable to take their infants with them. These conditions produced numerous gastric, rheumatic and respiratory diseases. At the end of the XIX century, Mantegazza defined, for the first time, professional diseases from a clinical and social point of view. Investigations acquired a more rigorous and scientific character by dividing into a series of subjects such as, for instance, the study of "unhealthy industries". Legislation was adapted quite late, producing the "Crispi act" in 1888.

  3. Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph

    NASA Astrophysics Data System (ADS)

    Savransky, D.; Groff, T. D.; Kasdin, N. J.

    2010-10-01

    We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source. We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.

  4. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job-shop scheduling with a makespan criterion is presented for a real case of customized flexible furniture production optimization, together with the genetic algorithm used for the scheduling optimization. A second case, simulation-based inventory control, describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of simulation-based decision-making information is discussed as well. All the cases are discussed from the optimization, modeling and learning points of view.
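    As a minimal stand-in for the makespan criterion mentioned above, the sketch below computes the makespan of a fixed job order on a 2-machine flow shop, a deliberately simplified model compared with the full job shop in the paper. A genetic algorithm like the one described would search over permutations of the job order (crossover and mutation on permutations) to minimize this value; the job data here are hypothetical.

```python
def flow_shop_makespan(jobs):
    """Makespan of a fixed job order on a 2-machine flow shop
    (a simplified stand-in for the full job-shop model)."""
    end_m1 = end_m2 = 0
    for p1, p2 in jobs:
        end_m1 += p1                       # machine 1 processes jobs back to back
        end_m2 = max(end_m2, end_m1) + p2  # machine 2 waits for machine 1's output
    return end_m2

# (processing time on machine 1, processing time on machine 2) per job
order = [(3, 2), (1, 4), (2, 1)]
print(flow_shop_makespan(order))  # → 10
```

    A GA would evaluate this function as its fitness for each candidate permutation of `order`.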

  5. Separation of Dynamics in the Free Energy Landscape

    NASA Astrophysics Data System (ADS)

    Ekimoto, Toru; Odagaki, Takashi; Yoshimori, Akira

    2008-02-01

    The dynamics of a representative point in a model free energy landscape (FEL) is analyzed by the Langevin equation with the FEL as the driving potential. From the detailed analysis of the generalized susceptibility, fast, slow and Johari-Goldstein (JG) processes are shown to be well described by the FEL. Namely, the fast process is determined by the stochastic motion confined in a basin of the FEL and the relaxation time is related to the curvature of the FEL at the bottom of the basin. The jump motion among basins gives rise to the slow relaxation whose relaxation time is determined by the distribution of the barriers in the FEL and the JG process is produced by weak modulation of the FEL.
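    The basin-confined fast motion and barrier-hopping slow motion described above can be mimicked with overdamped Langevin dynamics on a toy double-well landscape. This is a generic Euler-Maruyama sketch, not the authors' model; the potential, temperature, and step size are assumptions chosen for illustration.

```python
import math, random

def langevin_trajectory(grad_u, x0, temperature, dt, n_steps, seed=0):
    """Overdamped Langevin dynamics: dx = -U'(x) dt + sqrt(2 T dt) * N(0, 1)."""
    rng = random.Random(seed)
    x, traj = x0, [x0]
    for _ in range(n_steps):
        x += -grad_u(x) * dt + math.sqrt(2.0 * temperature * dt) * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Toy "landscape" U(x) = (x^2 - 1)^2 with basins at x = +/-1; the curvature at
# a basin bottom sets the fast in-basin relaxation, while rare hops over the
# barrier at x = 0 produce the slow relaxation between basins.
grad = lambda x: 4.0 * x * (x * x - 1.0)
traj = langevin_trajectory(grad, x0=1.0, temperature=0.05, dt=0.01, n_steps=5000)
```

    At this low temperature the representative point mostly rattles inside one basin; raising the temperature increases the hop rate, which is the FEL picture of the slow process.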

  6. Controlling Chaos Via Knowledge of Initial Condition for a Curved Structure

    NASA Technical Reports Server (NTRS)

    Maestrello, L.

    2000-01-01

    Nonlinear response of a flexible curved panel exhibiting bifurcation to fully developed chaos is demonstrated, along with its sensitivity to small perturbations of the initial conditions. The response is determined from the measured time series at two fixed points. The panel is forced by an external nonharmonic sound field, both multifrequency and monofrequency. A low-power, time-continuous feedback control, carefully tuned at each initial condition, produces large long-term effects on the dynamics, taming the chaos. Without knowledge of the initial conditions, control may instead be achieved by destructive interference; in this case, the control power is proportional to the loading power. Calculation of the correlation dimension and estimation of positive Lyapunov exponents serve, in practice, as the evidence of chaotic response.

  7. Statistical approaches to the analysis of point count data: a little extra information can go a long way

    Treesearch

    George L. Farnsworth; James D. Nichols; John R. Sauer; Steven G. Fancy; Kenneth H. Pollock; Susan A. Shriner; Theodore R. Simons

    2005-01-01

    Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point...

  8. Comparative Effectiveness of Tai Chi Versus Physical Therapy for Knee Osteoarthritis

    PubMed Central

    Wang, Chenchen; Schmid, Christopher H.; Iversen, Maura D.; Harvey, William F.; Fielding, Roger A.; Driban, Jeffrey B.; Price, Lori Lyn; Wong, John B.; Reid, Kieran F.; Rones, Ramel; McAlindon, Timothy

    2016-01-01

    Background Few remedies effectively treat long-term pain and disability from knee osteoarthritis. Studies suggest that Tai Chi alleviates symptoms, but no trials have directly compared Tai Chi with standard therapies for osteoarthritis. Objective To compare Tai Chi with standard physical therapy for patients with knee osteoarthritis. Design Randomized, 52-week, single-blind comparative effectiveness trial. (ClinicalTrials.gov: NCT01258985) Setting An urban tertiary care academic hospital. Patients 204 participants with symptomatic knee osteoarthritis (mean age, 60 years; 70% women; 53% white). Intervention Tai Chi (2 times per week for 12 weeks) or standard physical therapy (2 times per week for 6 weeks, followed by 6 weeks of monitored home exercise). Measurements The primary outcome was Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) score at 12 weeks. Secondary outcomes included physical function, depression, medication use, and quality of life. Results At 12 weeks, the WOMAC score was substantially reduced in both groups (Tai Chi, 167 points [95% CI, 145 to 190 points]; physical therapy, 143 points [CI, 119 to 167 points]). The between-group difference was not significant (24 points [CI, −10 to 58 points]). Both groups also showed similar clinically significant improvement in most secondary outcomes, and the benefits were maintained up to 52 weeks. Of note, the Tai Chi group had significantly greater improvements in depression and the physical component of quality of life. The benefit of Tai Chi was consistent across instructors. No serious adverse events occurred. Limitation Patients were aware of their treatment group assignment, and the generalizability of the findings to other settings remains undetermined. Conclusion Tai Chi produced beneficial effects similar to those of a standard course of physical therapy in the treatment of knee osteoarthritis. 
Primary Funding Source National Center for Complementary and Integrative Health of the National Institutes of Health. PMID:27183035

  9. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. We discuss a solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2); the solution consists of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it.
The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert from the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.
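    The quality-versus-time cost trade-off used to pick sub-problems can be sketched as a simple selection rule; the field names, values, and weight below are hypothetical, standing in for the learned cost model.

```python
def choose_subproblem(candidates, time_weight):
    """Pick the candidate sub-problem minimizing a cost that trades off
    expected solution-quality loss against the time needed to solve it."""
    return min(candidates, key=lambda c: c["quality_loss"] + time_weight * c["time"])

# Hypothetical specialized sub-problems with estimated quality loss and solve time
candidates = [
    {"name": "sub_a", "quality_loss": 0.05, "time": 4.0},
    {"name": "sub_b", "quality_loss": 0.20, "time": 0.5},
]
print(choose_subproblem(candidates, time_weight=0.1)["name"])  # → sub_b
```

    Changing `time_weight` shifts the selection toward higher-quality, slower sub-problems, which is the trade-off the induced selection rules encode.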

  10. Calibration sets selection strategy for the construction of robust PLS models for prediction of biodiesel/diesel blends physico-chemical properties using NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel

    2017-06-01

    Even though the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, including the whole variability of diesel samples from diverse production origins in the calibration sets still remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of samples of biodiesel/diesel blends from diverse origins, based on a binary code, principal component analysis (PCA) and the Kennard-Stone algorithm. Results show that using this methodology the models can keep their robustness over time. PLS calculations have been done using specialized chemometric software as well as the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models have been applied to the on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl ester (FAME) content, cloud point, boiling point at 95% recovery, flash point and sulphur content.
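    The Kennard-Stone selection mentioned above can be sketched in a few lines: seed the calibration set with the two most distant samples, then repeatedly add the sample whose nearest selected neighbour is farthest away. The toy 1-D "scores" below stand in for PCA-compressed NIR spectra; the real pipeline operates on principal-component scores of the spectra.

```python
def kennard_stone(samples, n_select):
    """Kennard-Stone: seed with the most distant pair of samples, then
    repeatedly add the sample maximizing the distance to its nearest
    already-selected neighbour (a max-min criterion)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    n = len(samples)
    # Seed with the most distant pair.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: d2(samples[p[0]], samples[p[1]]))
    selected = [i0, j0]
    while len(selected) < n_select:
        rest = [k for k in range(n) if k not in selected]
        nxt = max(rest, key=lambda k: min(d2(samples[k], samples[s]) for s in selected))
        selected.append(nxt)
    return selected

# Toy 1-D "scores" standing in for PCA-compressed spectra
pts = [(0.0,), (0.1,), (0.5,), (0.9,), (1.0,)]
print(kennard_stone(pts, 3))  # → [0, 4, 2]
```

    The max-min criterion spreads the calibration samples evenly over the score space, which is why the method helps cover the variability of diverse production origins.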

  11. FIRST-ORDER COSMOLOGICAL PERTURBATIONS ENGENDERED BY POINT-LIKE MASSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eingorn, Maxim, E-mail: maxim.eingorn@gmail.com

    2016-07-10

    In the framework of the concordance cosmological model, the first-order scalar and vector perturbations of the homogeneous background are derived in the weak gravitational field limit without any supplementary approximations. The sources of these perturbations (inhomogeneities) are presented in the discrete form of a system of separate point-like gravitating masses. The expressions found for the metric corrections are valid at all (sub-horizon and super-horizon) scales and converge at all points except at the locations of the sources. The average values of these metric corrections are zero (thus, first-order backreaction effects are absent). Both the Minkowski background limit and the Newtonian cosmological approximation are reached under certain well-defined conditions. An important feature of the velocity-independent part of the scalar perturbation is revealed: up to an additive constant, this part represents a sum of Yukawa potentials produced by inhomogeneities with the same finite time-dependent Yukawa interaction range. The suggested connection between this range and the homogeneity scale is briefly discussed along with other possible physical implications.
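    The sum-of-Yukawa-potentials structure noted above can be written down directly. The sketch below evaluates such a sum for hypothetical point masses, with units and physical constants omitted, to show the exponential screening beyond the shared interaction range that distinguishes it from a pure Newtonian 1/r sum.

```python
import math

def yukawa_sum(point, masses, screening_length):
    """Velocity-independent scalar perturbation (up to constants and an
    additive constant): a sum of Yukawa potentials, all sharing one finite
    interaction range."""
    total = 0.0
    for pos, m in masses:
        r = math.dist(point, pos)
        total += m * math.exp(-r / screening_length) / r
    return total

# Two hypothetical point masses; far beyond the screening length their
# contributions are exponentially suppressed
masses = [((0.0, 0.0, 0.0), 1.0), ((5.0, 0.0, 0.0), 2.0)]
near = yukawa_sum((1.0, 0.0, 0.0), masses, screening_length=1.0)
far = yukawa_sum((50.0, 0.0, 0.0), masses, screening_length=1.0)
```

    In the paper the screening length is time dependent and tied to the homogeneity scale; here it is a fixed illustrative parameter.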

  12. An optimization approach for observation association with systemic uncertainty applied to electro-optical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.

    2018-06-01

    The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method is demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
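    A minimal sketch of the Mahalanobis-distance-minimization idea, assuming a diagonal covariance and a pre-discretized admissible region. Both are simplifications: the paper optimizes over a reduced-dimension state space with full systemic uncertainty, and all numbers below are hypothetical.

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance for a diagonal covariance (one variance per axis)."""
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

def min_distance_point(region, mean, var):
    """Grid search for the admissible-region point of observation A nearest
    (in the Mahalanobis sense) to the state estimate from observation B."""
    return min(region, key=lambda p: mahalanobis_diag(p, mean, var))

# Hypothetical discretized admissible region from observation A, and the
# estimate/uncertainty from observation B
region_a = [(x * 0.1, 1.0 - x * 0.1) for x in range(11)]
mean_b, var_b = (0.4, 0.55), (0.01, 0.04)
best = min_distance_point(region_a, mean_b, var_b)
print(best, round(mahalanobis_diag(best, mean_b, var_b), 3))
```

    In the full method, the minimum distance found this way feeds a binary hypothesis test against a threshold set by the chosen false-alarm rate; a large minimum distance means the observations are not associated.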

  13. Theoretical model of dynamic spin polarization of nuclei coupled to paramagnetic point defects in diamond and silicon carbide

    NASA Astrophysics Data System (ADS)

    Ivády, Viktor; Szász, Krisztián; Falk, Abram L.; Klimov, Paul V.; Christle, David J.; Janzén, Erik; Abrikosov, Igor A.; Awschalom, David D.; Gali, Adam

    2015-09-01

    Dynamic nuclear spin polarization (DNP) mediated by paramagnetic point defects in semiconductors is a key resource for both initializing nuclear quantum memories and producing nuclear hyperpolarization. DNP is therefore an important process in the field of quantum-information processing, sensitivity-enhanced nuclear magnetic resonance, and nuclear-spin-based spintronics. DNP based on optical pumping of point defects has been demonstrated by using the electron spin of nitrogen-vacancy (NV) center in diamond, and more recently, by using divacancy and related defect spins in hexagonal silicon carbide (SiC). Here, we describe a general model for these optical DNP processes that allows the effects of many microscopic processes to be integrated. Applying this theory, we gain a deeper insight into dynamic nuclear spin polarization and the physics of diamond and SiC defects. Our results are in good agreement with experimental observations and provide a detailed and unified understanding. In particular, our findings show that the defect electron spin coherence times and excited state lifetimes are crucial factors in the entire DNP process.

  14. Experimental investigation on fuel properties of biodiesel prepared from cottonseed oil

    NASA Astrophysics Data System (ADS)

    Payl, Ashish Naha; Mashud, Mohammad

    2017-06-01

    In recent times the world's energy demands have been satisfied by coal, natural gas and petroleum, even though the prices of these fuels are escalating. If this continues, global recession is unavoidable and the depletion of world reserves will undoubtedly accelerate. Biodiesel has recently been found to be a more sustainable, non-toxic and energy-efficient alternative, which is also biodegradable. The use of biofuels in compression-ignition engines is now receiving attention as a replacement for petrochemical fuels. In view of this, cottonseed oil is quite a favorable candidate as an alternative fuel. The present study covers various aspects of biodiesel fuel prepared from cottonseed oil. In this work, biodiesel was prepared from cottonseed oil through a transesterification process with methanol, using sodium hydroxide as catalyst. The fuel properties of the cottonseed oil methyl esters (kinematic viscosity, flash point, density, calorific value, boiling point, etc.) were evaluated and discussed in comparison with conventional diesel fuel. The properties of biodiesel produced from cottonseed oil are quite close to those of diesel, except for the flash point. Thus, the methyl esters of cottonseed oil can be used in existing diesel engines without any modifications.

  15. Fast generation of complex modulation video holograms using temporal redundancy compression and hybrid point-source/wave-field approaches

    NASA Astrophysics Data System (ADS)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2015-09-01

    The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
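    The temporal-redundancy step can be sketched as a frame-differencing pass that keeps only the scene points whose intensity or depth changed since the previous frame; the hologram contribution of unchanged points is simply reused. The point-keying and tolerance conventions below are invented for illustration.

```python
def changed_points(prev_frame, curr_frame, tol=0.0):
    """Keep only scene points whose (intensity, depth) changed since the last
    frame; unchanged points need no recomputation of their hologram pattern."""
    updates = {}
    for key, (intensity, depth) in curr_frame.items():
        old = prev_frame.get(key)
        if old is None or abs(old[0] - intensity) > tol or abs(old[1] - depth) > tol:
            updates[key] = (intensity, depth)
    return updates

# Hypothetical frames: point (0, 0) is unchanged, (0, 1) changed intensity,
# and (1, 1) is new in the current frame
prev = {(0, 0): (0.8, 1.0), (0, 1): (0.5, 2.0)}
curr = {(0, 0): (0.8, 1.0), (0, 1): (0.6, 2.0), (1, 1): (0.9, 1.5)}
print(sorted(changed_points(prev, curr)))  # → [(0, 1), (1, 1)]
```

    Only the returned points would then pass through the hybrid point-source/wave-field CGH computation, which is what makes interactive rates feasible.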

  16. Premedication with oral alprazolam and melatonin combination: a comparison with either alone--a randomized controlled factorial trial.

    PubMed

    Pokharel, Krishna; Tripathi, Mukesh; Gupta, Pramod Kumar; Bhattarai, Balkrishna; Khatiwada, Sindhu; Subedi, Asish

    2014-01-01

    We assessed whether the addition of melatonin to alprazolam has superior premedication effects compared to either drug alone. A prospective, double-blind, placebo-controlled trial randomly assigned 80 adult patients (ASA 1&2) with a Visual Analogue Score (VAS) for anxiety ≥ 3 to receive a tablet containing a combination of alprazolam 0.5 mg and melatonin 3 mg, alprazolam 0.5 mg, melatonin 3 mg, or placebo orally 90 min before a standard anesthetic. Primary end points were the change in anxiety and sedation scores at 15, 30, and 60 min after premedication, and the number of patients with loss of memory for the five pictures shown at various time points when assessed after 24 h. One-way ANOVA, Friedman repeated measures analysis of variance, Kruskal-Wallis and chi-square tests were used as relevant. The combination drug produced the maximum reduction in anxiety VAS (3 (1.0-4.3)) from baseline at 60 min (P < 0.05). Sedation scores at various time points and the number of patients not recognizing the picture shown at 60 min after premedication were comparable between the combination drug and alprazolam alone. The addition of melatonin to alprazolam provided superior anxiolysis compared with either drug alone or placebo. Adding melatonin worsened neither the sedation score nor the amnesic effect of alprazolam alone. This study was registered, approved, and released on ClinicalTrials.gov, identifier number NCT01486615.

  17. Behavioral characterization of 2-O-desmethyl and 5-O-desmethyl metabolites of the phenylethylamine hallucinogen DOM.

    PubMed

    Eckler, J R; Chang-Fong, J; Rabin, R A; Smith, C; Teitler, M; Glennon, R A; Winter, J C

    2003-07-01

    The present investigation was undertaken to test the hypothesis that known metabolites of the phenylethylamine hallucinogen 1-(2,5-dimethoxy-4-methylphenyl)-2-aminopropane (DOM) are pharmacologically active. This hypothesis was tested by evaluating the ability of racemic DOM metabolites 2-O-desmethyl DOM (2-DM-DOM) and 5-O-desmethyl DOM (5-DM-DOM) to substitute for the stimulus properties of (+)lysergic acid diethylamide (LSD). The data indicate that both metabolites are active in LSD-trained subjects and are significantly inhibited by the selective 5-HT(2A) receptor antagonist M100907. Full generalization of LSD to both 2-DM-DOM and 5-DM-DOM occurred, and 5-DM-DOM was slightly more potent than 2-DM-DOM. Similarly, 5-DM-DOM had a slightly higher affinity than 2-DM-DOM for both 5-HT(2A) and 5-HT(2C) receptors. Additionally, it was of interest to determine if the formation of active metabolite(s) resulted in a temporal delay associated with maximal stimulus effects of DOM. We postulated that if metabolite formation resulted in the aforementioned delay, direct administration of the metabolites might result in maximally stable stimulus effects at an earlier pretreatment time. This hypothesis was tested by evaluating (1) the time point at which DOM produces the greatest degree of LSD-appropriate responding, (2) the involvement of 5-HT(2A) receptor in the stimulus effects of DOM at various pretreatment times by administration of M100907 and (3) the ability of 2-DM-DOM and 5-DM-DOM to substitute for the stimulus properties of LSD using either 15- or 75-min pretreatment time. The data indicate that (a) the DOM stimulus produces the greatest degree of LSD-appropriate responding at the 75-min time point in comparison with earlier pretreatment times and (b) the stimulus effects of DOM are differentially antagonized by M100907 and this effect is a function of DOM pretreatment time prior to testing. 
Both 2-DM-DOM and 5-DM-DOM were found to be most active, at all doses tested, using a 75-min versus a 15-min pretreatment time. The present data do not permit unequivocal acceptance or rejection of the hypothesis that active metabolites of (-)-DOM provide a full explanation of the observed discrepancy between brain levels of (-)-DOM and maximal stimulus effects.

  18. A two-stage method of quantitative flood risk analysis for reservoir real-time operation using ensemble-based hydrologic forecasts

    NASA Astrophysics Data System (ADS)

    Liu, P.

    2013-12-01

    Quantitative analysis of the risk of reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts directly depict not only the marginal distributions of the inflows but also their persistence, via scenarios. This motivates us to analyze the reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed that uses the forecast horizon point to divide the future time into two stages, the forecast lead time and the unpredicted time. The risk within the forecast lead time is computed by counting the number of failing forecast scenarios, and the risk in the unpredicted time is estimated using reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up to quantify the entire flood risk by defining the ratio of the number of scenarios that exceed the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are propagated to produce ensemble-based hydrologic forecasts. Bayesian inference via Markov chain Monte Carlo is used to account for the parameter uncertainty. Two reservoir operation schemes, the real operation and a scenario-optimization scheme, are evaluated with respect to flood risks and hydropower profits. For the 2010 flood, it is found that improving the hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and most of the risk comes from the forecast lead time. It is therefore valuable to decrease the variance of ensemble-based hydrologic forecasts, with less bias, for reservoir operational purposes.
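    The stage-one risk obtained by counting failing scenarios reduces to a simple ratio over the ensemble; the water-level trajectories and critical level below are hypothetical numbers chosen for illustration.

```python
def lead_time_risk(scenarios, critical_level):
    """Stage 1: fraction of ensemble forecast scenarios whose peak reservoir
    water level exceeds the critical value within the forecast lead time."""
    failures = sum(1 for s in scenarios if max(s) > critical_level)
    return failures / len(scenarios)

# Hypothetical ensemble of water-level trajectories (m) over the lead time;
# one of the four scenarios exceeds the 165 m critical level
ensemble = [
    [158.0, 160.2, 161.0],
    [158.0, 163.5, 166.1],
    [158.0, 159.9, 162.3],
    [158.0, 164.0, 164.9],
]
print(lead_time_risk(ensemble, critical_level=165.0))  # → 0.25
```

    Stage two of the method would then route the design floods from the end-of-lead-time water levels to estimate the risk in the unpredicted period.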

  19. Can Real-Time Data Also Be Climate Quality?

    NASA Astrophysics Data System (ADS)

    Brewer, M.; Wentz, F. J.

    2015-12-01

    GMI, AMSR-2 and WindSat herald a new era of highly accurate and timely microwave data products. Traditionally, there has been a large divide between real-time and re-analysis data products. What if these completely separate processing systems could be merged? Through advanced modeling and physically based algorithms, Remote Sensing Systems (RSS) has narrowed the gap between real-time and research-quality data. Satellite microwave ocean products have proven useful for a wide array of timely Earth science applications. Through-cloud SST capabilities have enormously benefited tropical cyclone forecasting and day-to-day fisheries management, to name a few applications. Oceanic wind vectors enhance the operational safety of shipping and recreational boating. Atmospheric rivers are of import to many human endeavors, as are cloud cover and knowledge of precipitation events. Some activities benefit from both climate and real-time operational data used in conjunction. RSS has been consistently improving microwave Earth Science Data Records (ESDRs) for several decades, while making near real-time data publicly available for semi-operational use. These data streams have often been produced in two stages: near real-time, followed by research-quality final files. Over the years, we have seen this time delay shrink from months or weeks to mere hours. As well, we have seen the quality of near real-time data improve to the point where the distinction starts to blur. We continue to work towards better and faster RFI filtering, adaptive algorithms and improved real-time validation statistics for earlier detection of problems. Can it be possible to produce climate-quality data in real time, and what would the advantages be? We will try to answer these questions.

  20. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
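    The time-delay-embedding matrix described above, in which additional rows are created from time-delayed copies of an existing row, can be built in a few lines. This is a structural sketch only; the covariance matrices and the generalized eigendecomposition that yield the optimal spatial and temporal weights are omitted.

```python
def delay_embed(signal, n_delays):
    """Build a time-delay-embedding matrix: row k is the signal shifted by k
    samples, so each column is a window of n_delays successive time points."""
    n = len(signal) - n_delays + 1
    return [[signal[k + t] for t in range(n)] for k in range(n_delays)]

x = [1, 2, 3, 4, 5, 6]
for row in delay_embed(x, 3):
    print(row)
# → [1, 2, 3, 4]
#   [2, 3, 4, 5]
#   [3, 4, 5, 6]
```

    In the full method, the covariance of such an embedded matrix (for signal and reference data) enters a generalized eigendecomposition, and the leading eigenvector gives the optimal temporal basis function, so no sinusoidal narrowband filter is needed.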
