NASA Technical Reports Server (NTRS)
Taff, L. G.
1998-01-01
Since the announcement of the discovery of sources of bursts of gamma-ray radiation in 1973, hundreds more reports of such bursts have now been published. Numerous artificial satellites have been equipped with gamma-ray detectors including the very successful Compton Gamma Ray Observatory BATSE instrument. Unfortunately, we have made no progress in identifying the source(s) of this high energy radiation. We suspected that this was a consequence of the method used to define gamma-ray burst source "error boxes." An alternative procedure to compute gamma-ray burst source positions, with a purely physical underpinning, was proposed in 1988 by Taff. Since then we have also made significant progress in understanding the analytical nature of the triangulation problem and in computing actual gamma-ray burst positions and their corresponding error boxes. For the former, we can now mathematically illustrate the crucial role of the area occupied by the detectors, while for the latter, the Atteia et al. (1987) catalog has been completely re-reduced. There are very few discrepancies in locations between our results and those of the customary "time difference of arrival" procedure. Thus, we have numerically demonstrated that the end result, for the positions, of these two very different-looking procedures is the same. Finally, for the first time, we provide a sample of realistic "error boxes" whose non-simple shapes vividly portray the difficulty of burst source localization.
NASA Technical Reports Server (NTRS)
Boer, M.; Hurley, K.; Pizzichini, G.; Gottardi, M.
1991-01-01
Exosat observations are presented for 3 gamma-ray-burst error boxes, one of which may be associated with an optical flash. No point sources were detected at the 3-sigma level. A comparison with Einstein data (Pizzichini et al., 1986) is made for the March 5b, 1979 source. The data are interpreted in the framework of neutron star models, and upper limits are derived for the neutron star surface temperatures, accretion rates, and surface densities of an accretion disk. Apart from the March 5b, 1979 source, consistency is found with each model.
An Unusual Supernova in the Error Box of the Gamma-Ray Burst of 25 April 1998
NASA Technical Reports Server (NTRS)
Galama, T. J.; Vreeswijk, P. M.; van Paradijs, J.; Kouveliotou, C.; Augusteijn, T.; Boehnhardt, H.; Brewer, J. P.; Doublier, V.; Gonzalez, J.-F.; Leibundgut, B.;
1999-01-01
The discovery of afterglows associated with gamma-ray bursts at X-ray, optical and radio wavelengths and the measurement of the redshifts of some of these events have established that gamma-ray bursts lie at extreme distances, making them the most powerful photon emitters known in the Universe. Here we report the discovery of transient optical emission in the error box of the gamma-ray burst GRB980425, the light curve of which was very different from that of previous optical afterglows associated with gamma-ray bursts. The optical transient is located in a spiral arm of the galaxy ESO 184-G82, which has a redshift velocity of only 2,550 km/s. Its optical spectrum and location indicate that it is a very luminous supernova, which has been identified as SN1998bw. If this supernova and GRB980425 are indeed associated, the energy radiated in gamma-rays is at least four orders of magnitude less than in other gamma-ray bursts, although its appearance was otherwise unremarkable: this indicates that very different mechanisms can give rise to gamma-ray bursts. But independent of this association, the supernova is itself unusual, exhibiting a light curve at radio wavelengths that requires that the gas emitting the radio photons be expanding relativistically.
ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14
NASA Technical Reports Server (NTRS)
Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.
1997-01-01
The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 x 10(exp -14) ergs/sq cm/s. The closest source is approximately 3 arcmin away and has an estimated unabsorbed flux of 1.5 x 10(exp -12) ergs/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 deg away. For these reasons, we believe that the first error box is more likely to be the correct one.
Rossi X-Ray Timing Explorer All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Bradt, Hale V.; Levine, Alan M.
1999-07-01
The fourth unambiguously identified soft gamma repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15. Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19") annulus. We present two bursts from this source observed by the All-Sky Monitor (ASM) on the Rossi X-Ray Timing Explorer. We use the ASM data to further constrain the source location to a 5' long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3 arcmin of the supernova remnant G337.0-0.1. The probability that a supernova remnant would fall so close to the error box purely by chance is ~5%.
RXTE All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, D. A.; Bradt, H. V.; Levine, A. M.
1999-09-01
The fourth unambiguously identified Soft Gamma Repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15 (Kouveliotou et al. 1998). Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6 deg segment of a narrow (19 arcsec) annulus (Hurley et al. 1999; Woods et al. 1998). We report on two bursts from this source observed by the All-Sky Monitor (ASM) on RXTE. We use the ASM data to further constrain the source location to a 5 arcmin long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3 arcmin of the supernova remnant (SNR) G337.0-0.1. The probability that a SNR would fall so close to the error box purely by chance is ~5%.
NASA Technical Reports Server (NTRS)
Hurley, K.; Briggs, M.; Connaughton, V.; Meegan, C.; von Kienlin, A.; Rau, A.; Zhang, X.; Golenetskii, S.; Aptekar, R.; Mazets, E.;
2012-01-01
In the first two years of operation of the Fermi GBM, the 9-spacecraft Interplanetary Network (IPN) detected 158 GBM bursts with one or two distant spacecraft, and triangulated them to annuli or error boxes. Combining the IPN and GBM localizations leads to error boxes which are up to 4 orders of magnitude smaller than those of the GBM alone. These localizations comprise the IPN supplement to the GBM catalog, and they support a wide range of scientific investigations.
BALLERINA - doing Pirouettes for the Gamma-Bursts
NASA Astrophysics Data System (ADS)
Lund, Niels; Ballerina Consortium
1998-12-01
BALLERINA is a satellite project currently selected (together with 3 other candidates) for a five-month phase-A study within the Danish Small-Satellite Programme. BALLERINA combines an all-sky monitor yielding instantaneous half-degree-size error boxes with rapid maneuverability and a wide-field X-ray telescope. The project aims to study the transition phase from the gamma-burst to the afterglow phase, and to distribute sub-arcminute positions for the bursts in near real time. We expect to be able to lock on to the source with the X-ray telescope in less than 3 minutes from the trigger, and to provide the accurate burst position to the general astronomical community within 10 minutes. While waiting for the bursts we plan to study other transient and persistent X-ray sources.
A second catalog of gamma ray bursts: 1978 - 1980 localizations from the interplanetary network
NASA Technical Reports Server (NTRS)
Atteia, J. L.; Barat, C.; Hurley, K.; Niel, M.; Vedrenne, G.; Evans, W. D.; Fenimore, E. E.; Klebesadel, R. W.; Laros, J. G.; Cline, T. L.
1985-01-01
Eighty-two gamma ray bursts were detected between 1978 September 14 and 1980 February 13 by the experiments of the interplanetary network (Prognoz 7, Venera 11 and 12 SIGNE experiments, Pioneer Venus Orbiter, International Sun-Earth Explorer 3, Helios 2, and Vela). Sixty-five of these events have been localized to annuli or error boxes by the method of arrival time analysis. The distribution of sources is consistent with isotropy, and there is no statistically convincing evidence for the detection of more than one burst from any source position. The localizations are compared with those of two previous catalogs.
The InterPlanetary Network Supplement to the Second Fermi GBM Catalog of Cosmic Gamma-Ray Bursts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurley, K.; Aptekar, R. L.; Golenetskii, S. V.
InterPlanetary Network (IPN) data are presented for the gamma-ray bursts in the second Fermi Gamma-Ray Burst Monitor (GBM) catalog. Of the 462 bursts in that catalog between 2010 July 12 and 2012 July 11, 428, or 93%, were observed by at least 1 other instrument in the 9-spacecraft IPN. Of the 428, the localizations of 165 could be improved by triangulation. For these bursts, triangulation gives one or more annuli whose half-widths vary between about 2.3 arcmin and 16 deg, depending on the peak flux, fluence, time history, arrival direction, and the distance between the spacecraft. We compare the IPN localizations with the GBM 1σ, 2σ, and 3σ error contours and find good agreement between them. The IPN 3σ error boxes have areas between about 8 square arcminutes and 380 square degrees, and are an average of 2500 times smaller than the corresponding GBM 3σ localizations. We identify four bursts in the IPN/GBM sample whose origins were given as "uncertain," but may in fact be cosmic. This leads to an estimate of over 99% completeness for the GBM catalog.
The Interplanetary Network II: 11 Months of Rapid, Precise GRB Localizations
NASA Astrophysics Data System (ADS)
Hurley, K.; Cline, T.; Mazets, E.; Golenetskii, S.; Trombka, J.; Feroci, M.; Kippen, R. M.; Barthelmy, S.; Frontera, F.; Guidorzi, C.; Montanari, E.
2000-10-01
Since December 1999 the 3rd Interplanetary Network has been producing small (~10 arcmin) error boxes at a rate of about one per week, and circulating them rapidly (within ~24 h) via the GCN. As of June 2000, 24 such error boxes have been obtained; 18 of them have been searched in the radio and optical ranges for counterparts, resulting in four definite counterpart detections and three redshift determinations. We will review these results and explain some of the lesser-known IPN operations. In particular, we maintain an "early warning" list of potential observers with pagers and cell phones, and send messages to alert them to bursts for which error boxes will be obtained, allowing them to prepare for observations many hours before the complete spacecraft data are received and the GCN message is issued. As an interesting aside, now that the CGRO mission is terminated, the IPN consists entirely of non-NASA and/or non-astrophysics missions, specifically Ulysses and Wind (Space Physics), NEAR (Planetary Physics), and BeppoSAX (ASI).
NASA Technical Reports Server (NTRS)
Gorosabel, J.; Fynbo, J. U.; Hjorth, J.; Wolf, C.; Andersen, M. I.; Pedersen, H.; Christensen, L.; Jensen, B. L.; Moller, P.; Afonso, J.;
2001-01-01
We report the discovery of the optical and near-infrared counterpart to GRB 001011. The GRB 001011 error box determined by BeppoSAX was imaged simultaneously in the near-infrared by the 3.58-m New Technology Telescope and in the optical by the 1.54-m Danish Telescope about 8 hr after the gamma-ray event. We implement the colour-colour discrimination technique proposed by Rhoads (2001) and extend it using near-IR data as well. We present the results provided by an automatic colour-colour discrimination pipeline developed to discern the different populations of objects present in the GRB 001011 error box. Our software revealed three candidates based on single-epoch images. Second-epoch observations carried out approx. 3.2 days after the burst revealed that the most likely candidate had faded, thus identifying it with the counterpart to the GRB. In deep R-band images obtained 7 months after the burst, a faint (R = 25.38 +/- 0.25) elongated object, presumably the host galaxy of GRB 001011, was detected at the position of the afterglow. The GRB 001011 afterglow is the first discovered with the assistance of colour-colour diagram techniques. We discuss the advantages of using this method and its application to error boxes determined by future missions.
Constraints on an Optical Afterglow and on Supernova Light Following the Short Burst GRB 050813
NASA Technical Reports Server (NTRS)
Ferrero, P.; Sanchez, S. F.; Kann, D. A.; Klose, S.; Greiner, J.; Gorosabel, J.; Hartmann, D. H.; Henden, A. A.; Moller, P.; Palazzi, E.;
2006-01-01
We report early follow-up observations of the error box of the short burst GRB 050813 using the telescopes at Calar Alto and at Observatorio Sierra Nevada (OSN), followed by deep VLT/FORS2 I-band observations obtained under very good seeing conditions 5.7 and 11.7 days after the event. No evidence for a GRB afterglow was found in our Calar Alto and OSN data, and no rising supernova component was detected in our FORS2 images. A potential host galaxy can be identified in our FORS2 images, even though we cannot state with certainty its association with GRB 050813. In any case, the optical afterglow of GRB 050813 was very faint, in good agreement with what is known so far about the optical properties of afterglows of short bursts. We conclude that all optical data are not in conflict with the interpretation that GRB 050813 was a short burst.
First gravitational-wave burst GW150914: MASTER optical follow-up observations
NASA Astrophysics Data System (ADS)
Lipunov, V. M.; Kornilov, V.; Gorbovskoy, E.; Buckley, D. A. H.; Tiurina, N.; Balanutsa, P.; Kuznetsov, A.; Greiner, J.; Vladimirov, V.; Vlasenko, D.; Chazov, V.; Kuvshinov, D.; Gabovich, A.; Potter, S. B.; Kniazev, A.; Crawford, S.; Rebolo Lopez, R.; Serra-Ricart, M.; Israelian, G.; Lodieu, N.; Gress, O.; Budnev, N.; Ivanov, K.; Poleschuk, V.; Yazev, S.; Tlatov, A.; Senik, V.; Yurkov, V.; Dormidontov, D.; Parkhomenko, A.; Sergienko, Yu.; Podesta, R.; Levato, H.; Lopez, C.; Saffe, C.; Podesta, F.; Mallamaci, C.
2017-03-01
The Advanced LIGO observatory recently reported the first direct detection of the gravitational waves (GWs) predicted by Einstein (1916). We report on the first optical observations of the GW source GW150914 error region with the Global MASTER Robotic Net. Among the optical telescopes providing electromagnetic support, MASTER covered the largest area, to an unfiltered magnitude limit of 19.9 mag (5σ). We detected several optical transients, which proved to be unconnected with the GW event. The main contribution to the coverage of the final error box of GW150914 was made by the MASTER-SAAO robotic telescope, which covered 70 per cent of the final GW error box and 90 per cent of the common localization area of the LIGO and Fermi events. Our result is consistent with the conclusion (Abbott et al. 2016a) that the GWs from GW150914 were produced in a binary black hole merger. At the same time, we cannot exclude that MASTER OT J040938.68-541316.9 exploded on 2015 September 14.
NASA Technical Reports Server (NTRS)
Ricker, George R.
1990-01-01
The Energetic Transient Array (ETA) is a concept for a dedicated interplanetary network of about 40 microsatellites ('space buoys') deployed in an about 1 AU radius solar orbit for the observation of cosmic gamma ray bursts (GRBs). Such a network is essential for the determination of highly accurate (about 0.1 arcsec) error boxes for GRBs. For each of about 100 bursts which would be detectable per year of observation by such a network, high resolution spectra could be obtained through the use of passively-cooled Ge gamma-ray detectors. Stabilization of each microsatellite would be achieved by a novel technique based on the radiation pressure exerted on 'featherable' solar paddles. It should be possible to have a fully functional array of satellites in place before the end of the decade for a total cost of about $20M, exclusive of launcher fees.
Gamma-Ray Burst Host Galaxies Have "Normal" Luminosities.
Schaefer
2000-04-10
The galactic environment of gamma-ray bursts can provide good evidence about the nature of the progenitor system, with two old arguments implying that the burst host galaxies are significantly subluminous. New data and new analysis have now reversed this picture: (1) Even though the first two known host galaxies are indeed greatly subluminous, the next eight hosts have absolute magnitudes typical for a population of field galaxies. A detailed analysis of the 16 known hosts (10 with redshifts) shows them to be consistent with a Schechter luminosity function with R* = -21.8 +/- 1.0, as expected for normal galaxies. (2) Bright bursts from the Interplanetary Network are typically 18 times brighter than the faint bursts with redshifts; however, the bright bursts do not have galaxies inside their error boxes to limits deeper than expected based on the luminosities for the two samples being identical. A new solution to this dilemma is that a broad burst luminosity function along with a burst number density varying as the star formation rate will require the average luminosity of the bright sample (>6 x 10^58 photons s^-1 or >1.7 x 10^52 ergs s^-1) to be much greater than the average luminosity of the faint sample (approximately 10^58 photons s^-1 or approximately 3 x 10^51 ergs s^-1). This places the bright bursts at distances for which host galaxies with a normal luminosity will not violate the observed limits. In conclusion, all current evidence points to gamma-ray burst host galaxies being normal in luminosity.
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
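As background for the interleaving idea used above, the following minimal Python sketch (illustrative only, not the authors' decoder) shows how block interleaving spreads a contiguous burst of channel errors across several subcode words, so that each subcode sees only a short, correctable error pattern. The depth-4 interleaver and toy 8-symbol codewords are assumptions for illustration.

```python
import numpy as np

def interleave(codewords):
    """Write codewords as rows, transmit column by column (block interleaving)."""
    return np.asarray(codewords).T.flatten()

def deinterleave(stream, depth, length):
    """Invert the interleaver: recover one row per subcode word."""
    return stream.reshape(length, depth).T

# Four toy 8-symbol subcode words (depth-4 interleaving), assumed for illustration.
words = np.array([[i] * 8 for i in range(4)])   # each row is one subcode word
tx = interleave(words)

# A contiguous burst hits 4 consecutive channel symbols.
rx = tx.copy()
rx[5:9] = -1                                    # mark corrupted symbols

rows = deinterleave(rx, depth=4, length=8)
for r in rows:
    print(r, "-> corrupted symbols in this subcode word:", np.sum(r == -1))
# Each subcode word ends up with at most one corrupted symbol,
# which a single-error-correcting subcode could repair.
```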
A search for Fermi bursts associated with supernovae and their frequency of occurrence
NASA Astrophysics Data System (ADS)
Kovacevic, M.; Izzo, L.; Wang, Y.; Muccino, M.; Della Valle, M.; Amati, L.; Barbarino, C.; Enderli, M.; Pisani, G. B.; Li, L.
2014-09-01
Context. Observations suggest that most long-duration gamma-ray bursts (GRBs) are connected with broad-line supernovae Ib/c (SNe-Ibc). The presence of GRB-SNe is revealed by rebrightenings emerging from the optical GRB afterglow 10-15 days, in the rest frame of the source, after the prompt GRB emission. Aims: Fermi/GBM has a field of view (FoV) about 6.5 times larger than the FoV of Swift, therefore we expect that a number of GRB-SN connections have been missed because of the lack of optical and X-ray instruments on board Fermi, which are essential for revealing SNe associated with GRBs. This has motivated our search in the Fermi catalog for possible GRB-SN events. Methods: The search for possible GRB-SN associations follows two requirements: (1) the SN should fall inside the Fermi/GBM error box of the considered long GRB, and (2) this GRB should occur within 20 days before the SN event. Results: We have found five cases within z < 0.2 fulfilling the above requirements. One of them, GRB 130702A-SN 2013dx, was already known to have a GRB-SN association. We have analyzed the remaining four cases and concluded that three of them are very likely just random coincidences, owing to the large Fermi/GBM error box associated with each GRB detection. We found one GRB possibly associated with a SN 1998bw-like source, GRB 120121B/SN 2012ba. Conclusions: The very low redshift of GRB 120121B/SN 2012ba (z = 0.017) implies a low isotropic energy of this burst (Eiso = 1.39 x 10^48 erg). We then compute the rate of Fermi low-luminosity GRBs connected with SNe to be ρ_0,b ≤ 770 Gpc^-3 yr^-1. We estimate that Fermi/GBM could detect 1-4 GRB-SNe within z ≤ 0.2 in the next 4 years.
Gamma Ray Burst Optical Counterpart Search Experiment (GROCSE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, H.S.; Ables, E.; Bionta, R.M.
GROCSE (Gamma-Ray Optical Counterpart Search Experiments) is a system of automated telescopes that search for simultaneous optical activity associated with gamma ray bursts in response to real-time burst notifications provided by the BATSE/BACODINE network. The first generation system, GROCSE 1, is sensitive down to Mv ~ 8.5 and requires an average of 12 seconds to obtain the first images of the gamma ray burst error box defined by the BACODINE trigger. The collaboration is now constructing a second generation system which has a 4 second slewing time and can reach Mv ~ 14 with a 5 second exposure. GROCSE 2 consists of 4 cameras on a single mount. Each camera views the night sky through a commercial Canon lens (f/1.8, focal length 200 mm) and utilizes a 2K x 2K Loral CCD. Lightweight and low-noise custom readout electronics were designed and fabricated for these CCDs. The total field of view of the 4 cameras is 17.6 x 17.6 degrees. GROCSE 2 will be operational by the end of 1995. In this paper, the authors present an overview of the GROCSE system and the results of measurements with a GROCSE 2 prototype unit.
The microchannel x-ray telescope status
NASA Astrophysics Data System (ADS)
Götz, D.; Meuris, A.; Pinsard, F.; Doumayrou, E.; Tourrette, T.; Osborne, J. P.; Willingale, R.; Sykes, J. M.; Pearson, J. F.; Le Duigou, J. M.; Mercier, K.
2016-07-01
We present the design status of the Microchannel X-ray Telescope (MXT), the focusing X-ray telescope on board the Sino-French SVOM mission dedicated to Gamma-Ray Bursts. Its optical design is based on square micro-pore optics (MPOs) in a Lobster-Eye configuration. The optics will be coupled to a low-noise pnCCD sensitive in the 0.2-10 keV energy range. With an expected point spread function of 4.5 arcmin (FWHM) and an estimated sensitivity adequate to detect all the afterglows of the SVOM GRBs, MXT will be able to provide error boxes smaller than 60 arcsec (90% c.l.) after five minutes of observation.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
NASA Astrophysics Data System (ADS)
Ricker, George R.
1990-08-01
The Energetic Transient Array (ETA) is a concept for a dedicated interplanetary network of ~40 microsatellites (``space buoys'') deployed in an ~1 AU radius solar orbit for the observation of cosmic gamma ray bursts (GRBs). Such a network is essential for the determination of highly accurate (~0.1 arc sec) error boxes for GRBs. For each of ~100 bursts which would be detectable per year of observation by such a network, high resolution (ΔE/E ~0.2% at 1 MeV) spectra could be obtained through the use of passively-cooled Ge gamma-ray detectors. Stabilization of each microsatellite would be achieved by a novel technique based on the radiation pressure exerted on ``featherable'' solar paddles. Because of the simplicity of the microsats, as well as the economics of mass production and the failure tolerance of such a network of independent satellites, a unit cost of ~$250 K per microsat can be anticipated. Should such a project be undertaken in the mid 1990's, possibly as an International mission, it should be possible to have a fully functional array of satellites in place before the end of the decade for a total cost of ~$20M, exclusive of launcher fees.
Analysis of error-correction constraints in an optical disk.
Roberts, J D; Ryley, A; Jones, D M; Burke, D
1996-07-10
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Analysis of error-correction constraints in an optical disk
NASA Astrophysics Data System (ADS)
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
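To make the burst-correction idea above concrete, here is a minimal, hedged Python sketch using the third-party reedsolo package (an assumption; it is not the CD-ROM CIRC implementation and uses a single Reed Solomon code rather than the cross-interleaved product codes described in the paper).

```python
# pip install reedsolo  (third-party package; assumed available)
from reedsolo import RSCodec

rsc = RSCodec(16)                        # 16 parity bytes -> corrects up to 8 unknown byte errors
message = b"example data sector 0123456789"
encoded = bytearray(rsc.encode(message))

# Inject a contiguous 6-byte burst error into the codeword.
for i in range(10, 16):
    encoded[i] ^= 0xFF

decoded = rsc.decode(bytes(encoded))
# Recent reedsolo versions return (message, message+ecc, errata positions).
recovered = decoded[0] if isinstance(decoded, tuple) else decoded
assert bytes(recovered) == message
print("burst of 6 byte errors corrected")
```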
Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels
NASA Astrophysics Data System (ADS)
Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching
2006-12-01
This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of the channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase the end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over a burst-error channel have been compared, and the simulation results reveal that the proposed transmission schemes can react to varying channel conditions with less and smoother quality degradation.
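Burst-error channels of this kind are commonly modelled with a two-state Gilbert-Elliott chain; the following short Python sketch (a generic illustration, not the authors' analytic model; all transition probabilities and bit error rates are assumed values) simulates such a channel to show how errors cluster into bursts.

```python
import numpy as np

rng = np.random.default_rng(0)

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, ber_good=1e-5, ber_bad=0.1):
    """Simulate a two-state Gilbert-Elliott burst-error channel.
    p_gb: probability of moving good->bad; p_bg: probability of moving bad->good."""
    errors = np.zeros(n_bits, dtype=bool)
    bad = False
    for i in range(n_bits):
        bad = rng.random() < (1 - p_bg if bad else p_gb)     # state transition
        errors[i] = rng.random() < (ber_bad if bad else ber_good)
    return errors

err = gilbert_elliott(100_000)
print("overall BER:", err.mean())
# Errors arrive in clusters while the chain is in the bad state,
# which is what packet-size and UEP assignment must be tuned against.
```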
Fast radio burst tied to distant dwarf galaxy (Image 2)
2017-06-07
The radio telescope at Arecibo only localized the fast radio burst to the area inside the two circles in this image, but the Very Large Array was able to pinpoint it to a dwarf galaxy within the square (shown at the intersection of the cross hairs in the enlarged box).
Angular sensitivities of scintillator slab configurations for location of gamma ray bursts
NASA Technical Reports Server (NTRS)
Gregory, J. C.
1976-01-01
Thin flat scintillator slabs are a useful means of measuring the angular location of gamma ray fluxes of astronomical interest. A statistical estimate of position error was made for two scintillator systems suitable for gamma ray burst location from a balloon or satellite platform. A single rotating scintillator with an associated flux monitor is compared with a pair of stationary orthogonal scintillators. Position error for a strong burst is of the order of a few arcmin if systematic errors are ignored.
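As a rough illustration of how such a statistical position error can be estimated, the sketch below assumes a simplified model (my assumption, not the paper's detector geometry) in which two orthogonal thin slabs register counts proportional to the cosine of the burst angle, and propagates Poisson counting errors into the inferred angle.

```python
import numpy as np

def angle_and_error(n1, n2):
    """Burst azimuth from two orthogonal slabs with counts n1, n2 (simplified cosine response).
    Returns the angle and its 1-sigma Poisson uncertainty, in radians."""
    theta = np.arctan2(n2, n1)                     # n1 ~ F*cos(theta), n2 ~ F*sin(theta)
    # Propagate sqrt(N) Poisson errors through theta(n1, n2).
    dtheta = np.sqrt(n1 * n2 * (n1 + n2)) / (n1**2 + n2**2)
    return theta, dtheta

theta, dtheta = angle_and_error(4.0e5, 2.5e5)      # assumed counts for a strong burst
print(np.degrees(theta), "+/-", np.degrees(dtheta) * 60.0, "arcmin")
```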
NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and the neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease in response to an increase in BOX thickness for SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in literature, which include advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to existing algebraic and statistical analysis already used for image encryption applications, we propose an application of root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference in original data and the processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes
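As a concrete illustration of the root-mean-square-error measure discussed above, the following Python sketch (a generic example with synthetic data, not the authors' S-box pipeline) computes the RMSE between an original image and its encrypted version; a larger RMSE indicates that the processed image differs more strongly from the plaintext.

```python
import numpy as np

def rmse(original, processed):
    """Root mean square error between two equally sized 8-bit images."""
    a = original.astype(np.float64)
    b = processed.astype(np.float64)
    return np.sqrt(np.mean((a - b) ** 2))

rng = np.random.default_rng(1)
plain = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)    # stand-in plaintext image
cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # stand-in S-box-encrypted image

print("RMSE plaintext vs. ciphertext:", rmse(plain, cipher))
print("RMSE plaintext vs. itself    :", rmse(plain, plain))
```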
Assessment of individual hand performance in box trainers compared to virtual reality trainers.
Madan, Atul K; Frantzides, Constantine T; Shervin, Nina; Tebbit, Christopher L
2003-12-01
Training residents in laparoscopic skills is ideally initiated in an inanimate laboratory with both box trainers and virtual reality trainers. Virtual reality trainers have the ability to score individual hand performance, although they are expensive. Here we compared the ability to assess dominant and nondominant hand performance in box trainers with virtual reality trainers. Medical students without laparoscopic experience were utilized in this study (n = 16). Each student performed tasks on the LTS 2000, an inanimate box trainer (placing pegs with both hands and transferring pegs from one hand to another), as well as a task on the MIST-VR, a virtual reality trainer (grasping a virtual object and placing it in a virtual receptacle with alternating hands). A surgeon scored students for the inanimate box trainer exercises (time and errors) while the MIST-VR scored students (time, economy of movements, and errors for each hand). Statistical analysis included Pearson correlations. Errors and time for the one-handed tasks on the box trainer did not correlate with errors, time, or economy measured for each hand by the MIST-VR (r = 0.01 to 0.30; P = NS). Total errors on the virtual reality trainer did correlate with errors on transferring pegs (r = 0.61; P < 0.05). Economy and time of both dominant and nondominant hand from the MIST-VR correlated with time of transferring pegs in the box trainer (r = 0.53 to 0.77; P < 0.05). While individual hand assessment by the box trainer during 2-handed tasks was related to assessment by the virtual reality trainer, individual hand assessment during 1-handed tasks did not correlate with the virtual reality trainer. Virtual reality trainers, such as the MIST-VR, allow assessment of individual hand skills, which may lead to improved laparoscopic skill acquisition. It is difficult to assess individual hand performance with box trainers alone.
SHOK—The First Russian Wide-Field Optical Camera in Space
NASA Astrophysics Data System (ADS)
Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.
2018-02-01
Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, during, and after the gamma-ray emission. The field of view of each of the cameras is placed in the gamma-ray burst detection area of other devices located onboard the Lomonosov spacecraft. SHOK provides measurements of optical emission with a magnitude limit of ~9-10 mag on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths in a very wide field of view (1000 square degrees per camera), and for the detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including recording of optical emission from the gamma-ray burst error boxes both before and synchronously with the bursts detected by the BDRG device, initiated by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft carries two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-Megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.
PBF Reactor Building (PER620) basement. Workers wearing protective gear work ...
PBF Reactor Building (PER-620) basement. Workers wearing protective gear work inside cubicle 13 on the fission product detection system. Man on left is atop shielded box shown in previous photo. Posture of second man illustrates waist-high height of shielding box. His hand rests on the access panel, which has been filled with lead bricks and which has been slid shut to enclose detection instruments within box. Photographer: John Capek. Date: January 24, 1983. INEEL negative no. 83-41-3-5 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
The Discovery of the Electromagnetic Counterpart of GW170817: Kilonova AT 2017gfo/DLT17ck
NASA Astrophysics Data System (ADS)
Valenti, Stefano; Sand, David J.; Yang, Sheng; Cappellaro, Enrico; Tartaglia, Leonardo; Corsi, Alessandra; Jha, Saurabh W.; Reichart, Daniel E.; Haislip, Joshua; Kouprianov, Vladimir
2017-10-01
During the second observing run of the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo Interferometer, a gravitational-wave signal consistent with a binary neutron star coalescence was detected on 2017 August 17th (GW170817), quickly followed by a coincident short gamma-ray burst trigger detected by the Fermi satellite. The Distance Less Than 40 (DLT40) Mpc supernova search performed pointed follow-up observations of a sample of galaxies regularly monitored by the survey that fell within the combined LIGO+Virgo localization region and the larger Fermi gamma-ray burst error box. Here we report the discovery of a new optical transient (DLT17ck, also known as SSS17a; it has also been registered as AT 2017gfo) spatially and temporally coincident with GW170817. The photometric and spectroscopic evolution of DLT17ck is unique, with an absolute peak magnitude of M_r = -15.8 ± 0.1 and an r-band decline rate of 1.1 mag day^-1. This fast evolution is generically consistent with kilonova models, which have been predicted as the optical counterpart to binary neutron star coalescences. Analysis of archival DLT40 data does not show any sign of transient activity at the location of DLT17ck down to r ~ 19 mag in the time period between 8 months and 21 days prior to GW170817. This discovery represents the beginning of a new era for multi-messenger astronomy, opening a new path by which to study and understand binary neutron star coalescences, short gamma-ray bursts, and their optical counterparts.
Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow
Lacy, J.R.; Sherwood, C.R.
2004-01-01
The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm/s and currents up to 40 cm/s. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm/s (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm/s (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm/s (standard deviation = 4.4) for northward velocities, and 2.4 cm/s (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with u_b >= 50 cm/s. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed >= 5 cm/s have a correlation coefficient R^2 > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.
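The bias and RMS-error statistics quoted above can be reproduced for any pair of co-located velocity records with a few lines of Python; the sketch below (synthetic data, with an assumed burst of 1-Hz samples) shows the computation.

```python
import numpy as np

def burst_stats(v_pcadp, v_adv):
    """Bias (mean error) and RMS error of PCADP velocities relative to ADV velocities."""
    err = v_pcadp - v_adv
    return err.mean(), np.sqrt(np.mean(err ** 2))

rng = np.random.default_rng(2)
v_adv = 30 * np.sin(2 * np.pi * np.arange(600) / 10.0)      # synthetic 10-min burst, cm/s
v_pcadp = v_adv + rng.normal(-0.4, 6.0, v_adv.size)          # assumed bias and noise

bias, rms = burst_stats(v_pcadp, v_adv)
print(f"bias = {bias:.2f} cm/s, rms error = {rms:.2f} cm/s")
```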
Nada, Masahiro; Nakamura, Makoto; Matsuzaki, Hideaki
2014-01-13
25-Gbit/s error-free operation of an optical receiver is successfully demonstrated against burst-mode optical input signals without preambles. The receiver, with a high-sensitivity avalanche photodiode and burst-mode transimpedance amplifier, exhibits sufficient receiver sensitivity and an extremely quick response suitable for burst-mode operation in 100-Gbit/s optical packet switching.
The Discovery of the Electromagnetic Counterpart of GW170817: Kilonova AT 2017gfo/DLT17ck
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valenti, Stefano; Yang, Sheng; Tartaglia, Leonardo
During the second observing run of the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo Interferometer, a gravitational-wave signal consistent with a binary neutron star coalescence was detected on 2017 August 17th (GW170817), quickly followed by a coincident short gamma-ray burst trigger detected by the Fermi satellite. The Distance Less Than 40 (DLT40) Mpc supernova search performed pointed follow-up observations of a sample of galaxies regularly monitored by the survey that fell within the combined LIGO+Virgo localization region and the larger Fermi gamma-ray burst error box. Here we report the discovery of a new optical transient (DLT17ck, also known as SSS17a; it has also been registered as AT 2017gfo) spatially and temporally coincident with GW170817. The photometric and spectroscopic evolution of DLT17ck is unique, with an absolute peak magnitude of M_r = -15.8 ± 0.1 and an r-band decline rate of 1.1 mag day^-1. This fast evolution is generically consistent with kilonova models, which have been predicted as the optical counterpart to binary neutron star coalescences. Analysis of archival DLT40 data does not show any sign of transient activity at the location of DLT17ck down to r ~ 19 mag in the time period between 8 months and 21 days prior to GW170817. This discovery represents the beginning of a new era for multi-messenger astronomy, opening a new path by which to study and understand binary neutron star coalescences, short gamma-ray bursts, and their optical counterparts.
The Box Task: A tool to design experiments for assessing visuospatial working memory.
Kessels, Roy P C; Postma, Albert
2017-09-15
The present paper describes the Box Task, a paradigm for the computerized assessment of visuospatial working memory. In this task, hidden objects have to be searched for by opening closed boxes that are shown at different locations on the computer screen. The set size (i.e., the number of boxes that must be searched) can be varied, and different error scores can be computed that measure specific working memory processes (i.e., the number of within-search and between-search errors). The Box Task also has a developer's mode in which new stimulus displays can be designed for use in tailored experiments. The Box Task comes with a standard set of stimulus displays (including practice trials, as well as stimulus displays with 4, 6, and 8 boxes). The raw data can be analyzed easily and the results of individual participants can be aggregated into one spreadsheet for further statistical analyses.
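To illustrate how the two error scores might be computed from a trial log, here is a small Python sketch; the operational definitions in the comments (a within-search error as reopening a box already opened since the last find, and a between-search error as reopening a box where an object was already found) are my assumptions for illustration and should be checked against the task manual.

```python
def score_box_task(openings, find_events):
    """Count within-search and between-search errors for one Box Task trial.

    openings   : ordered list of box ids that were opened
    find_events: set of indices into `openings` at which an object was found
    """
    within, between = 0, 0
    found_boxes = set()          # boxes in which an object has already been found
    opened_this_search = set()
    for i, box in enumerate(openings):
        if box in found_boxes:
            between += 1         # returned to a box that already yielded an object
        elif box in opened_this_search:
            within += 1          # reopened an empty box within the current search
        opened_this_search.add(box)
        if i in find_events:     # object found: a new search begins
            found_boxes.add(box)
            opened_this_search = set()
    return within, between

# Example trial with 4 boxes (A-D); objects found on the 4th and 7th openings.
print(score_box_task(list("ABBCACD"), {3, 6}))   # -> (1, 1)
```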
Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)
NASA Technical Reports Server (NTRS)
Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.
2003-01-01
Two types of single event effects - bit errors and single event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts, with the average number of bits in a burst being dependent on both the ion LET and the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second rank latch containing the data specifying the state of each switch in the 33x17 matrix.
Intelligent Design versus Evolution
Aviezer, Nathan
2010-01-01
Intelligent Design (ID) burst onto the scene in 1996, with the publication of Darwin’s Black Box by Michael Behe. Since then, there has been a plethora of articles written about ID, both pro and con. However, most of the articles critical of ID deal with peripheral issues, such as whether ID is just another form of creationism or whether ID qualifies as science or whether ID should be taught in public schools. It is our view that the central issue is whether the basic claim of ID is correct. Our goal is fourfold: (I) to show that most of the proposed refutations of ID are unconvincing and/or incorrect, (II) to describe the single fundamental error of ID, (III) to discuss the historic tradition surrounding the ID controversy, showing that ID is an example of a “god-of-the-gaps” argument, and (IV) to place the ID controversy in the larger context of proposed proofs for the existence of God, with the emphasis on Jewish tradition. PMID:23908779
Intelligent Design versus Evolution.
Aviezer, Nathan
2010-07-01
Intelligent Design (ID) burst onto the scene in 1996, with the publication of Darwin's Black Box by Michael Behe. Since then, there has been a plethora of articles written about ID, both pro and con. However, most of the articles critical of ID deal with peripheral issues, such as whether ID is just another form of creationism or whether ID qualifies as science or whether ID should be taught in public schools. It is our view that the central issue is whether the basic claim of ID is correct. Our goal is fourfold: (I) to show that most of the proposed refutations of ID are unconvincing and/or incorrect, (II) to describe the single fundamental error of ID, (III) to discuss the historic tradition surrounding the ID controversy, showing that ID is an example of a "god-of-the-gaps" argument, and (IV) to place the ID controversy in the larger context of proposed proofs for the existence of God, with the emphasis on Jewish tradition.
Compensated Box-Jenkins transfer function for short term load forecast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breipohl, A.; Yu, Z.; Lee, F.N.
In past years, the Box-Jenkins ARIMA method and the Box-Jenkins transfer function (BJTF) method have been among the most commonly used methods for short term electrical load forecasting. But when there is a sudden change in the temperature, both methods tend to exhibit larger errors in the forecast. This paper demonstrates that the load forecasting errors resulting from either the BJ ARIMA model or the BJTF model are not simply white noise, but rather well-patterned noise, and the patterns in the noise can be used to improve the forecasts. Thus a compensated Box-Jenkins transfer function (CBJTF) method is proposed to improve the accuracy of the load prediction. Some case studies have been made which result in about a 14-33% reduction of the root mean square (RMS) errors of the forecasts, depending on the compensation time period as well as the compensation method used.
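For readers who want to experiment with a transfer-function style model, the sketch below fits an ARIMA model with an exogenous temperature regressor using the statsmodels package on synthetic data; it is a generic illustration, not the compensated CBJTF method of the paper, and the model order and data are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(n) / 24) + rng.normal(0, 1, n)   # hourly temperature
load = 100 + 2.5 * temp + rng.normal(0, 3, n)                                  # synthetic load series

# ARIMA(1,0,1) with temperature as an exogenous regressor (a simple transfer-function analogue).
model = sm.tsa.SARIMAX(load, exog=temp, order=(1, 0, 1))
result = model.fit(disp=False)

# One-step-ahead forecast given the next hour's temperature forecast.
next_temp = np.array([[22.0]])
print(result.forecast(steps=1, exog=next_temp))
```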
ERIC Educational Resources Information Center
Huitema, Bradley E.; McKean, Joseph W.
2007-01-01
Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…
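A minimal Python illustration of the autocorrelation diagnostics named above (except the Huitema-McKean test, which is not in standard libraries) can be put together with statsmodels; the regression model and data here are assumptions for demonstration.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(4)
n = 120
t = np.arange(n)
phase = (t >= 60).astype(float)                        # interrupted time series: pre/post intervention
y = 5 + 0.05 * t + 2.0 * phase + rng.normal(0, 1, n)   # synthetic outcome with independent errors

X = sm.add_constant(np.column_stack([t, phase]))
resid = sm.OLS(y, X).fit().resid

print("Durbin-Watson:", durbin_watson(resid))          # ~2 when errors are independent
# Ljung-Box and Box-Pierce statistics at lag 10 (returned together as a table).
print(acorr_ljungbox(resid, lags=[10], boxpierce=True))
```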
Cells and Hypotonic Solutions.
ERIC Educational Resources Information Center
Bery, Julia
1985-01-01
Describes a demonstration designed to help students better understand the response of plant and animal cells to hypotonic solutions. The demonstration uses a balloon inside a flexible, thin-walled cardboard box. Air going in corresponds to water entering by osmosis, and, like real cells, if stretched enough, the balloon will burst. (DH)
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type 1 error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type 1 error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type 1 error with respect to set alpha level=0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level=0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type 1 error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type 1 error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type 1 error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results.
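The Box-Cox transformation itself is available in SciPy; the following short sketch (synthetic skewed data, not the study's simulated sib-pair phenotypes) shows how a chi-square distributed phenotype can be transformed toward normality before linkage testing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
phenotype = rng.chisquare(df=2, size=2000)        # skewed phenotype, as in the chi2 scenario above

# Box-Cox requires strictly positive data; lambda is chosen by maximum likelihood.
transformed, lam = stats.boxcox(phenotype)

print("estimated lambda:", round(lam, 3))
print("skewness before :", round(stats.skew(phenotype), 3))
print("skewness after  :", round(stats.skew(transformed), 3))
```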
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
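One way to see the sampling-error effect described above is to subsample a continuous surface rain record at satellite-like revisit intervals and compare the subsampled monthly mean with the full mean; the Python sketch below does this with synthetic rain data (the rain statistics and revisit interval are assumed, and this is not the authors' error model).

```python
import numpy as np

rng = np.random.default_rng(6)
n_hours = 720                                    # one month of hourly "truth" rain rates
raining = rng.random(n_hours) < 0.1              # intermittent rain
rate = np.where(raining, rng.exponential(2.0, n_hours), 0.0)    # mm/h when raining

true_mean = rate.mean()
revisit = 12                                     # satellite samples the grid box every 12 h

# Try every possible overpass phase and collect the sampled monthly means.
sampled_means = np.array([rate[phase::revisit].mean() for phase in range(revisit)])
rms_sampling_error = np.sqrt(np.mean((sampled_means - true_mean) ** 2))

print(f"true monthly mean: {true_mean:.3f} mm/h")
print(f"RMS sampling error: {rms_sampling_error:.3f} mm/h")
```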
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
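As a companion to the component code mentioned above, here is a minimal Python sketch of an extended Hamming (8,4;4) encoder and single-error corrector; it illustrates only that building block under standard textbook conventions (my assumed parity arrangement), not the full (72,36;15) box-code construction or its seven- and eight-error decoding.

```python
def encode84(d):
    """Encode 4 data bits into an (8,4) extended Hamming codeword (minimum distance 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word7 = [p1, p2, d1, p3, d2, d3, d4]     # standard Hamming(7,4) layout
    overall = 0
    for b in word7:
        overall ^= b                          # overall parity bit extends distance 3 -> 4
    return word7 + [overall]

def correct84(r):
    """Correct a single bit error in the first 7 bits of an (8,4) extended Hamming word."""
    s = ((r[0] ^ r[2] ^ r[4] ^ r[6])
         | ((r[1] ^ r[2] ^ r[5] ^ r[6]) << 1)
         | ((r[3] ^ r[4] ^ r[5] ^ r[6]) << 2))
    if s:                                     # syndrome gives the 1-based error position
        r[s - 1] ^= 1
    return r

word = encode84([1, 0, 1, 1])
received = word.copy()
received[4] ^= 1                              # flip one bit
assert correct84(received) == word
print("single error corrected:", word)
```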
NASA Technical Reports Server (NTRS)
Hill, Eric v. K.; Walker, James L., II; Rowell, Ginger H.
1995-01-01
Acoustic emission (AE) data were taken during hydroproof for three sets of ASTM standard 5.75 inch diameter filament wound graphite/epoxy bottles. All three sets of bottles had the same design and were wound from the same graphite fiber; the only difference was in the epoxies used. Two of the epoxies had similar mechanical properties, and because the acoustic properties of materials are a function of their stiffnesses, it was thought that the AE data from the two sets might also be similar; however, this was not the case. Therefore, the three resin types were categorized using dummy variables, which allowed the prediction of burst pressures for all three sets of bottles using a single neural network. Three bottles from each set were used to train the network. The resin category, the AE amplitude distribution data taken up to 25% of the expected burst pressure, and the actual burst pressures were used as inputs. Architecturally, the network consisted of a forty-three neuron input layer (a single categorical variable defining the resin type plus forty-two continuous variables for the AE amplitude frequencies), a fifteen neuron hidden layer for mapping, and a single output neuron for burst pressure prediction. The network trained on all three bottle sets was able to predict burst pressures in the remaining bottles with a worst case error of +6.59%, slightly greater than the desired goal of +/-5%. This larger than desired error was due to poor resolution in the amplitude data for the third bottle set. When the third set of bottles was eliminated from consideration, only four hidden layer neurons were necessary to generate a worst case prediction error of -3.43%, well within the desired goal.
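A small multilayer-perceptron regressor with the same shape as the network described above (43 inputs, one 15-neuron hidden layer, one output) can be sketched with scikit-learn; the training data here are random stand-ins, since the actual AE amplitude distributions are not reproduced in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Stand-in training set: 9 bottles (3 per resin), 1 resin-category input + 42 AE amplitude bins.
X = np.hstack([rng.integers(0, 3, (9, 1)), rng.random((9, 42))])
burst_pressure = rng.uniform(1500, 2500, 9)            # assumed burst pressures, psi

net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)
net.fit(X, burst_pressure)

# Predict the burst pressure of a new bottle from its resin category and early AE amplitudes.
new_bottle = np.hstack([[1], rng.random(42)]).reshape(1, -1)
print("predicted burst pressure:", net.predict(new_bottle)[0])
```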
Blast and Fragments from Superpressure Vessel Rupture
1976-02-09
hemispheres. These pieces were accelerated to velocities of about 300 ft/second; about half the calculated fragment velocities. [Figure: FIG. 3.1 ESTIMATED DIRECTION OF ARGON JETTING] ... the box (line 2) are higher than along the other two lines, but no higher than was predicted for a free-field burst. Pressures behind the box are
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity-discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it allows reducing the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity-discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in critical safety applications, e.g., monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
Meteor burst communications for LPI applications
NASA Astrophysics Data System (ADS)
Schilling, D. L.; Apelewicz, T.; Lomp, G. R.; Lundberg, L. A.
A technique that enhances the performance of meteor-burst communications is described. The technique, the feedback adaptive variable rate (FAVR) system, maintains a feedback channel that allows the transmitted bit rate to mimic the time behavior of the received power so that a constant bit energy is maintained. This results in a constant probability of bit error in each transmitted bit. Experimentally determined meteor-burst channel characteristics and FAVR system simulation results are presented.
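A minimal sketch of the FAVR rate-adaptation rule implied above (keep the energy per bit constant by scaling the bit rate with received power); the function name, the exponentially decaying power profile and the rate limits are assumptions for illustration only:

```python
# Sketch of the FAVR idea (not the authors' implementation): scale the
# transmitted bit rate with the received power so that energy per bit, and
# hence the bit-error probability, stays roughly constant as the trail decays.
import numpy as np

def favr_bit_rate(received_power_w, target_eb_joules, r_min=100.0, r_max=1e6):
    """Bit rate (bit/s) that keeps Eb = P_rx / R at the target value."""
    rate = received_power_w / target_eb_joules
    return float(np.clip(rate, r_min, r_max))

# Hypothetical exponentially decaying meteor-burst trail.
t = np.linspace(0.0, 0.5, 6)                     # seconds
p_rx = 1e-9 * np.exp(-t / 0.15)                  # watts
for ti, pi in zip(t, p_rx):
    print(f"t={ti:0.2f}s  P_rx={pi:.2e} W  rate={favr_bit_rate(pi, 1e-15):,.0f} bit/s")
```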
Evaluating and improving the representation of heteroscedastic errors in hydrological models
NASA Astrophysics Data System (ADS)
McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.
2013-12-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that using a weighted least squares (WLS) approach - where the magnitude of the residuals is assumed to be linearly proportional to the magnitude of the flow - captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
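For concreteness, a small sketch of the linear WLS error model mentioned above, in which the residual standard deviation grows linearly with the simulated flow; the parameter values and flows are placeholders, not results from the study:

```python
# Sketch of a linear WLS heteroscedastic error model (illustration, not the
# authors' code): residual standard deviation assumed to grow with simulated
# flow, sigma_t = a + b * q_sim.
import numpy as np

def standardised_residuals(q_obs, q_sim, a=0.1, b=0.2):
    """Scale raw residuals by a flow-dependent standard deviation."""
    raw = q_obs - q_sim
    sigma = a + b * q_sim          # heteroscedastic error model
    return raw / sigma             # should be ~N(0, 1) if the model is adequate

# Hypothetical daily flows (mm/day).
q_sim = np.array([0.5, 1.0, 5.0, 20.0, 3.0])
q_obs = np.array([0.6, 0.8, 6.5, 15.0, 3.3])
print(standardised_residuals(q_obs, q_sim))
```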
Nagendran, Myura; Toon, Clare D; Davidson, Brian R; Gurusamy, Kurinchi Selvan
2014-01-17
Surgical training has traditionally been one of apprenticeship, where the surgical trainee learns to perform surgery under the supervision of a trained surgeon. This is time consuming, costly, and of variable effectiveness. Training using a box model physical simulator - either a video box or a mirrored box - is an option to supplement standard training. However, the impact of this modality on trainees with no prior laparoscopic experience is unknown. To compare the benefits and harms of box model training versus no training, another box model, animal model, or cadaveric model training for surgical trainees with no prior laparoscopic experience. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index Expanded to May 2013. We included all randomised clinical trials comparing box model trainers versus no training in surgical trainees with no prior laparoscopic experience. We also included trials comparing different methods of box model training. Two authors independently identified trials and collected data. We analysed the data with both the fixed-effect and the random-effects models using Review Manager for analysis. For each outcome, we calculated the standardised mean difference (SMD) with 95% confidence intervals (CI) based on intention-to-treat analysis whenever possible. Twenty-five trials contributed data to the quantitative synthesis in this review. All but one trial were at high risk of bias. Overall, 16 trials (464 participants) provided data for meta-analysis of box training (248 participants) versus no supplementary training (216 participants). All the 16 trials in this comparison used video trainers. Overall, 14 trials (382 participants) provided data for quantitative comparison of different methods of box training. There were no trials comparing box model training versus animal model or cadaveric model training. Box model training versus no training: The meta-analysis showed that the time taken for task completion was significantly shorter in the box trainer group than the control group (8 trials; 249 participants; SMD -0.48 seconds; 95% CI -0.74 to -0.22). Compared with the control group, the box trainer group also had lower error score (3 trials; 69 participants; SMD -0.69; 95% CI -1.21 to -0.17), better accuracy score (3 trials; 73 participants; SMD 0.67; 95% CI 0.18 to 1.17), and better composite performance scores (SMD 0.65; 95% CI 0.42 to 0.88). Three trials reported movement distance but could not be meta-analysed as they were not in a format for meta-analysis. There was significantly lower movement distance in the box model training compared with no training in one trial, and there were no significant differences in the movement distance between the two groups in the other two trials. None of the remaining secondary outcomes, such as mortality and morbidity when animal models were used for assessment of training, error in movements, and trainee satisfaction, were reported in the trials. Different methods of box training: One trial (36 participants) found significantly shorter time taken to complete the task when box training was performed using a simple cardboard box trainer compared with the standard pelvic trainer (SMD -3.79 seconds; 95% CI -4.92 to -2.65).
There was no significant difference in the time taken to complete the task in the remaining three comparisons (reverse alignment versus forward alignment box training; box trainer suturing versus box trainer drills; and single incision versus multiport box model training). There were no significant differences in the error score between the two groups in any of the comparisons (box trainer suturing versus box trainer drills; single incision versus multiport box model training; Z-maze box training versus U-maze box training). The only trial that reported accuracy score found significantly higher accuracy score with Z-maze box training than U-maze box training (1 trial; 16 participants; SMD 1.55; 95% CI 0.39 to 2.71). One trial (36 participants) found significantly higher composite score with simple cardboard box trainer compared with conventional pelvic trainer (SMD 0.87; 95% CI 0.19 to 1.56). Another trial (22 participants) found significantly higher composite score with reverse alignment compared with forward alignment box training (SMD 1.82; 95% CI 0.79 to 2.84). There were no significant differences in the composite score between the intervention and control groups in any of the remaining comparisons. None of the secondary outcomes were adequately reported in the trials. The results of this review are threatened by both risks of systematic errors (bias) and risks of random errors (play of chance). Laparoscopic box model training appears to improve technical skills compared with no training in trainees with no previous laparoscopic experience. The impacts of this decreased time on patients and healthcare funders in terms of improved outcomes or decreased costs are unknown. There appears to be no significant differences in the improvement of technical skills between different methods of box model training. Further well-designed trials of low risk of bias and random errors are necessary. Such trials should assess the impacts of box model training on surgical skills in both the short and long term, as well as clinical outcomes when the trainee becomes competent to operate on patients.
Axisymmetric Shearing Box Models of Magnetized Disks
NASA Astrophysics Data System (ADS)
Guan, Xiaoyue; Gammie, Charles F.
2008-01-01
The local model, or shearing box, has proven a useful model for studying the dynamics of astrophysical disks. Here we consider the evolution of magnetohydrodynamic (MHD) turbulence in an axisymmetric local model in order to evaluate the limitations of global axisymmetric models. An exploration of the model parameter space shows the following: (1) The magnetic energy and α decay approximately exponentially after an initial burst of turbulence. For our code, HAM, the decay time τ ∝ Res, where Res/2 is the number of zones per scale height. (2) In the initial burst of turbulence the magnetic energy is amplified by a factor proportional to Res^{3/4}λR, where λR is the radial scale of the initial field. This scaling applies only if the most unstable wavelength of the magnetorotational instability is resolved and the final field is subthermal. (3) The shearing box is a resonant cavity and in linear theory exhibits a discrete set of compressive modes. These modes are excited by the MHD turbulence and are visible as quasi-periodic oscillations (QPOs) in temporal power spectra of fluid variables at low spatial resolution. At high resolution the QPOs are hidden by a noise continuum. (4) In axisymmetry, disk turbulence is local. The correlation function of the turbulence is limited in radial extent, and the peak magnetic energy density is independent of the radial extent of the box LR for LR > 2H. (5) Similar results are obtained for the HAM, ZEUS, and ATHENA codes; ATHENA has an effective resolution that is nearly double that of HAM and ZEUS. (6) Similar results are obtained for 2D and 3D runs at similar resolution, but only for particular choices of the initial field strength and radial scale of the initial magnetic field.
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources. This is despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors, but also failures occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
2009-09-01
prior to Traditional VT processing. This proves to be effective and provides more robust burst detection for −3 ≤ SNR ≤ 10 dB. Performance of a... [table-of-contents residue omitted; recoverable section titles: TD and WD Dimensionality; Performance Sensitivity Analysis; Effect of Burst Location Error; Effect of Dissimilar Signal SNRs; Effect of Dissimilar Signal Types; Conclusion]
Formenti, Alessandro; Zocchi, Luciano
2014-10-01
Respiratory neuromuscular activity needs to adapt to physiologic and pathologic conditions. We studied the conditioning effects of sensory fiber (putative Ia and II type from neuromuscular spindles) stimulation on the fictive respiratory output to the diaphragm, recorded from C4 phrenic ventral root, of in-vitro brainstem-spinal cord preparations from rats. The respiratory burst frequency in these preparations decreased gradually (from 0.26±0.02 to 0.09±0.003 bursts(-1)±SEM) as the age of the donor rats increased from zero to 4 days. The frequency greatly increased when the pH of the bath was lowered, and was significantly reduced by amiloride. C4 low threshold, sensory fiber stimulation, mimicking a stretched muscle, induced a short-term facilitation of the phrenic output increasing burst amplitude and frequency. When the same stimulus was applied contingently on the motor bursts, in an operant conditioning paradigm (a 500ms pulse train with a delay of 700ms from the beginning of the burst) a strong and persistent (>1h) increase in burst frequency was observed (from 0.10±0.007 to 0.20±0.018 bursts(-1)). Conversely, with random stimulation burst frequency increased only slightly and declined again within minutes to control levels after stopping stimulation. A forward model is assumed to interpret the data, and the notion of error signal, i.e. the sensory fiber activation indicating an unexpected stretched muscle, is re-considered in terms of the reward/punishment value. The signal, gaining hedonic value, is reviewed as a powerful unconditioned stimulus suitable in establishing a long-term operant conditioning-like process. Copyright © 2014 Elsevier B.V. All rights reserved.
Huynh, Chi; Wong, Ian C K; Correa-West, Jo; Terry, David; McCarthy, Suzanne
2017-04-01
Since the publication of To Err Is Human: Building a Safer Health System in 1999, there has been much research conducted into the epidemiology, nature and causes of medication errors in children, from prescribing and supply to administration. It is reassuring to see growing evidence of improving medication safety in children; however, based on media reports, it can be seen that serious and fatal medication errors still occur. This critical opinion article examines the problem of medication errors in children and provides recommendations for research, training of healthcare professionals and a culture shift towards dealing with medication errors. There are three factors that we need to consider to unravel what is missing and why fatal medication errors still occur. (1) Who is involved and affected by the medication error? (2) What factors hinder staff and organisations from learning from mistakes? Does the fear of litigation and criminal charges deter healthcare professionals from voluntarily reporting medication errors? (3) What are the educational needs required to prevent medication errors? It is important to educate future healthcare professionals about medication errors and human factors to prevent these from happening. Further research is required to apply aviation's 'black box' principles in healthcare to record and learn from near misses and errors to prevent future events. There is an urgent need for the black box investigations to be published and made public for the benefit of other organisations that may have similar potential risks for adverse events. International sharing of investigations and learning is also needed.
TNT equivalency of M10 propellant
NASA Technical Reports Server (NTRS)
Mcintyre, F. L.; Price, P.
1978-01-01
Peak, side-on blast overpressure and scaled, positive impulse have been measured for M10 single-perforated propellant, web size 0.018 inches, using configurations that simulate the handling of bulk material during processing and shipment. Quantities of 11.34, 22.7, 45.4, and 65.8 kg were tested in orthorhombic shipping containers and fiberboard boxes. High explosive equivalency values for each test series were obtained as a function of scaled distance by comparison to known pressure, arrival time and impulse characteristics for hemispherical TNT surface bursts. The equivalencies were found to depend significantly on scaled distance, with higher values of 150-100 percent (pressure) and 350-125 percent (positive impulse) for the extremes within the range from 1.19 to 3.57 m/cube root of kg. Equivalencies as low as 60-140 percent (pressure) and 30-75 percent (positive impulse) were obtained in the range of 7.14 to 15.8 m/cube root of kg. Within experimental error, both peak pressure and positive impulse scaled as a function of charge weight for all quantities tested in the orthorhombic configuration.
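For reference, the scaled distance used above follows the standard Hopkinson-Cranz cube-root convention, and a pressure-based equivalency can be stated as the ratio of the TNT mass producing the same peak side-on overpressure at the same standoff to the propellant mass tested (standard definitions, stated here for context rather than quoted from the report):

```latex
% Cube-root (Hopkinson-Cranz) scaled distance and pressure-based TNT equivalency
% (standard definitions; the report's exact working is not reproduced here).
Z = \frac{R}{W^{1/3}} \;\left[\mathrm{m\,kg^{-1/3}}\right],
\qquad
E_{p} = 100\,\frac{W_{\mathrm{TNT}}}{W_{\mathrm{prop}}} \;[\%].
```

The impulse-based equivalency is defined analogously by matching scaled positive impulse instead of peak overpressure.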
An improved portmanteau test for autocorrelated errors in interrupted time-series regression models.
Huitema, Bradley E; McKean, Joseph W
2007-08-01
A new portmanteau test for autocorrelation among the errors of interrupted time-series regression models is proposed. Simulation results demonstrate that the inferential properties of the proposed Q(H-M) test statistic are considerably more satisfactory than those of the well-known Ljung-Box test and moderately better than those of the Box-Pierce test. These conclusions generally hold for a wide variety of autoregressive (AR), moving average (MA), and ARMA error processes that are associated with time-series regression models of the form described in Huitema and McKean (2000a, 2000b).
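For context, the two reference statistics named above are usually written as follows, where r_k is the lag-k residual autocorrelation, n the series length and m the number of lags tested; the proposed Q(H-M) statistic itself is not reproduced here:

```latex
% Standard Box-Pierce and Ljung-Box portmanteau statistics, both asymptotically
% chi-squared with m degrees of freedom (fewer when applied to model residuals).
Q_{\mathrm{BP}} = n \sum_{k=1}^{m} r_{k}^{2},
\qquad
Q_{\mathrm{LB}} = n(n+2) \sum_{k=1}^{m} \frac{r_{k}^{2}}{n-k},
\qquad
Q \;\overset{a}{\sim}\; \chi^{2}_{m}.
```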
The voice conveys specific emotions: evidence from vocal burst displays.
Simon-Thomas, Emiliana R; Keltner, Dacher J; Sauter, Disa; Sinicropi-Yao, Lara; Abramson, Anna
2009-12-01
Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and for the voice, also tenderness. Here, we examine how well brief vocal bursts can communicate 22 different emotions: 9 negative (Study 1) and 13 positive (Study 2), and whether prototypical vocal bursts convey emotions more reliably than heterogeneous vocal bursts (Study 3). Results show that vocal bursts communicate emotions like anger, fear, and sadness, as well as seldom-studied states like awe, compassion, interest, and embarrassment. Ancillary analyses reveal family-wise patterns of vocal burst expression. Errors in classification were more common within emotion families (e.g., 'self-conscious,' 'pro-social') than between emotion families. The three studies reported highlight the voice as a rich modality for emotion display that can inform fundamental constructs about emotion.
A Neural Network/Acoustic Emission Analysis of Impact Damaged Graphite/Epoxy Pressure Vessels
NASA Technical Reports Server (NTRS)
Walker, James L.; Hill, Erik v. K.; Workman, Gary L.; Russell, Samuel S.
1995-01-01
Acoustic emission (AE) signal analysis has been used to measure the effects of impact damage on burst pressure in 5.75 inch diameter, inert propellant filled, filament wound pressure vessels. The AE data were collected from fifteen graphite/epoxy pressure vessels featuring five damage states and three resin systems. A burst pressure prediction model was developed by correlating the AE amplitude (frequency) distribution, generated during the first pressure ramp to 800 psig (approximately 25% of the average expected burst pressure for an undamaged vessel) to known burst pressures using a four layered back propagation neural network. The neural network, trained on three vessels from each resin system, was able to predict burst pressures with a worst case error of 5.7% for the entire fifteen bottle set.
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Gamma-ray bursts from cusps on superconducting cosmic strings at large redshifts
NASA Technical Reports Server (NTRS)
Paczynski, Bohdan
1988-01-01
Babul et al. (1987) proposed that some gamma-ray bursts may be caused by energy released at the cusps of oscillating loops made of superconducting cosmic strings. It is claimed that there were some errors and omissions in that work, which are corrected in the present paper. Arguments are presented that, given certain assumptions, the cusps on oscillating superconducting cosmic strings produce highly collimated and energetic electromagnetic bursts and that a fair fraction of electromagnetic energy is likely to come out as gamma rays.
ERIC Educational Resources Information Center
Chiarini, Marc A.
2010-01-01
Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…
Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong
2010-12-20
In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on the experimental study of the channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. However, channel coding alone cannot cope with burst errors caused by channel fading, so interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths, and then the optimum interleaving depth for TPC is also determined. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
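A toy block interleaver illustrates why interleaving depth matters for the fading channel described above (illustration only; the depths and codec used in the experiment differ):

```python
# Toy block interleaver: writing coded bits row-wise and reading column-wise
# spreads a fading-induced burst across many codewords so each sees few errors.
import numpy as np

def interleave(bits, depth):
    cols = len(bits) // depth
    return np.asarray(bits).reshape(depth, cols).T.flatten()

def deinterleave(bits, depth):
    cols = len(bits) // depth
    return np.asarray(bits).reshape(cols, depth).T.flatten()

coded = np.arange(24)                      # stand-in for 4 codewords of 6 bits each
tx = interleave(coded, depth=4)
tx[5:9] = -1                               # a 4-symbol burst erased by a deep fade
rx = deinterleave(tx, depth=4)
print(np.where(rx == -1)[0])               # burst now spread, one error per codeword
```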
Adachi, Hiroaki; Nakano, Takaaki; Miyagawa, Noriko; Ishihama, Nobuaki; Yoshioka, Miki; Katou, Yuri; Yaeno, Takashi
2015-01-01
Pathogen attack sequentially confers pattern-triggered immunity (PTI) and effector-triggered immunity (ETI) after sensing of pathogen patterns and effectors by plant immune receptors, respectively. Reactive oxygen species (ROS) play pivotal roles in PTI and ETI as signaling molecules. Nicotiana benthamiana RBOHB, an NADPH oxidase, is responsible for both the transient PTI ROS burst and the robust ETI ROS burst. Here, we show that RBOHB transactivation mediated by MAPK contributes to R3a/AVR3a-triggered ETI (AVR3a-ETI) ROS burst. RBOHB is markedly induced during the ETI and INF1-triggered PTI (INF1-PTI), but not flg22-triggered PTI (flg22-PTI). We found that the RBOHB promoter contains a functional W-box in the R3a/AVR3a and INF1 signal-responsive cis-element. Ectopic expression of four phospho-mimicking mutants of WRKY transcription factors, which are MAPK substrates, induced RBOHB, and yeast one-hybrid analysis indicated that these mutants bind to the cis-element. Chromatin immunoprecipitation assays indicated direct binding of the WRKY to the cis-element in plants. Silencing of multiple WRKY genes compromised the upregulation of RBOHB, resulting in impairment of AVR3a-ETI and INF1-PTI ROS bursts, but not the flg22-PTI ROS burst. These results suggest that the MAPK-WRKY pathway is required for AVR3a-ETI and INF1-PTI ROS bursts by activation of RBOHB. PMID:26373453
Quark-nova remnants. I. The leftover debris with applications to SGRs, AXPs, and XDINs
NASA Astrophysics Data System (ADS)
Ouyed, R.; Leahy, D.; Niebergal, B.
2007-10-01
We explore the formation and evolution of debris ejected around quark stars in the Quark Nova scenario, and the application to Soft Gamma-ray Repeaters (SGRs) and Anomalous X-ray Pulsars (AXPs). If an isolated neutron star explodes as a Quark Nova, an iron-rich shell of degenerate matter forms from its crust. This model can account for many of the observed features of SGRs and AXPs such as: (i) the two types of bursts (giant and regular); (ii) the spin-up and spin-down episodes during and following the bursts with associated increases in Ṗ; (iii) the energetics of the Boxing Day burst of SGR 1806-20; (iv) the presence of an iron line as observed in SGR1900+14; (v) the correlation between the far-infrared and the X-ray fluxes during the bursting episode and the quiescent phase; (vi) the hard X-ray component observed in SGRs during the giant bursts, and (vii) the discrepancy between the ages of SGRs/AXPs and their supernova remnants. We also find a natural evolutionary relationship between SGRs and AXPs in our model which predicts that the youngest SGRs/AXPs are the most likely to exhibit strong bursting. Many features of X-ray Dim Isolated Neutron stars (XDINs) are also accounted for in our model, such as (i) the two-component blackbody spectra; (ii) the absorption lines around 300 eV; and (iii) the excess optical emission. Table 1 is only available in electronic form at http://www.aanda.org
Localised burst reconstruction from space-time PODs in a turbulent channel
NASA Astrophysics Data System (ADS)
Garcia-Gutierrez, Adrian; Jimenez, Javier
2017-11-01
The traditional proper orthogonal decomposition of the turbulent velocity fluctuations in a channel is extended to time under the assumption that the attractor is statistically stationary and can be treated as periodic for long-enough times. The objective is to extract space- and time-localised eddies that optimally represent the kinetic energy (and two-event correlation) of the flow. Using time-resolved data of a small-box simulation at Reτ = 1880, minimal for y/h 0.25, PODs are computed from the two-point spectral-density tensor Φ(kx, kz, y, y', ω). They are Fourier components in x, z and time, and depend on y and on the temporal frequency ω, or, equivalently, on the convection velocity c = ω/kx. Although the latter depends on y, a spatially and temporally localised 'burst' can be synthesised by adding a range of PODs with specific phases. The results are localised bursts that are amplified and tilted, in a time-periodic version of Orr-like behaviour. Funded by the ERC COTURB project.
The design and analysis of single flank transmission error tester for loaded gears
NASA Technical Reports Server (NTRS)
Houser, D. R.; Bassett, D. E.
1985-01-01
Due to geometrical imperfections in gears and finite tooth stiffnesses, the motion transmitted from an input gear shaft to an output gear shaft will not have conjugate action. In order to strengthen the understanding of transmission error and to verify mathematical models of gear transmission error, a test stand that will measure the transmission error of a gear pair at operating loads, but at reduced speeds would be desirable. This document describes the design and development of a loaded transmission error tester. For a gear box with a gear ratio of one, few tooth meshing combinations will occur during a single test. In order to observe the effects of different tooth mesh combinations and to increase the ability to load test gear pairs with higher gear ratios, the system was designed around a gear box with a gear ratio of two.
Experimental determination of the elastic cotunneling rate in a hybrid single-electron box
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Chia-Heng; Tai, Po-Chen; Chen, Yung-Fu, E-mail: yfuchen@ncu.edu.tw
2014-06-09
We report measurements of charge configurations and charge transfer dynamics in a hybrid single-electron box composed of aluminum and copper. We used two single-electron transistors (SETs) to simultaneously read out different parts of the box, enabling us to map out stability diagrams of the box and identify various charge transfer processes in the box. We further characterized the elastic cotunneling in the box, which is an important source of error in electron turnstiles consisting of hybrid SETs, and found that the rate was as low as 1 Hz at degeneracy and compatible with theoretical estimates for electron tunneling via virtual states in the central superconducting island of the box.
Simulating X-ray bursts during a transient accretion event
NASA Astrophysics Data System (ADS)
Johnston, Zac; Heger, Alexander; Galloway, Duncan K.
2018-06-01
Modelling of thermonuclear X-ray bursts on accreting neutron stars has to date focused on stable accretion rates. However, bursts are also observed during episodes of transient accretion. During such events, the accretion rate can evolve significantly between bursts, and this regime provides a unique test for burst models. The accretion-powered millisecond pulsar SAX J1808.4-3658 exhibits accretion outbursts every 2-3 yr. During the well-sampled month-long outburst of 2002 October, four helium-rich X-ray bursts were observed. Using this event as a test case, we present the first multizone simulations of X-ray bursts under a time-dependent accretion rate. We investigate the effect of using a time-dependent accretion rate in comparison to constant, averaged rates. Initial results suggest that using a constant, average accretion rate between bursts may underestimate the recurrence time when the accretion rate is decreasing, and overestimate it when the accretion rate is increasing. Our model, with an accreted hydrogen fraction of X = 0.44 and a CNO metallicity of Z_CNO = 0.02, reproduces the observed burst arrival times and fluences with root mean square (rms) errors of 2.8 h and 0.11 × 10^{-6} erg cm^{-2}, respectively. Our results support previous modelling that predicted two unobserved bursts and indicate that additional bursts were also missed by observations.
Testing and Improving the Luminosity Relations for Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Collazzi, Andrew
2011-08-01
Gamma Ray Bursts (GRBs) have several luminosity relations where a measurable property of a burst light curve or spectrum is correlated with the burst luminosity. These luminosity relations are calibrated for the fraction of bursts with spectroscopic redshifts and hence the known luminosities. GRBs have thus become known as a type of 'standard candle', where standard candle is meant in the usual sense that their luminosities can be derived from measurable properties of the bursts. GRBs can therefore be used for the same cosmology applications as Type Ia supernovae, including the construction of the Hubble Diagram and measuring massive star formation rate. The greatest disadvantage of using GRBs as standard candles is that their accuracy is lower than desired. With the recent advent of GRBs as a new standard candle, every effort must be made to test and improve the distance measures. Here, several methods are employed to do just that. First, generalized forms of two tests are performed on all of the luminosity relations. All the luminosity relations pass the second of these tests, and all but two pass the first. Even with this failure, the redundancy in using multiple luminosity relations allows all the luminosity relations to retain value. Next, the 'Firmani relation' is shown to have poorer accuracy than first advertised. In addition, it is shown to be exactly derivable from two other luminosity relations. For these reasons, the Firmani relation is useless for cosmology. The Amati relation is then revisited and shown to be an artifact of a combination of selection effects. Therefore, the Amati relation is also not good for cosmology. Fourthly, the systematic errors involved in measuring a popular luminosity indicator (Epeak) are measured. The result is that an irreducible systematic error of 28% exists. After that, a preliminary investigation into the usefulness of breaking GRBs into individual pulses is conducted. The results of an 'ideal' set of data do not allow for confident conclusions due to large error bars. Finally, the work concludes with a discussion about the impact of the work and the future of GRB luminosity relations.
Performance of correlation receivers in the presence of impulse noise.
NASA Technical Reports Server (NTRS)
Moore, J. D.; Houts, R. C.
1972-01-01
An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.
NASA Technical Reports Server (NTRS)
Marshall, Paul; Carts, Marty; Campbell, Art; Reed, Robert; Ladbury, Ray; Seidleck, Christina; Currie, Steve; Riggs, Pam; Fritz, Karl; Randall, Barb
2004-01-01
A viewgraph presentation that reviews recent SiGe bit error test data for different commercially available high speed SiGe BiCMOS chips that were subjected to various levels of heavy ion and proton radiation. Results for the tested chips at different operating speeds are displayed in line graphs.
The Design of a Secure File Storage System
1979-12-01
[Garbled OCR code excerpt; recoverable fragments: ERROR_CODE ... "file not found; write access to directory not permitted" ... t := GATEKEEPER.TICKET(MAILBOX, 0) ... "file not found; read access to directory" ... GATEKEEPER.AWAIT(MAILBOX, ...)]
The AGILE Mission and Gamma-Ray Bursts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longo, Francesco; INFN, section of Trieste; Tavani, M.
2007-05-01
The AGILE Mission will explore the gamma-ray Universe with a very innovative instrument combining for the first time a gamma-ray imager and a hard X-ray imager. AGILE will be operational at the beginning of 2007 and it will provide crucial data for the study of Active Galactic Nuclei, Gamma-Ray Bursts, unidentified gamma-ray sources, Galactic compact objects, supernova remnants, TeV sources, and fundamental physics by microsecond timing. The AGILE instrument is designed to simultaneously detect and image photons in the 30 MeV - 50 GeV and 15 - 45 keV energy bands with excellent imaging and timing capabilities, and a large field of view covering approximately 1/5 of the entire sky at energies above 30 MeV. A CsI calorimeter is capable of GRB triggering in the energy band 0.3-50 MeV. The broadband detection of GRBs and the study of implications for particle acceleration and high energy emission are primary goals of the mission. AGILE can image GRBs with 2-3 arcminute error boxes in the hard X-ray range, and provide broadband photon-by-photon detection in the 15-45 keV, 0.3-50 MeV, and 30 MeV-30 GeV energy ranges. Microsecond on-board photon tagging and an approximately 100 microsecond gamma-ray detection deadtime will be crucial for fast GRB timing. On-board calculated GRB coordinates and energy fluxes will be quickly transmitted to the ground by an ORBCOMM transceiver. AGILE is now (January 2007) undergoing final satellite integration and testing. The PSLV launch is planned in spring 2007. AGILE is then foreseen to be fully operational during the summer of 2007.
First-year Analysis of the Operating Room Black Box Study.
Jung, James J; Jüni, Peter; Lebovic, Gerald; Grantcharov, Teodor
2018-06-18
To characterize intraoperative errors, events, and distractions, and measure technical skills of surgeons in minimally invasive surgery practice. Adverse events in the operating room (OR) are common contributors of morbidity and mortality in surgical patients. Adverse events often occur due to deviations in performance and environmental factors. Although comprehensive intraoperative data analysis and transparent disclosure have been advocated to better understand how to improve surgical safety, they have rarely been done. We conducted a prospective cohort study in 132 consecutive patients undergoing elective laparoscopic general surgery at an academic hospital during the first year after the definite implementation of a multiport data capture system called the OR Black Box to identify intraoperative errors, events, and distractions. Expert analysts characterized intraoperative distractions, errors, and events, and measured trainee involvement as main operator. Technical skills were compared, crude and risk-adjusted, among the attending surgeon and trainees. Auditory distractions occurred a median of 138 times per case [interquartile range (IQR) 96-190]. At least 1 cognitive distraction appeared in 84 cases (64%). Medians of 20 errors (IQR 14-36) and 8 events (IQR 4-12) were identified per case. Both errors and events occurred often in dissection and reconstruction phases of operation. Technical skills of residents were lower than those of the attending surgeon (P = 0.015). During elective laparoscopic operations, frequent intraoperative errors and events, variation in surgeons' technical skills, and a high amount of environmental distractions were identified using the OR Black Box.
SSL/TLS Vulnerability Detection Using Black Box Approach
NASA Astrophysics Data System (ADS)
Gunawan, D.; Sitorus, E. H.; Rahmat, R. F.; Hizriadi, A.
2018-03-01
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols that provide data encryption to secure the communication over a network. However, in some cases, vulnerabilities are found in implementations of SSL/TLS because of weak cipher keys, certificate validation errors or session handling errors. One of the most serious SSL/TLS vulnerabilities is Heartbleed. As security is essential in data communication, this research aims to build a scanner that detects SSL/TLS vulnerabilities by using a black box approach. This research focuses on the Heartbleed case. In addition, this research also gathers information about the existing SSL configuration of the server. The black box approach is used to test the output of a system without knowing the process inside the system itself. For testing purposes, this research scanned websites and found that some of them still have SSL/TLS vulnerabilities. Thus, the black box approach can be used to detect vulnerabilities without access to the source code or the internal processes of the application.
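As an illustration of the black-box information-gathering step (not a Heartbleed exploit test, which requires crafting malformed heartbeat records), a probe of this kind can be sketched with Python's standard ssl module; the hostname is a placeholder:

```python
# Black-box probe sketch: report the negotiated protocol and cipher of a server
# without any access to its internals. Illustrative only; not the paper's scanner.
import socket
import ssl

def probe_tls(host, port=443, timeout=5.0):
    context = ssl.create_default_context()
    context.check_hostname = False               # black-box scan: accept any cert
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return {"protocol": tls.version(),   # e.g. 'TLSv1.3'
                    "cipher": tls.cipher()}      # (name, protocol, secret bits)

if __name__ == "__main__":
    print(probe_tls("example.com"))
```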
Synthesis of Arbitrary Quantum Circuits to Topological Assembly: Systematic, Online and Compact.
Paler, Alexandru; Fowler, Austin G; Wille, Robert
2017-09-05
It is challenging to transform an arbitrary quantum circuit into a form protected by surface code quantum error correcting codes (a variant of topological quantum error correction), especially if the goal is to minimise overhead. One of the issues is the efficient placement of magic-state distillation subcircuits, so-called distillation boxes, in the space-time volume that abstracts the computation's required resources. This work presents a general, systematic, online method for the synthesis of such circuits. Distillation box placement is controlled by so-called schedulers. The work introduces a greedy scheduler generating compact box placements. The implemented software, whose source code is available at www.github.com/alexandrupaler/tqec, is used to illustrate and discuss synthesis examples. Synthesis and optimisation improvements are proposed.
Testing and Improving the Luminosity Relations for Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Collazzi, Andrew C.
2012-01-01
Gamma Ray Bursts (GRBs) have several luminosity relations where a measurable property of a burst light curve or spectrum is correlated with the burst luminosity. These luminosity relations are calibrated for the fraction of bursts with spectroscopic redshifts and hence the known luminosities. GRBs have thus become known as a type of 'standard candle', where standard candle is meant in the usual sense that luminosities can be derived from measurable properties of the bursts. GRBs can therefore be used for the same cosmology applications as Type Ia supernovae, including the construction of the Hubble Diagram and measuring massive star formation rate. The greatest disadvantage of using GRBs as standard candles is that their accuracy is lower than desired. With the recent advent of GRBs as a new standard candle, every effort must be made to test and improve the distance measures. Here, methods are employed to do just that. First, generalized forms of two tests are performed on the luminosity relations. All the luminosity relations pass one of these tests, and all but two pass the other. Even with this failure, redundancies in using multiple luminosity relations allow all the luminosity relations to retain value. Next, the 'Firmani relation' is shown to have poorer accuracy than first advertised. It is also shown to be derivable from two other luminosity relations. For these reasons, the Firmani relation is useless for cosmology. The Amati relation is then revisited and shown to be an artifact of a combination of selection effects. Therefore, the Amati relation is also not good for cosmology. Fourthly, the systematic errors involved in measuring a luminosity indicator (Epeak) are measured. The result is an irreducible systematic error of 28%. Finally, the work concludes with a discussion about the impact of the work and the future of GRB luminosity relations.
Hahn, Philip J; McIntyre, Cameron C
2010-06-01
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) represents an effective treatment for medically refractory Parkinson's disease; however, understanding of its effects on basal ganglia network activity remains limited. We constructed a computational model of the subthalamopallidal network, trained it to fit in vivo recordings from parkinsonian monkeys, and evaluated its response to STN DBS. The network model was created with synaptically connected single compartment biophysical models of STN and pallidal neurons, and stochastically defined inputs driven by cortical beta rhythms. A least mean square error training algorithm was developed to parameterize network connections and minimize error when compared to experimental spike and burst rates in the parkinsonian condition. The output of the trained network was then compared to experimental data not used in the training process. We found that reducing the influence of the cortical beta input on the model generated activity that agreed well with recordings from normal monkeys. Further, during STN DBS in the parkinsonian condition the simulations reproduced the reduction in GPi bursting found in existing experimental data. The model also provided the opportunity to greatly expand analysis of GPi bursting activity, generating three major predictions. First, its reduction was proportional to the volume of STN activated by DBS. Second, GPi bursting decreased in a stimulation frequency dependent manner, saturating at values consistent with clinically therapeutic DBS. And third, ablating STN neurons, reported to generate similar therapeutic outcomes as STN DBS, also reduced GPi bursting. Our theoretical analysis of stimulation induced network activity suggests that regularization of GPi firing is dependent on the volume of STN tissue activated and a threshold level of burst reduction may be necessary for therapeutic effect.
An analysis of the characteristics of rough bed turbulent shear stresses in an open channel
NASA Astrophysics Data System (ADS)
Keshavarzy, A.; Ball, J. E.
1997-06-01
Entrainment of sediment particles from channel beds into the channel flow is influenced by the characteristics of the flow turbulence which produces stochastic shear stress fluctuations at the bed. Recent studies of the structure of turbulent flow have recognized bursting processes as important mechanisms for the transfer of momentum into the laminar boundary layer. Of these processes, the sweep event has been recognized as the most important bursting event for entrainment of sediment particles as it imposes forces in the direction of the flow, resulting in movement of particles by rolling, sliding and occasionally saltating. Similarly, the ejection event has been recognized as important for sediment transport since these events maintain the sediment particles in suspension. In this study, the characteristics of bursting processes and, in particular, the sweep event were investigated in a flume with a rough bed. The instantaneous velocity fluctuations of the flow were measured in two dimensions using a small electromagnetic velocity meter and the turbulent shear stresses were determined from these velocity fluctuations. It was found that the shear stress applied to the sediment particles on the bed resulting from sweep events depends on the magnitude of the turbulent shear stress and its probability distribution. A statistical analysis of the experimental data was undertaken and it was found necessary to apply a Box-Cox transformation to transform the data into a normally distributed sample. This enabled determination of the mean shear stress, angle of action and standard error of estimate for sweep and ejection events. These instantaneous shear stresses were found to be greater than the mean flow shear stress; for the sweep event they were approximately 40 percent greater near the channel bed. Results from this analysis suggest that the critical shear stress determined from the Shields diagram is not sufficient to predict the initiation of motion due to its use of the temporal mean shear stress. It is suggested that initiation of particle motion, but not continuous motion, can occur earlier than suggested by the Shields diagram due to the higher shear stresses imposed on the particles by the stochastic shear stresses resulting from turbulence within the flow.
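A minimal sketch of the normalization step described above, applied to invented positively skewed shear-stress magnitudes rather than the measured data; SciPy's maximum-likelihood Box-Cox fit is used:

```python
# Box-Cox normalization sketch (illustrative data, not the flume measurements):
# transform skewed instantaneous shear-stress magnitudes before computing
# event statistics such as means and standard errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sweep_stress = rng.lognormal(mean=0.0, sigma=0.6, size=500)   # skewed, positive

transformed, lam = stats.boxcox(sweep_stress)    # lambda chosen by max likelihood
print(f"fitted lambda = {lam:.3f}")
print(f"skewness before: {stats.skew(sweep_stress):.2f}, "
      f"after: {stats.skew(transformed):.2f}")
```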
Prompt Optical Observations of Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Akerlof, Carl; Balsano, Richard; Barthelmy, Scott; Bloch, Jeff; Butterworth, Paul; Casperson, Don; Cline, Tom; Fletcher, Sandra; Frontera, Fillippo; Gisler, Galen; Heise, John; Hills, Jack; Hurley, Kevin; Kehoe, Robert; Lee, Brian; Marshall, Stuart; McKay, Tim; Pawl, Andrew; Piro, Luigi; Szymanski, John; Wren, Jim
2000-03-01
The Robotic Optical Transient Search Experiment (ROTSE) seeks to measure simultaneous and early afterglow optical emission from gamma-ray bursts (GRBs). A search for optical counterparts to six GRBs with localization errors of 1 deg² or better produced no detections. The earliest limiting sensitivity is m_ROTSE > 13.1 at 10.85 s (5 s exposure) after the gamma-ray rise, and the best limit is m_ROTSE > 16.0 at 62 minutes (897 s exposure). These are the most stringent limits obtained for the GRB optical counterpart brightness in the first hour after the burst. Consideration of the gamma-ray fluence and peak flux for these bursts and for GRB 990123 indicates that there is not a strong positive correlation between optical flux and gamma-ray emission.
Loudness enhancement: Monaural, binaural and dichotic
NASA Technical Reports Server (NTRS)
Elmasian, R. O.; Galambos, R.
1975-01-01
It is shown that when one tone burst precedes another by 100 msec, variations in the intensity of the first systematically influence the loudness of the second. When the first burst is more intense than the second, the loudness of the second is increased; when the first burst is less intense, the loudness of the second is decreased. This occurs in monaural, binaural and dichotic paradigms of signal presentation. When both bursts are presented to the same ear there is more enhancement with less intersubject variability than when they are presented to different ears. Monaural enhancements as large as 30 dB can readily be demonstrated, but decrements rarely exceed 5 dB. Possible physiological mechanisms are discussed for this loudness enhancement, which apparently shares certain characteristics with time-order-error, assimilation, and temporal partial masking experiments.
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
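In their usual parameterisations (which may differ in detail from the paper's notation), the two transformations compared here are:

```latex
% Box-Cox (parameter \lambda) and log-sinh (parameters a, b) transformations
% of a positive prediction variable y.
z =
\begin{cases}
\dfrac{y^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[4pt]
\ln y, & \lambda = 0,
\end{cases}
\qquad\qquad
z = \frac{1}{b}\,\ln\!\big(\sinh(a + b\,y)\big).
```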
An extended Reed Solomon decoder design
NASA Technical Reports Server (NTRS)
Chen, J.; Owsley, P.; Purviance, J.
1991-01-01
It has previously been shown that the Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger Bounds with an arbitrarily small probability of a miscorrect. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. This decoder is especially useful for errors which are patterned with a long burst plus some random errors.
Performance evaluation of a burst-mode EDFA in an optical packet and circuit integrated network.
Shiraiwa, Masaki; Awaji, Yoshinari; Furukawa, Hideaki; Shinada, Satoshi; Puttnam, Benjamin J; Wada, Naoya
2013-12-30
We experimentally investigate the performance of burst-mode EDFA in an optical packet and circuit integrated system. In such networks, packets and light paths can be dynamically assigned to the same fibers, resulting in gain transients in EDFAs throughout the network that can limit network performance. Here, we compare the performance of a 'burst-mode' EDFA (BM-EDFA), employing transient suppression techniques and optical feedback, with conventional EDFAs, and those using automatic gain control and previous BM-EDFA implementations. We first measure gain transients and other impairments in a simplified set-up before making frame error-rate measurements in a network demonstration.
ACTS TDMA network control. [Advanced Communication Technology Satellite
NASA Technical Reports Server (NTRS)
Inukai, T.; Campanella, S. J.
1984-01-01
This paper presents basic network control concepts for the Advanced Communications Technology Satellite (ACTS) System. Two experimental systems, called the low-burst-rate and high-burst-rate systems, along with ACTS ground system features, are described. The network control issues addressed include frame structures, acquisition and synchronization procedures, coordinated station burst-time plan and satellite-time plan changes, on-board clock control based on ground drift measurements, rain fade control by means of adaptive forward-error-correction (FEC) coding and transmit power augmentation, and reassignment of channel capacities on demand. The NASA ground system, which includes a primary station, diversity station, and master control station, is also described.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
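As one concrete piece of this chain, the innermost (n, n-16) CRC layer can be sketched as a bitwise CRC-16. The CCITT polynomial 0x1021 with an all-ones preset is assumed here, which is the form usually associated with CCSDS frames; the paper's exact parameters are not reproduced:

```python
# Bitwise CRC-16 sketch: polynomial x^16 + x^12 + x^5 + 1 (0x1021), preset 0xFFFF,
# no final XOR. Appending the 16 parity bits at the sender makes the receiver's
# recomputed remainder zero when the frame is error-free.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry transfer frame payload"
parity = crc16_ccitt(frame)                                  # sender appends these 16 bits
assert crc16_ccitt(frame + parity.to_bytes(2, "big")) == 0   # receiver-side check
```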
NASA Technical Reports Server (NTRS)
Berg, M.; Buchner, S.; Kim, H.; Friendlich, M.; Perez, C.; Phan, A.; Seidleck, C.; LaBel, K.; Kruckmeyer, K.
2010-01-01
A novel approach to dynamic SEE ADC testing is presented. The benefits of this test scheme versus prior implemented techniques include the ability to observe ADC SEE errors that are in the form of phase shifts, single bit upsets, bursts of disrupted signal composition, and device clock loss.
NASA Astrophysics Data System (ADS)
Leahy, Denis A.; Ouyed, R.; Niebergal, B.
2006-12-01
Mass is ejected from a quark star formed by the Quark-Nova process (Ouyed, Dey and Dey, 2002 A&A, 390, L39; Keranen, Ouyed and Jaikumar 2005 ApJ, 681, 485). Some fraction of this ejecta is below escape velocity and falls back toward the compact object. If the magnetic field of the compact object is high enough, the fall-back material forms a shell of iron-rich material which then evolves quasi-statically. We explore the formation and evolution of such a fall-back crust (so-called because the material originates in the crust of the neutron star progenitor to the quark-nova). We find the resulting properties have application to the observed properties of Soft Gamma-ray Repeaters (SGRs) and Anomalous X-ray Pulsars (AXPs). These observed features of SGRs and AXPs are: (i) the two types of bursts (giant and regular); (ii) the spin-up and spin-down episodes during and following the bursts with associated persistent increases in period derivative; (iii) the energetics of the Boxing Day burst of SGR 1806-20; (iv) the presence of an iron line as observed in SGR1900+14; (v) the correlation between the far-infrared and the X-ray fluxes during the bursting episode and the quiescent phase; (vi) the hard X-ray component observed in SGRs during the giant bursts, and (vii) the discrepancy between the ages of SGRs/AXPs and their supernova remnants. We also find a natural evolutionary relationship between SGRs and AXPs in our model which predicts that only the youngest SGRs/AXPs are most likely to exhibit strong bursting. We acknowledge funding for this research from the Natural Science and Engineering Research Council of Canada.
Adaptive box filters for removal of random noise from digital images
Eliason, E.M.; McEwen, A.S.
1990-01-01
We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (σ) of those pixels within a local box surrounding each pixel; hence they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
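A minimal sketch of the general idea for procedure (1), assuming an illustrative window size and threshold rather than the published parameter values: a pixel that deviates from its local box mean by more than k local standard deviations is treated as a bit error and replaced by the local mean.

    # Sketch of an adaptive box filter for random bit-error removal.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_despeckle(img, box=5, k=3.0):
        img = img.astype(float)
        local_mean = uniform_filter(img, size=box)
        local_sq_mean = uniform_filter(img ** 2, size=box)
        local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
        outliers = np.abs(img - local_mean) > k * local_std   # pixels unrelated to the scene
        out = img.copy()
        out[outliers] = local_mean[outliers]                   # replace them with the local mean
        return out

    # demo: a ramp image corrupted by 1% salt-and-pepper bit errors
    rng = np.random.default_rng(0)
    img = np.tile(np.linspace(0, 255, 128), (128, 1))
    noisy = img.copy()
    hits = rng.random(img.shape) < 0.01
    noisy[hits] = rng.choice([0, 255], size=hits.sum())
    print(np.abs(adaptive_despeckle(noisy) - img).mean())      # residual error after filtering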
Giske, Kristina; Stoiber, Eva M; Schwarz, Michael; Stoll, Armin; Muenter, Marc W; Timke, Carmen; Roeder, Falk; Debus, Juergen; Huber, Peter E; Thieke, Christian; Bendl, Rolf
2011-06-01
To evaluate the local positioning uncertainties during fractionated radiotherapy of head-and-neck cancer patients immobilized using a custom-made fixation device and discuss the effect of possible patient correction strategies for these uncertainties. A total of 45 head-and-neck patients underwent regular control computed tomography scanning using an in-room computed tomography scanner. The local and global positioning variations of all patients were evaluated by applying a rigid registration algorithm. One bounding box around the complete target volume and nine local registration boxes containing relevant anatomic structures were introduced. The resulting uncertainties for a stereotactic setup and the deformations referenced to one anatomic local registration box were determined. Local deformations of the patients immobilized using our custom-made device were compared with previously published results. Several patient positioning correction strategies were simulated, and the residual local uncertainties were calculated. The patient anatomy in the stereotactic setup showed local systematic positioning deviations of 1-4 mm. The deformations referenced to a particular anatomic local registration box were similar to the reported deformations assessed from patients immobilized with commercially available Aquaplast masks. A global correction, including the rotational error compensation, decreased the remaining local translational errors. Depending on the chosen patient positioning strategy, the remaining local uncertainties varied considerably. Local deformations in head-and-neck patients occur even if an elaborate, custom-made patient fixation method is used. A rotational error correction decreased the required margins considerably. None of the considered correction strategies achieved perfect alignment. Therefore, weighting of anatomic subregions to obtain the optimal correction vector should be investigated in the future. Copyright © 2011 Elsevier Inc. All rights reserved.
A (31,15) Reed-Solomon Code for large memory systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps: (1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation, and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3 and 4.
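By way of illustration, the sketch below carries out step (1), syndrome calculation, for a (31,15) RS code over GF(2^5). The primitive polynomial (x^5 + x^2 + 1) and the narrow-sense root convention (roots alpha^1 through alpha^16) are assumptions made for the example, not necessarily the conventions of the hardware described in the paper.

    # Syndrome calculation for a (31,15) Reed-Solomon code over GF(2^5).
    PRIM = 0b100101                     # assumed field generator: x^5 + x^2 + 1
    exp, log = [0] * 62, [0] * 32
    x = 1
    for i in range(31):                 # build exp/log tables for GF(2^5)
        exp[i] = x
        log[x] = i
        x <<= 1
        if x & 0b100000:
            x ^= PRIM
    for i in range(31, 62):             # duplicate exp table to avoid modular indexing
        exp[i] = exp[i - 31]

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

    def syndromes(received):            # received: 31 field symbols, received[0] = highest power
        s = []
        for j in range(1, 17):          # 2t = 16 syndromes for 16 parity symbols
            acc = 0
            for coeff in received:      # Horner evaluation of r(alpha^j)
                acc = gf_mul(acc, exp[j]) ^ coeff
            s.append(acc)
        return s                        # all zero when no detectable errors occurred

    print(syndromes([0] * 31))          # the all-zero word is a codeword -> sixteen zeros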
Augmented twin-nonlinear two-box behavioral models for multicarrier LTE power amplifiers.
Hammi, Oualid
2014-01-01
A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models lead to the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients.
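A hedged sketch of the memory-polynomial-with-cross-terms idea behind the second box: the amplifier output is regressed on delayed input samples multiplied by powers of (possibly differently delayed) input envelopes, with the complex coefficients found by linear least squares. The polynomial order, memory depth, circular edge handling, and toy data below are illustrative assumptions, not the paper's exact augmented twin-nonlinear two-box formulation.

    # Least-squares fit of a memory polynomial with cross terms (illustrative only).
    import numpy as np

    def build_basis(x, K=5, M=3):
        """Columns: x[n-m] * |x[n-q]|^(k-1) for k=1..K and small delays m, q."""
        cols = []
        for m in range(M):
            xm = np.roll(x, m)                      # circular shift for brevity
            for q in range(M):
                env = np.abs(np.roll(x, q))
                for k in range(1, K + 1):
                    cols.append(xm * env ** (k - 1))
        return np.column_stack(cols)

    def fit_model(x, y):
        """Fit complex coefficients by least squares; return coefficients and NMSE (dB)."""
        Phi = build_basis(x, y=None) if False else build_basis(x)
        coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        y_hat = Phi @ coeffs
        nmse_db = 10 * np.log10(np.sum(np.abs(y - y_hat) ** 2) / np.sum(np.abs(y) ** 2))
        return coeffs, nmse_db

    # brief demo with synthetic complex baseband data standing in for PA measurements
    rng = np.random.default_rng(0)
    x = rng.normal(size=2048) + 1j * rng.normal(size=2048)
    y = x * (1.0 - 0.05 * np.abs(x) ** 2) + 0.02 * np.roll(x, 1)   # toy nonlinearity with memory
    print(fit_model(x, y)[1], "dB NMSE")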
Predator-prey modeling of the coupling of co-propagating CAE to kink modes
NASA Astrophysics Data System (ADS)
Fredrickson, Eric
2012-10-01
Co-propagating Compressional Alfven eigenmodes (CAE) with shorter wavelength and higher frequency than the counter-propagating CAE and Global Alfven eigenmodes (GAE) often accompany a low frequency n=1 kink. The lower frequency CAE and GAE are excited through a Doppler-shifted cyclotron resonance; the high frequency CAE (hfCAE) through a simple parallel resonance. We present measurements of the mode structure and spectrum of the hfCAE, and compare those measurements to predictions of a simple model for CAE. The modes are bursting with a typical burst frequency on the order of a few kHz. The n=1 kink frequency is usually higher than this, but when the kink frequency does drop towards the hfCAE burst frequency, the hfCAE burst frequency can become locked with the kink frequency. A simple predator-prey model to simulate the hfCAE bursting demonstrates that a modulation of the growth or damping rate by a few percent, at a frequency near the natural burst frequency, can lock the burst frequency to the modulation frequency. The modulation of the damping rate is postulated to be through a coupling of the kink with a symmetry-breaking error field. The deeper question is how the kink interaction with a locked mode can affect the damping/growth rates of the CAE.
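A hedged numerical sketch of the predator-prey locking idea, using an assumed generic form rather than the exact model in the abstract: the fast-ion drive N is replenished by a constant source and depleted by the mode, while the mode amplitude W grows on the drive and decays at a damping rate that can be weakly modulated. All parameter values are arbitrary; the modulation frequency would be chosen near the natural burst frequency seen in an unmodulated run.

    # Generic predator-prey bursting model with a modulated damping rate.
    import numpy as np

    def simulate(n_steps=50000, dt=5e-4, source=1.0, alpha=5.0, sigma=20.0,
                 nu0=1.0, eps=0.0, f_mod=0.0):
        t = np.arange(n_steps) * dt
        N = np.empty(n_steps); W = np.empty(n_steps)
        N[0], W[0] = 0.0, 1e-6
        for i in range(n_steps - 1):
            nu = nu0 * (1.0 + eps * np.sin(2.0 * np.pi * f_mod * t[i]))   # modulated damping
            N[i + 1] = N[i] + dt * (source - alpha * N[i] * W[i])          # drive depleted by mode
            W[i + 1] = max(W[i] + dt * (sigma * N[i] - nu) * W[i], 1e-12)  # mode grows on the drive
        return t, W

    t, W_free = simulate()                          # unmodulated run: read off natural burst rate
    t, W_lock = simulate(eps=0.03, f_mod=2.0)       # few-percent modulation near that rate
    # comparing burst times in W_lock with the modulation phase illustrates frequency locking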
Sentinel-1 TOPS interferometry for along-track displacement measurement
NASA Astrophysics Data System (ADS)
Jiang, H. J.; Pei, Y. Y.; Li, J.
2017-02-01
The European Space Agency’s Sentinel-1 mission, a constellation of two C-band synthetic aperture radar (SAR) satellites, utilizes terrain observation by progressive scan (TOPS) antenna beam steering as its default operation mode to achieve wide-swath coverage and short revisit time. The beam steering during the TOPS acquisition provides a means to measure azimuth motion by using the phase difference between forward and backward looking interferograms within regions of burst overlap. Hence, there are two spectral diversity techniques for along-track displacement measurement, including multi-aperture interferometry (MAI) and “burst overlap interferometry”. This paper analyses the measurement accuracies of MAI and burst overlap interferometry. Due to large spectral separation in the overlap region, burst overlap interferometry is a more sensitive measurement. We present a TOPS interferometry approach for along-track displacement measurement. The phase bias caused by azimuth miscoregistration is first estimated by burst overlap interferometry over stationary regions. After correcting the coregistration error, the MAI phase and the interferometric phase difference between burst overlaps are recalculated to obtain along-track displacements. We test the approach with Sentinel-1 TOPS interferometric data over the 2015 Mw 7.8 Nepal earthquake fault. The results prove the feasibility of our approach and show the potential of joint estimation of along-track displacement with burst overlap interferometry and MAI.
Reliability of anthropometric measurements in European preschool children: the ToyBox-study.
De Miguel-Etayo, P; Mesana, M I; Cardon, G; De Bourdeaudhuij, I; Góźdź, M; Socha, P; Lateva, M; Iotova, V; Koletzko, B V; Duvinage, K; Androutsos, O; Manios, Y; Moreno, L A
2014-08-01
The ToyBox-study aims to develop and test an innovative and evidence-based obesity prevention programme for preschoolers in six European countries: Belgium, Bulgaria, Germany, Greece, Poland and Spain. In multicentre studies, anthropometric measurements using standardized procedures that minimize errors in the data collection are essential to maximize reliability of measurements. The aim of this paper is to describe the standardization process and reliability (intra- and inter-observer) of height, weight and waist circumference (WC) measurements in preschoolers. All technical procedures and devices were standardized and centralized training was given to the fieldworkers. At least seven children per country participated in the intra- and inter-observer reliability testing. Intra-observer technical error ranged from 0.00 to 0.03 kg for weight and from 0.07 to 0.20 cm for height, with the overall reliability being above 99%. A second training was organized for WC due to low reliability observed in the first training. Intra-observer technical error for WC ranged from 0.12 to 0.71 cm during the first training and from 0.05 to 1.11 cm during the second training, and reliability above 92% was achieved. Epidemiological surveys need standardized procedures and training of researchers to reduce measurement error. In the ToyBox-study, very good intra- and inter-observer agreement was achieved for all anthropometric measurements performed. © 2014 World Obesity.
The Use of Time Series Analysis and t Tests with Serially Correlated Data Tests.
ERIC Educational Resources Information Center
Nicolich, Mark J.; Weinstein, Carol S.
1981-01-01
Results of three methods of analysis applied to simulated autocorrelated data sets with an intervention point (varying in autocorrelation degree, variance of error term, and magnitude of intervention effect) are compared and presented. The three methods are: t tests; maximum likelihood Box-Jenkins (ARIMA); and Bayesian Box-Jenkins. (Author/AEF)
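The contrast between the first two methods can be reproduced in a few lines. The sketch below is a hedged illustration, not the study's simulation design: it generates an AR(1) series with a level shift at an intervention point, then compares a naive two-sample t test with an ARIMA(1,0,0) fit that includes a step regressor; all parameter values are arbitrary.

    # Naive t test versus ARIMA with an intervention regressor on autocorrelated data.
    import numpy as np
    from scipy import stats
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    n, phi, effect, t0 = 200, 0.7, 0.5, 100

    e = rng.normal(0, 1, n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]            # AR(1) error structure
    step = (np.arange(n) >= t0).astype(float)   # intervention indicator
    y += effect * step

    # The t test ignores serial correlation and tends to overstate significance.
    t_stat, p_naive = stats.ttest_ind(y[t0:], y[:t0])

    # The ARIMA model handles the autocorrelation explicitly via the AR(1) term.
    fit = ARIMA(y, exog=step.reshape(-1, 1), order=(1, 0, 0)).fit()
    print(p_naive, fit.pvalues)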
A search for optical counterparts of gamma-ray bursts. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Hye-Sook
Gamma Ray Bursts (GRBs) are mysterious flashes of gamma rays lasting several tens to hundreds of seconds that occur approximately once per day. NASA launched the orbiting Compton Gamma Ray Observatory to study GRBs and other gamma ray phenomena. CGRO carries the Burst and Transient Source Experiment (BATSE) specifically to study GRBs. Although BATSE has collected data on over 600 GRBs, and confirmed that GRBs are localized, high intensity point sources of MeV gamma rays distributed isotropically in the sky, the nature and origin of GRBs remains a fundamental problem in astrophysics. BATSE's 8 gamma ray sensors, located on the corners of the box-shaped CGRO, can detect the onset of GRBs and record their intensity and energy spectra as a function of time. The position of the burst on the sky can be determined to < ±10° from the BATSE data stream. This position resolution is not sufficient to point a large optical telescope at the exact position of a GRB, which would determine its origin by associating it with a star. Because of their brief duration it is not known if GRBs are accompanied by visible radiation. Their seemingly large energy output suggests that this should be the case. Simply scaling the ratio of visible to gamma ray intensities of the Crab Nebula to the GRB output suggests that GRBs ought to be accompanied by visible flashes of magnitude 10 or so. A few photographs of areas containing a burst location that were coincidentally taken during the burst yield lower limits on visible output of magnitude 4. The detection of visible light during the GRB would provide information on burst physics, provide improved pointing coordinates for precise examination of the field by large telescopes, and provide the justification for larger dedicated optical counterpart instruments. The purpose of this experiment is to detect or set lower limits on optical counterpart radiation simultaneously accompanying the gamma rays from GRBs.
A Multi-telescope Campaign on FRB 121102: Implications for the FRB Population
NASA Astrophysics Data System (ADS)
Law, C. J.; Abruzzo, M. W.; Bassa, C. G.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Cantwell, T.; Carey, S. H.; Chatterjee, S.; Cordes, J. M.; Demorest, P.; Dowell, J.; Fender, R.; Gourdji, K.; Grainge, K.; Hessels, J. W. T.; Hickish, J.; Kaspi, V. M.; Lazio, T. J. W.; McLaughlin, M. A.; Michilli, D.; Mooley, K.; Perrott, Y. C.; Ransom, S. M.; Razavi-Ghods, N.; Rupen, M.; Scaife, A.; Scott, P.; Scholz, P.; Seymour, A.; Spitler, L. G.; Stovall, K.; Tendulkar, S. P.; Titterington, D.; Wharton, R. S.; Williams, P. K. G.
2017-11-01
We present results of the coordinated observing campaign that made the first subarcsecond localization of a fast radio burst, FRB 121102. During this campaign, we made the first simultaneous detection of an FRB burst using multiple telescopes: the VLA at 3 GHz and the Arecibo Observatory at 1.4 GHz. Of the nine bursts detected by the Very Large Array at 3 GHz, four had simultaneous observing coverage at other observatories at frequencies from 70 MHz to 15 GHz. The one multi-observatory detection and three non-detections of bursts seen at 3 GHz confirm earlier results showing that burst spectra are not well modeled by a power law. We find that burst spectra are characterized by a ~500 MHz envelope and apparent radio energy as high as 10^40 erg. We measure significant changes in the apparent dispersion between bursts that can be attributed to frequency-dependent profiles or some other intrinsic burst structure that adds a systematic error to the estimate of dispersion measure by up to 1%. We use FRB 121102 as a prototype of the FRB class to estimate a volumetric birth rate of FRB sources R_FRB ≈ 5 x 10^-5/N_r Mpc^-3 yr^-1, where N_r is the number of bursts per source over its lifetime. This rate is broadly consistent with models of FRBs from young pulsars or magnetars born in superluminous supernovae or long gamma-ray bursts if the typical FRB repeats on the order of thousands of times during its lifetime.
MASTER OT J015539.85+485955.6 was detected during Fermi alert inspection 3.5h after the trigger time
NASA Astrophysics Data System (ADS)
Rebolo, R.; Lipunov, V.; Gorbovskoy, E.; Serra, M.; Lodieu, N.; Israelian, G.; Suarez-Andres, L.; Shumkov, V.; Tyurina, N.; Kornilov, V.; Balanutsa, P.; Kuznetsov, A.; Vlasenko, D.; Gorbunov, I.; Vladimirov, V.; Popova, E.; Buckley, D.; Potter, S.; Kniazev, A.; Kotze, M.; Tlatov, A.; Parhomenko, A. V.; Dormidontov, D.; Senik, V.; Gress, O.; Ivanov, K.; Budnev, N. M.; Yurkov, V.; Sergienko, Yu.; Gabovich, A.; Sinyakov, E.; Krushinski, V.; Zalozhnih, I.; Shurpakov, S.
2015-11-01
MASTER-IAC, MASTER-Kislovodsk and MASTER-SAAO were pointed to the Fermi GBM GRB151107B (Stanbro, Meegan, GCN #18570) at 2015-11-07 20:25:52(/59s/58s) UT (R. Rebolo et al., GCN #18576). These were prompt pointing observations, since the duration of the GRB was ~140 s. After 5 minutes of alert observations of the error-box center, the MASTER telescopes at IAC and Kislovodsk started the inspection survey inside the large Fermi error box (ra=00 42 28 dec=+48 48 58 r=4.533300) obtained by GCN socket.
The First Swift BAT Gamma-Ray Burst Catalog
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Barthelmy, S. D.; Barbier, L.; Cummings, J. R.; Fenimore, E. E.; Gehrels, N.; Hullinger, D.; Krimm, H. A.; Markwardt, C. B.; Palmer, D. M.;
2007-01-01
We present the first Swift Burst Alert Telescope (BAT) catalog of gamma ray bursts (GRBs), which contains bursts detected by the BAT between 2004 December 19 and 2007 June 16. This catalog (hereafter BAT1 catalog) contains burst trigger time, location, 90% error radius, duration, fluence, peak flux, and time averaged spectral parameters for each of 237 GRBs, as measured by the BAT. The BAT-determined position reported here is within 1.75' of the Swift X-ray Telescope (XRT)-determined position for 90% of these GRBs. The BAT T(sub 90) and T(sub 50) durations peak at 80 and 20 seconds, respectively. From the fluence-fluence correlation, we conclude that about 60% of the observed peak energies, E(sup obs)(sub peak) of BAT GRBs could be less than 100 keV. We confirm that GRB fluence to hardness and GRB peak flux to hardness are correlated for BAT bursts in analogous ways to previous missions' results. The correlation between the photon index in a simple power-law model and E(sup obs)(sub peak) is also confirmed. We also report the current status for the on-orbit BAT calibrations based on observations of the Crab Nebula.
The Second SWIFT Burst Alert Telescope (BAT) Gamma-Ray Burst Catalog
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Barthelmy, S. D.; Baumgartner, W. H.; Cummings, J. R.; Fenimore, E. E.; Gehrels, N.; Krimm, H. A.; Markwardt, C. B.; Palmer, D. M.; Parsons, A. M.;
2012-01-01
We present the second Swift Burst Alert Telescope (BAT) catalog of gamma-ray bursts (GRBs), which contains 476 bursts detected by the BAT between 2004 December 19 and 2009 December 21. This catalog (hereafter the BAT2 catalog) presents burst trigger time, location, 90% error radius, duration, fluence, peak flux, time-averaged spectral parameters and time-resolved spectral parameters measured by the BAT. In the correlation study of various observed parameters extracted from the BAT prompt emission data, we distinguish among long-duration GRBs (L-GRBs), short-duration GRBs (S-GRBs), and short-duration GRBs with extended emission (S-GRBs with E.E.) to investigate differences in the prompt emission properties. The fractions of L-GRBs, S-GRBs and S-GRBs with E.E. in the catalog are 89%, 8% and 2%, respectively. We compare the BAT prompt emission properties with the BATSE, BeppoSAX and HETE-2 GRB samples. We also correlate the observed prompt emission properties with the redshifts for the GRBs with known redshift. The BAT T(sub 90) and T(sub 50) durations peak at 70 s and 30 s, respectively. We confirm that the spectra of the BAT S-GRBs are generally harder than those of the L-GRBs.
2009-11-19
CAPE CANAVERAL, Fla. – At the Astronaut Hall of Fame near NASA’s Kennedy Space Center in Florida, Patrick Simpkins, director of Engineering at Kennedy, tries out a pair of space gloves for their dexterity and flexibility in a glove box at the 2009 Astronaut Glove Challenge, part of NASA’s Centennial Challenges Program. Looking over his shoulder is Kennedy Director Bob Cabana. The nationwide competition focused on developing improved pressure suit gloves for astronauts to use while working in space. During the challenge, the gloves were submitted to burst tests, joint force tests and tests to measure their dexterity and strength during operation in a glove box which simulates the vacuum of space. Centennial Challenges is NASA’s program of technology prizes for the citizen-inventor. The winning prize for the Glove Challenge is $250,000 provided by the Centennial Challenges Program. Photo credit: NASA/Kim Shiflett
EKG-based detection of deep brain stimulation in fMRI studies.
Fiveland, Eric; Madhavan, Radhika; Prusik, Julia; Linton, Renee; Dimarzio, Marisa; Ashe, Jeffrey; Pilitsis, Julie; Hancu, Ileana
2018-04-01
To assess the impact of synchronization errors between the assumed functional MRI paradigm timing and the deep brain stimulation (DBS) on/off cycling using a custom electrocardiogram-based triggering system. METHODS: A detector for measuring and predicting the on/off state of cycling deep brain stimulation was developed and tested in six patients in office visits. Three-electrode electrocardiogram measurements, amplified by a commercial bio-amplifier, were used as input for a custom electronics box (e-box). The e-box transformed the deep brain stimulation waveforms into transistor-transistor logic pulses, recorded their timing, and propagated it in time. The e-box was used to trigger task-based deep brain stimulation functional MRI scans in 5 additional subjects; the impact of timing accuracy on t-test values was investigated in a simulation study using the functional MRI data. Following locking to each patient's individual waveform, the e-box was shown to predict stimulation onset with an average absolute error of 112 ± 148 ms, 30 min after disconnecting from the patients. The subsecond accuracy of the e-box in predicting timing onset is more than adequate for our slow varying, 30-/30-s on/off stimulation paradigm. Conversely, the experimental deep brain stimulation onset prediction accuracy in the absence of the e-box, which could be off by as much as 4 to 6 s, could significantly decrease activation strength. Using this detector, stimulation can be accurately synchronized to functional MRI acquisitions, without adding any additional hardware in the MRI environment. Magn Reson Med 79:2432-2439, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
A Near-Surface Burst EMP Driver Package for Neutron-Induced Sources.
1980-09-01
The Evaluation of Small Arms Effectiveness Criteria, Volume I
1975-05-01
Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels
NASA Astrophysics Data System (ADS)
Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang
In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over the error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts, by transmitting packets in the round-robin manner and also adopting a mechanism for channel prediction and swapping.
Chung, Younshik; Chang, IlJoon
2015-11-01
Recently, the introduction of vehicle black box systems, or in-vehicle video event data recorders, has enabled drivers to collect more accurate crash information such as location, time, and situation at the pre-crash and crash moment, which can be analyzed to find the crash causal factors more accurately. This study presents the vehicle black box system in brief and its application status in Korea. Based on the crash data obtained from the vehicle black box system, this study analyzes the accuracy of the crash data collected by the existing road crash data recording method, in which crashes are recorded by police officers based on the accident parties' statements or eyewitness accounts. The analysis results show that the crash data observed by the existing method have an average spatial difference of 84.48 m (standard deviation 157.75 m) and an average temporal error of 29.05 min (standard deviation 19.24 min). Additionally, the average and standard deviation of crash speed errors were found to be 9.03 km/h and 7.21 km/h, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
An Acoustic Emission and Acousto-Ultrasonic Analysis of Impact Damaged Composite Pressure Vessels
NASA Technical Reports Server (NTRS)
Walker, James L.; Workman, Gary L.
1996-01-01
The research presented herein summarizes the development of acoustic emission (AE) and acousto-ultrasonic (AU) techniques for the nondestructive evaluation of filament wound composite pressure vessels. Vessels fabricated from both graphite and Kevlar fibers with an epoxy matrix were examined prior to hydroburst using AU and during hydroburst using AE. A dead weight drop apparatus featuring both blunt and sharp impactor tips was utilized to produce a single known-energy 'damage' level in each of the vessels so that the effects of impact damage could be measured. The damage levels ranged from barely visible to obvious fiber breakage and delamination. Independent neural network burst pressure prediction models were developed from a sample of each fiber/resin material system. Here, the cumulative AE amplitude distribution data collected from a low level proof test (25% of the expected burst pressure for undamaged vessels) were used to measure the effects of the impact on the residual burst pressure of the vessels. The results of the AE/neural network model for the inert propellant filled graphite/epoxy vessels 'IM7/3501-6, IM7/977-2 and IM7/8553-45' demonstrated that burst pressures can be predicted from low level AE proof test data, yielding an average error of 5.0%. The trained network for the IM7/977-2 class vessels was also able to predict the expected burst pressure of taller vessels (three times longer hoop region length) constructed of the same material and using the same manufacturing technique, with an average error of 4.9%. To a lesser extent, the burst pressure prediction models could also measure the effects of impact damage to the Kevlar/epoxy 'Kevlar 49/DPL862' vessels. Here though, due to the higher attenuation of the material, an insufficient amount of AE amplitude information was collected to generate robust network models. Although the worst case trial errors were less than 6%, when additional blind predictions were attempted, errors as high as 50% were produced. An acousto-ultrasonic robotic evaluation system (AURES) was developed for mapping the effects of damage on filament wound pressure vessels prior to hydroproof testing. The AURES injects a single broadband ultrasonic pulse into each vessel at preprogrammed positions and records the effects of the interaction of that pulse with the material volume using a broadband receiver. A stress wave factor in the form of the energy associated with the 750 to 1000 kHz and 1000 to 1250 kHz frequency bands was used to map the potential failure sites for each vessel. The energy map associated with the graphite/epoxy vessels was found to decrease in the region of the impact damage. The Kevlar vessels showed the opposite trend, with the energy values increasing around the damage/failure sites.
Laser velocimetry: A state-of-the-art overview
NASA Technical Reports Server (NTRS)
Stevenson, W. H.
1982-01-01
General systems design and optical and signal processing requirements for laser velocimetric measurement of flows are reviewed. Bias errors which occur in measurements using burst (counter) processors are discussed and particle seeding requirements are suggested.
Monthly ENSO Forecast Skill and Lagged Ensemble Size
NASA Astrophysics Data System (ADS)
Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.
2018-04-01
The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
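A small sketch of the extrapolation step described here: once an error-covariance model between members initialized at different lags has been fitted, the MSE of an equally weighted lagged-ensemble mean of n members is the average of the corresponding covariance entries. The exponential-decay covariance with lag-dependent error growth used below is only an assumed stand-in for the parametric model fitted in the paper.

    # MSE of a lagged-ensemble mean from an assumed parametric error covariance.
    import numpy as np

    def lagged_ensemble_mse(n_members, rho=0.9, growth=0.15, inits_per_day=4):
        lags = np.arange(n_members) / inits_per_day               # lag of each member in days
        sig = 1.0 + growth * lags                                  # older initializations err more
        C = np.outer(sig, sig) * rho ** np.abs(lags[:, None] - lags[None, :])
        return C.mean()                                            # MSE of the equally weighted mean

    for n in (1, 4, 8, 16, 32, 64):
        print(n, round(lagged_ensemble_mse(n), 3))                 # MSE first drops, then rises as
                                                                   # older, poorer members are added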
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at 10-5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32 n data bits followed by 32 overhead bits.
Evaluation of numerical models by FerryBox and Fixed Platform in-situ data in the southern North Sea
NASA Astrophysics Data System (ADS)
Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.
2015-02-01
FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface biogeochemical measurements along selected tracks on a regular basis. Within the European FerryBox Community, several FerryBoxes are operated by different institutions. Here we present a comparison of model simulations applied to the North Sea with FerryBox temperature and salinity data from a transect along the southern North Sea and a more detailed analysis at three different positions located off the English east coast, at the Oyster Ground and in the German Bight. In addition to the FerryBox data, data from a Fixed Platform of the MARNET network are applied. Two operational hydrodynamic models have been evaluated for different time periods: results of BSHcmod v4 are analysed for 2009-2012, while simulations of FOAM AMM7 NEMO have been available from the MyOcean data base for 2011 and 2012. The simulation of water temperatures is satisfactory; however, limitations of the models exist, especially near the coast in the southern North Sea, where both models underestimate salinity. Statistical errors differ between the models and the measured parameters: the temperature root mean square error (rmse) is 0.92 K for BSHcmod v4 but only 0.44 K for AMM7. For salinity, BSHcmod is slightly better than AMM7 (0.98 and 1.1 psu, respectively). The study results reveal weaknesses of both models in terms of variability, absolute levels and limited spatial resolution. In coastal areas, where the simulation of the transition zone between the coasts and the open ocean is still a demanding task for operational modelling, FerryBox data, combined with other observations of differing temporal and spatial scales, serve as an invaluable tool for model evaluation and optimization. The optimization of hydrodynamical models with high-frequency regional datasets, like the FerryBox data, is beneficial for their subsequent integration in ecosystem modelling.
Assimilating Ferry Box data into the Aegean Sea model
NASA Astrophysics Data System (ADS)
Korres, G.; Ntoumas, M.; Potiris, M.; Petihakis, G.
2014-12-01
Operational monitoring and forecasting of marine environmental conditions is a necessary tool for the effective management and protection of the marine ecosystem. It requires the use of multi-variable real-time measurements combined with advanced physical and ecological numerical models. Towards this, a FerryBox system was originally installed and operated on the route Piraeus-Heraklion in 2003 for one year. In early 2012 the system was upgraded and moved to a new high-speed ferry traveling daily on the same route as before. This route largely traverses the Cretan Sea, the largest and deepest basin (2500 m) in the south Aegean Sea. The HCMR FerryBox is today the only one in the Mediterranean and thus it can be considered as a pilot case. The analysis of FerryBox SST and SSS in situ data revealed the presence of important regional and sub-basin scale physical phenomena, such as wind-driven coastal upwelling and the presence of a mesoscale cyclone to the north of Crete. In order to assess the impact of the FerryBox SST data in constraining the Aegean Sea hydrodynamic model, which is part of the POSEIDON forecasting system, the in situ data were assimilated using an advanced multivariate assimilation scheme based on the Singular Evolutive Extended Kalman (SEEK) filter, a simplified square-root extended Kalman filter that operates with low-rank error covariance matrices as a way to reduce the computational burden. Thus, during the period mid-August 2012 to mid-January 2013, in addition to the standard assimilating parameters, daily SST data along the ferryboat route from Piraeus to Heraklion were assimilated into the model. Inter-comparisons between the control run of the system (a model run that uses only the standard data set of observations) and the experiment where the observational data set is augmented with the FerryBox SST data produce interesting results. Apart from the improvement of the SST error, the additional assimilation of daily FerryBox SST observations is found to have a significant impact on the correct representation of the dynamical dipole in the central Cretan Sea and other dynamic features of the South Aegean Sea, which is reflected in the decrease of the basin-wide SSH RMS error.
Bouda, Martin; Caplan, Joshua S.; Saiers, James E.
2016-01-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
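For illustration, the sketch below estimates a box-counting dimension for a 3D point set and reduces quantization error with a crude random search over grid offsets, standing in for the pattern-search optimizer described in the abstract; the scale range, offset count, and test geometry are arbitrary choices.

    # Box-counting fractal dimension with grid-offset optimization (sketch).
    import numpy as np

    def box_count(points, size, offset):
        idx = np.floor((points - offset) / size).astype(int)
        return len(np.unique(idx, axis=0))            # number of occupied boxes

    def fractal_dimension(points, sizes, n_offsets=50, seed=0):
        rng = np.random.default_rng(seed)
        counts = []
        for s in sizes:
            trials = [box_count(points, s, rng.uniform(0, s, 3)) for _ in range(n_offsets)]
            counts.append(min(trials))                # QE is strictly positive, so keep the minimum
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope                                  # slope of log N(s) vs log(1/s)

    # sanity check: points along a straight segment should give a dimension near 1
    pts = np.column_stack([np.linspace(0, 1, 2000)] * 3)
    print(fractal_dimension(pts, sizes=[0.2, 0.1, 0.05, 0.025, 0.0125]))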
Error protection capability of space shuttle data bus designs
NASA Technical Reports Server (NTRS)
Proch, G. E.
1974-01-01
Error protection assurance in the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for these hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
Howell, David R; Meehan, William P; Loosemore, Michael P; Cummiskey, Joseph; Grabner von Rosenberg, Jean-Paul; McDonagh, David
2017-09-01
To prospectively examine the neurocognitive, postural, dual-task and visual abilities of female Olympic-style boxers before and after participation in a tournament. Sixty-one females completed the modified Balance Error Scoring System (mBESS), King-Devick test and 3 m timed-up-and-go test in single-task and dual-task conditions. A subset (n=31) completed the CogState computerised neurocognitive test. Initial testing was completed prior to the 2016 Women's World Boxing Championships; each participant repeated the testing protocol within a day of elimination. No participant sustained a concussion. Pretournament and post-tournament performance variables were compared using paired t-tests or Wilcoxon signed-rank tests. Participants completed a mean of 7.5±4.5 rounds of Olympic-style boxing over 2-8 days. Post-tournament scores were significantly lower than pretournament scores for total mBESS (2.2±1.9 errors vs 5.5±2.9 errors, p<0.001, d =1.23) and King-Devick time (14.2±3.9 s vs 18.0±8.3 s, p=0.002, d =0.53). Processing speed was significantly faster after the boxing tournament (maze chase task: 1.39±0.34 correct moves/second vs 1.17±0.44 correct moves/second, p=0.001, d =0.58). No significant changes across time were detected for the other obtained outcome variables. Female boxers demonstrated either improvement or no significant changes in test performance after competing in an Olympic-style boxing tournament, relative to pretournament performance. As many of the test tasks were novel for the boxers, practice effects may have contributed to improved performance. When there is a short time frame between assessments, clinicians should be aware of potential practice effects when using ringside neurological tests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A Burst-Mode Photon-Counting Receiver with Automatic Channel Estimation and Bit Rate Detection
2016-02-24
communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode...obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver...receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB
2009-11-19
CAPE CANAVERAL, Fla. – At the Astronaut Hall of Fame near NASA’s Kennedy Space Center in Florida, Anna Heiney, a Public Affairs support writer with Abacus Technology at Kennedy, tries out a pair of space gloves for their dexterity and flexibility in a glove box at the 2009 Astronaut Glove Challenge, part of NASA’s Centennial Challenges Program. Looking over her shoulder is Kennedy Director Bob Cabana. The nationwide competition focused on developing improved pressure suit gloves for astronauts to use while working in space. During the challenge, the gloves were submitted to burst tests, joint force tests and tests to measure their dexterity and strength during operation in a glove box which simulates the vacuum of space. Centennial Challenges is NASA’s program of technology prizes for the citizen-inventor. The winning prize for the Glove Challenge is $250,000 provided by the Centennial Challenges Program. Photo credit: NASA/Kim Shiflett
Galaxy Strategy for Ligo-Virgo Gravitational Wave Counterpart Searches
NASA Technical Reports Server (NTRS)
Gehrels, Neil; Cannizzo, John K.; Kanner, Jonah; Kasliwal, Mansi M.; Nissanke, Samaya; Singer, Leo P.
2016-01-01
In this work we continue a line of inquiry begun in Kanner et al. which detailed a strategy for utilizing telescopes with narrow fields of view, such as the Swift X-Ray Telescope (XRT), to localize gravitational wave (GW) triggers from LIGO (Laser Interferometer Gravitational-Wave Observatory)/Virgo. If one considers the brightest galaxies that produce 50 percent of the light, then the number of galaxies inside typical GW error boxes will be several tens. We have found that this result applies both in the early years of Advanced LIGO, when the range is small and the error boxes large, and in the later years, when the error boxes will be small and the range large. This strategy has the beneficial property of reducing the number of telescope pointings by a factor of 10 to 100 compared with tiling the entire error box. Additional galaxy count reduction will come from a GW rapid distance estimate which will restrict the radial slice in search volume. Combining the bright galaxy strategy with a convolution based on anticipated GW localizations, we find that the searches can be restricted to about 18 plus or minus 5 galaxies for 2015, about 23 plus or minus 4 for 2017, and about 11 plus or minus for 2020. This assumes a distance localization at the putative neutron star-neutron star (NS-NS) merger range mu for each target year, and these totals are integrated out to the range. Integrating out to the horizon would roughly double the totals. For localizations with r much less than mu the totals would decrease. The galaxy strategy we present in this work will enable numerous sensitive optical and X-ray telescopes with small fields of view to participate meaningfully in searches wherein the prospects for rapidly fading afterglow place a premium on a fast response time.
Technical note: Application of the Box-Cox data transformation to animal science experiments.
Peltier, M R; Wilcox, C J; Sharp, D C
1998-03-01
In the use of ANOVA for hypothesis testing in animal science experiments, the assumption of homogeneity of errors often is violated because of scale effects and the nature of the measurements. We demonstrate a method for transforming data so that the assumptions of ANOVA are met (or violated to a lesser degree) and apply it in analysis of data from a physiology experiment. Our study examined whether melatonin implantation would affect progesterone secretion in cycling pony mares. Overall treatment variances were greater in the melatonin-treated group, and several common transformation procedures failed. Application of the Box-Cox transformation algorithm reduced the heterogeneity of error and permitted the assumption of equal variance to be met.
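A minimal sketch of the general workflow, using illustrative synthetic data rather than the mares' progesterone measurements: estimate the Box-Cox lambda by maximum likelihood, transform, and check whether the group error variances become more homogeneous before running the ANOVA.

    # Box-Cox transformation to reduce heterogeneity of variance (illustrative data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.lognormal(mean=1.0, sigma=0.4, size=20)   # skewed, scale-dependent variance
    treated = rng.lognormal(mean=1.6, sigma=0.4, size=20)

    y = np.concatenate([control, treated])
    y_bc, lam = stats.boxcox(y)                              # MLE of the transformation parameter

    print("lambda =", round(lam, 2))
    print("Levene p, raw:        ", stats.levene(control, treated).pvalue)
    print("Levene p, transformed:", stats.levene(y_bc[:20], y_bc[20:]).pvalue)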
The Angular Power Spectrum of BATSE 3B Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Tegmark, Max; Hartmann, Dieter H.; Briggs, Michael S.; Meegan, Charles A.
1996-01-01
We compute the angular power spectrum C(sub l) from the BATSE 3B catalog of 1122 gamma-ray bursts and find no evidence for clustering on any scale. These constraints bridge the entire range from small scales (which probe source clustering and burst repetition) to the largest scales (which constrain possible anisotropies from the Galactic halo or from nearby cosmological large-scale structures). We develop an analysis technique that takes the angular position errors into account. For specific clustering or repetition models, strong upper limits can be obtained down to scales l approx. equal to 30, corresponding to a couple of degrees on the sky. The minimum-variance burst weighting that we employ is visualized graphically as an all-sky map in which each burst is smeared out by an amount corresponding to its position uncertainty. We also present separate bandpass-filtered sky maps for the quadrupole term and for the multipole ranges l = 3-10 and l = 11-30, so that the fluctuations on different angular scales can be inspected separately for visual features such as localized 'hot spots' or structures aligned with the Galactic plane. These filtered maps reveal no apparent deviations from isotropy.
A comparison of earthquake backprojection imaging methods for dense local arrays
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.
2018-03-01
Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
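A schematic sketch of the recommended detection pass: each trace is replaced by a sliding-window kurtosis characteristic function, shifted by the predicted travel time to a candidate grid point, and stacked over stations, with detections taken from peaks in the stack. The window length, delays, and toy data below are placeholders, not the AIDA processing parameters.

    # Kurtosis-waveform backprojection stack (sketch with synthetic traces).
    import numpy as np
    from scipy.stats import kurtosis

    def kurtosis_cf(trace, win=100):
        """Sliding-window kurtosis characteristic function of one trace."""
        cf = np.zeros(len(trace))
        for i in range(win, len(trace)):
            cf[i] = kurtosis(trace[i - win:i])
        return np.maximum(cf, 0.0)                 # keep only impulsive (heavy-tailed) windows

    def backproject(traces, delays_samples):
        """Stack shifted characteristic functions for one candidate grid point."""
        stack = np.zeros(traces.shape[1])
        for trace, d in zip(traces, delays_samples):
            stack += np.roll(kurtosis_cf(trace), -d)   # align predicted onsets at this grid point
        return stack

    # toy demo: three noisy traces with an impulsive arrival at known delays
    rng = np.random.default_rng(0)
    n, delays = 2000, [100, 250, 400]
    traces = rng.normal(0, 1, (3, n))
    for k, d in enumerate(delays):
        traces[k, 800 + d:820 + d] += rng.normal(0, 8, 20)
    stack = backproject(traces, delays)
    print(int(np.argmax(stack)))                   # peak falls near the aligned onset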
Cundy, Thomas P; Thangaraj, Evelyn; Rafii-Tari, Hedyeh; Payne, Christopher J; Azzie, Georges; Sodergren, Mikael H; Yang, Guang-Zhong; Darzi, Ara
2015-04-01
Excessive or inappropriate tissue interaction force during laparoscopic surgery is a recognized contributor to surgical error, especially for robotic surgery. Measurement of force at the tool-tissue interface is, therefore, a clinically relevant skill assessment variable that may improve effectiveness of surgical simulation. Popular box trainer simulators lack the necessary technology to measure force. The aim of this study was to develop a force sensing unit that may be integrated easily with existing box trainer simulators and to (1) validate multiple force variables as objective measurements of laparoscopic skill, and (2) determine concurrent validity of a revised scoring metric. A base plate unit sensitized to a force transducer was retrofitted to a box trainer. Participants of 3 different levels of operative experience performed 5 repetitions of a peg transfer and suture task. Multiple outcome variables of force were assessed as well as a revised scoring metric that incorporated a penalty for force error. Mean, maximum, and overall magnitudes of force were significantly different among the 3 levels of experience, as well as force error. Experts were found to exert the least force and fastest task completion times, and vice versa for novices. Overall magnitude of force was the variable most correlated with experience level and task completion time. The revised scoring metric had similar predictive strength for experience level compared with the standard scoring metric. Current box trainer simulators can be adapted for enhanced objective measurements of skill involving force sensing. These outcomes are significantly influenced by level of expertise and are relevant to operative safety in laparoscopic surgery. Conventional proficiency standards that focus predominantly on task completion time may be integrated with force-based outcomes to be more accurately reflective of skill quality. Copyright © 2015 Elsevier Inc. All rights reserved.
Lee, Byoung-Doo; Kim, Mi Ri; Kang, Min-Young; Cha, Joon-Yung; Han, Su-Hyun; Nawkar, Ganesh M; Sakuraba, Yasuhito; Lee, Sang Yeol; Imaizumi, Takato; McClung, C Robertson; Kim, Woe-Yeon; Paek, Nam-Chon
2018-02-02
The previously published version of this Article contained errors in Figure 5. In panel c, the second and fourth blot images were incorrectly labeled 'α-Myc' and should have been labelled 'α-HA'. These errors have been corrected in both the PDF and HTML versions of the Article.
Neural Network Burst Pressure Prediction in Composite Overwrapped Pressure Vessels
NASA Technical Reports Server (NTRS)
Hill, Eric v. K.; Dion, Seth-Andrew T.; Karl, Justin O.; Spivey, Nicholas S.; Walker, James L., II
2007-01-01
Acoustic emission data were collected during the hydroburst testing of eleven 15 inch diameter filament wound composite overwrapped pressure vessels. A neural network burst pressure prediction was generated from the resulting AE amplitude data. The bottles shared commonality of graphite fiber, epoxy resin, and cure time. Individual bottles varied by cure mode (rotisserie versus static oven curing), types of inflicted damage, temperature of the pressurant, and pressurization scheme. Three categorical variables were selected to represent undamaged bottles, impact damaged bottles, and bottles with lacerated hoop fibers. This categorization along with the removal of the AE data from the disbonding noise between the aluminum liner and the composite overwrap allowed the prediction of burst pressures in all three sets of bottles using a single backpropagation neural network. Here the worst case error was 3.38 percent.
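A hedged sketch of the general workflow only, with synthetic stand-in data rather than the bottle measurements: features summarizing the AE amplitude distribution plus a categorical damage indicator are fed to a small backpropagation network that regresses burst pressure. Feature choices, network size, and the synthetic relationship are all assumptions for the illustration.

    # Backpropagation neural network regression of burst pressure from AE features (sketch).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_bottles = 40
    amp_hist = rng.random((n_bottles, 10))             # binned AE amplitude distribution
    damage = rng.integers(0, 3, size=(n_bottles, 1))   # 0 undamaged, 1 impact, 2 lacerated fibers
    X = np.hstack([amp_hist, damage])
    burst_psi = (3000 - 400 * damage.ravel()
                 - 800 * amp_hist[:, -3:].sum(axis=1)
                 + rng.normal(0, 50, n_bottles))        # synthetic "ground truth"

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X[:30], burst_psi[:30])                   # train on 30 bottles, test on 10
    pred = model.predict(X[30:])
    print(np.mean(np.abs(pred - burst_psi[30:]) / burst_psi[30:]) * 100, "% mean error")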
Evaluation of voice codecs for the Australian mobile satellite system
NASA Technical Reports Server (NTRS)
Bundrock, Tony; Wilkinson, Mal
1990-01-01
The evaluation procedure to choose a low bit rate voice coding algorithm is described for the Australian land mobile satellite system. The procedure is designed to assess both the inherent quality of the codec under 'normal' conditions and its robustness under 'severe' conditions. For the assessment, normal conditions were chosen to be a random bit error rate with added background acoustic noise, and the severe condition is designed to represent the burst error conditions that arise when the mobile satellite channel suffers from signal fading due to roadside vegetation. The assessment is divided into two phases. First, a reduced set of conditions is used to determine a short list of candidate codecs for more extensive testing in the second phase. The first phase conditions include quality and robustness and codecs are ranked with a 60:40 weighting on the two. Second, the short-listed codecs are assessed over a range of input voice levels, BERs, background noise conditions, and burst error distributions. Assessment is by subjective rating on a five-level opinion scale and all results are then used to derive a weighted Mean Opinion Score using appropriate weights for each of the test conditions.
Automatic classification of background EEG activity in healthy and sick neonates
NASA Astrophysics Data System (ADS)
Löfhede, Johan; Thordstein, Magnus; Löfgren, Nils; Flisberg, Anders; Rosa-Zurera, Manuel; Kjellmer, Ingemar; Lindecrantz, Kaj
2010-02-01
The overall aim of our research is to develop methods for a monitoring system to be used at neonatal intensive care units. When monitoring a baby, a range of different types of background activity needs to be considered. In this work, we have developed a scheme for automatic classification of background EEG activity in newborn babies. EEG from six full-term babies who were displaying a burst suppression pattern while suffering from the after-effects of asphyxia during birth was included along with EEG from 20 full-term healthy newborn babies. The signals from the healthy babies were divided into four behavioural states: active awake, quiet awake, active sleep and quiet sleep. By using a number of features extracted from the EEG together with Fisher's linear discriminant classifier we have managed to achieve 100% correct classification when separating burst suppression EEG from all four healthy EEG types and 93% true positive classification when separating quiet sleep from the other types. The other three sleep stages could not be classified. When the pathological burst suppression pattern was detected, the analysis was taken one step further and the signal was segmented into burst and suppression, allowing clinically relevant parameters such as suppression length and burst suppression ratio to be calculated. The segmentation of the burst suppression EEG works well, with a probability of error around 4%.
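A hedged sketch of the classification step alone: EEG segments are reduced to a few summary features and a Fisher linear discriminant separates burst suppression from healthy background. The feature definitions, amplitude threshold, and synthetic segments below are placeholders for the features extracted in the study.

    # Fisher linear discriminant separating burst-suppression from healthy EEG (sketch).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(2)

    def features(segment):
        # e.g. signal power, line length, and fraction of low-amplitude samples
        power = np.mean(segment ** 2)
        line_length = np.mean(np.abs(np.diff(segment)))
        suppressed = np.mean(np.abs(segment) < 10.0)
        return [power, line_length, suppressed]

    healthy = [features(rng.normal(0, 30, 1000)) for _ in range(100)]
    burst_sup = [features(rng.normal(0, 30, 1000) * (rng.random(1000) < 0.2))
                 for _ in range(100)]                  # mostly suppressed with sparse bursts

    X = np.array(healthy + burst_sup)
    y = np.array([0] * 100 + [1] * 100)

    clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on every other segment
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))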
Estimation of distributed Fermat-point location for wireless sensor networking.
Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien
2011-01-01
This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location within the triangle formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point that minimizes the total distance to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
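A minimal sketch of the Fermat-point idea underlying DFPLE follows: the point minimizing the total distance to the three beacon positions (the geometric median) can be found with Weiszfeld's iteration. The beacon coordinates and iteration limits are assumptions, and the scheme's area-refinement step is not reproduced here.

```python
# Sketch of one way to compute the Fermat point of a beacon triangle
# (Weiszfeld's iteration for the geometric median).
import numpy as np

def fermat_point(vertices, iters=200, eps=1e-9):
    """Geometric median of the beacon positions (Nx2 array)."""
    p = vertices.mean(axis=0)           # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(vertices - p, axis=1)
        d = np.maximum(d, eps)          # guard against division by zero
        w = 1.0 / d
        p_new = (w[:, None] * vertices).sum(axis=0) / w.sum()
        if np.linalg.norm(p_new - p) < eps:
            break
        p = p_new
    return p

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [3.0, 8.0]])   # assumed positions
print("estimated Fermat point:", fermat_point(beacons))
```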
Superdense coding interleaved with forward error correction
Humble, Travis S.; Sadlier, Ronald J.
2016-05-12
Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
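The following toy example (not the paper's simulation code) shows the mechanism the authors exploit: a simple row/column block interleaver spreads one contiguous burst across many FEC codewords, so each codeword sees only a small, correctable number of errors. The block sizes are arbitrary.

```python
# Toy illustration of burst-error spreading with a block interleaver.
import numpy as np

def interleave(bits, rows, cols):
    return bits.reshape(rows, cols).T.reshape(-1)    # write by rows, read by columns

def deinterleave(bits, rows, cols):
    return bits.reshape(cols, rows).T.reshape(-1)    # inverse permutation

rows, cols = 8, 16                 # 8 codewords of 16 bits each (assumed sizes)
data = np.zeros(rows * cols, dtype=int)

tx = interleave(data, rows, cols)
tx[40:48] ^= 1                     # an 8-bit channel burst hits the stream
rx = deinterleave(tx, rows, cols)

errors_per_codeword = rx.reshape(rows, cols).sum(axis=1)
print(errors_per_codeword)         # the burst is spread to one error per codeword
```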
Gamma-ray burster recurrence timescales
NASA Technical Reports Server (NTRS)
Schaefer, B. E.; Cline, T. L.
1984-01-01
Three optical transients have been found which are associated with gamma-ray bursters (GRBs). The deduced recurrence timescale for these optical transients (tau sub opt) will depend on the minimum brightness for which a flash would be detected. A detailed analysis using all available data of tau sub opt as a function of E(gamma)/E(opt) is given. For flashes similar to those found in the Harvard archives, the best estimate of tau sub opt is 0.74 years, with a 99% confidence interval from 0.23 years to 4.7 years. It is currently unclear whether the optical transients from GRBs also give rise to gamma-ray events. One way to test this association is to measure the recurrence timescale of gamma-ray events tau sub gamma. A total of 210 gamma-ray error boxes were examined and it was found that the number of observed overlaps is not significantly different from the number expected from chance coincidence. This observation can be used to place limits on tau sub gamma for an assumed luminosity function. It was found that tau sub gamma is approx. 10 yr if bursts are monoenergetic. However, if GRBs have a power law luminosity function with a wide dynamic range, then the limit is tau sub gamma > 0.5 yr. Hence, the gamma-ray data do not require tau sub gamma and tau sub opt to be different.
ERIC Educational Resources Information Center
Taylor, David P.
1995-01-01
Presents an experiment that demonstrates conservation of momentum and energy using a box on the ground moving backwards as it is struck by a projectile. Discusses lab calculations, setup, management, errors, and improvements. (JRH)
Characterization of impulse noise and analysis of its effect upon correlation receivers
NASA Technical Reports Server (NTRS)
Houts, R. C.; Moore, J. D.
1971-01-01
A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy.
NASA Technical Reports Server (NTRS)
Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.
2017-01-01
Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.
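As a simplified stand-in for the DWT/BPE pipeline (not the flight algorithm used on FPI), the sketch below applies a one-level 2D Haar transform to a toy count map and coarsely quantizes the detail coefficients, illustrating how lossy compression error enters a sky map. The map size and quantization step are assumptions.

```python
# Simplified stand-in for the DWT/BPE idea: one-level 2D Haar transform plus
# coarse quantization of detail coefficients.
import numpy as np

def haar2d(img):
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[1::2, 0::2] + img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[1::2, 0::2] - img[0::2, 1::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def inverse_haar2d(a, h, v, d):
    img = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    img[0::2, 0::2] = a + h + v + d
    img[1::2, 0::2] = a - h + v - d
    img[0::2, 1::2] = a + h - v - d
    img[1::2, 1::2] = a - h - v + d
    return img

rng = np.random.default_rng(2)
counts = rng.poisson(5.0, (32, 16)).astype(float)    # toy plasma count "sky map"

a, h, v, d = haar2d(counts)
q = 2.0                                              # quantization step (lossy)
h, v, d = (np.round(c / q) * q for c in (h, v, d))
recon = inverse_haar2d(a, h, v, d)
print("max absolute compression error:", np.abs(recon - counts).max())
```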
Improving efficiency and reducing administrative burden through electronic communication.
Cook, Katlyn E; Ludens, Gail M; Ghosh, Amit K; Mundell, William C; Fleming, Kevin C; Majka, Andrew J
2013-01-01
The InBox messaging system is an internal, electronic program used at Mayo Clinic, Rochester, MN, to facilitate the sending, receiving, and answering of patient-specific messages and alerts. A standardized InBox was implemented in the Division of General Internal Medicine to decrease the time physicians, physician assistants, and nurse practitioners (clinicians) spend on administrative tasks and to increase efficiency. Clinicians completed surveys and a preintervention InBox pilot test to determine inefficiencies related to administrative burdens and defects (message entry errors). Results were analyzed using Pareto diagrams, value stream mapping, and root cause analysis to prioritize administrative-burden inefficiencies to develop a new, standardized InBox. Clinicians and allied health staff were the target of this intervention and received standardized InBox training followed by a postintervention pilot test for clinicians. Sixteen of 28 individuals (57%) completed the preintervention survey. Twenty-eight clinicians participated in 2 separate 8-day pilot tests (before and after intervention) for the standardized InBox. The number of InBox defects was substantially reduced from 37 (Pilot 1) to 7 (Pilot 2). Frequent InBox defects decreased from 25% to 10%. More than half of clinicians believed the standardized InBox positively affected their work, and 100% of clinicians reported no negative effect on their work. This project demonstrated the successful implementation of the standardized InBox messaging system. Initial assessments show substantial reduction of InBox entry defects and administrative tasks completed by clinicians. The findings of this project suggest increased clinician and allied health staff efficiency and satisfaction, improved clinician work-life balance, and decreased clinician burden caused by administrative tasks.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-10
... their risk by addressing the situation where, under current rules, a trade can be adjusted to a price... substantial errors may fall under the category of a catastrophic error, for which a longer time period is... other exchanges Customer A has now no position and would be at risk of a loss if nullified. 3:20:00 p.m...
Manually locating physical and virtual reality objects.
Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G
2014-09-01
In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision. Human physical interaction with objects in VR for simulation, training, and prototyping involving reaching and manually handling virtual objects in a CAVE are more accurate than predicted when locating farther objects.
NASA Technical Reports Server (NTRS)
Luthcke, S. B.; Marshall, J. A.
1992-01-01
The TOPEX/Poseidon spacecraft was launched on August 10, 1992 to study the Earth's oceans. To achieve maximum benefit from the altimetric data it is to collect, mission requirements dictate that TOPEX/Poseidon's orbit must be computed at an unprecedented level of accuracy. To reach our pre-launch radial orbit accuracy goals, the mismodeling of the radiative nonconservative forces of solar radiation, Earth albedo and infrared re-radiation, and spacecraft thermal imbalances cannot produce in combination more than a 6 cm rms error over a 10 day period. Similarly, the 10-day drag modeling error cannot exceed 3 cm rms. In order to satisfy these requirements, a 'box-wing' representation of the satellite has been developed in which the satellite is modelled as the combination of flat plates arranged in the shape of a box and a connected solar array. The radiative/thermal nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. Select parameters associated with the flat plates are adjusted to obtain a better representation of the satellite acceleration history. This study analyzes the estimation of these parameters from simulated TOPEX/Poseidon laser data in the presence of both nonconservative and gravity model errors. A 'best choice' of estimated parameters is derived and the ability to meet mission requirements with the 'box-wing' model is evaluated.
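A hedged sketch of the box-wing idea follows (plate areas, optical properties, and the flat-plate force model are assumptions, and this is not the operational orbit-determination implementation): each illuminated flat plate contributes a solar-radiation-pressure acceleration, and the plate contributions are summed to give the aggregate effect at the center of mass.

```python
# Sketch of summing flat-plate solar-radiation-pressure accelerations for a
# box-wing spacecraft model. Values and the plate model form are assumed.
import numpy as np

SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU (nominal)
C = 299792458.0          # m/s

def plate_srp(sun_dir, normal, area, spec, diff, mass):
    """Acceleration from one flat plate (standard specular/diffuse plate model)."""
    s = sun_dir / np.linalg.norm(sun_dir)   # unit vector from spacecraft toward the Sun
    n = normal / np.linalg.norm(normal)     # outward plate normal
    cos_t = np.dot(s, n)
    if cos_t <= 0.0:                        # plate not illuminated
        return np.zeros(3)
    p = SOLAR_FLUX / C                      # radiation pressure, N/m^2
    force = -p * area * cos_t * ((1.0 - spec) * s
                                 + 2.0 * (spec * cos_t + diff / 3.0) * n)
    return force / mass

# Toy box plus wing: six body plates and one solar array (all values placeholders).
plates = [
    # normal,               area, specular, diffuse
    (np.array([ 1.0, 0, 0]),  5.0, 0.2, 0.3),
    (np.array([-1.0, 0, 0]),  5.0, 0.2, 0.3),
    (np.array([ 0, 1.0, 0]),  3.0, 0.2, 0.3),
    (np.array([ 0,-1.0, 0]),  3.0, 0.2, 0.3),
    (np.array([ 0, 0, 1.0]),  6.0, 0.2, 0.3),
    (np.array([ 0, 0,-1.0]),  6.0, 0.2, 0.3),
    (np.array([ 0.7, 0, 0.7]), 25.0, 0.05, 0.1),   # solar array, roughly Sun-pointed
]
sun = np.array([1.0, 0.2, 0.5])
mass = 2400.0                               # kg, placeholder

accel = sum(plate_srp(sun, n, A, rs, rd, mass) for n, A, rs, rd in plates)
print("aggregate SRP acceleration (m/s^2):", accel)
```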
An investigation of error correcting techniques for OMV and AXAF
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and number of uncorrectable errors were calculated for each data set before testing.
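The error-injection scheme described above can be sketched as follows (all parameters are assumptions): bursts arrive with exponentially distributed gaps, as in a Poisson process, and burst lengths are drawn from a Gaussian.

```python
# Illustrative reconstruction of the error-injection idea: Poisson-process
# inter-arrival times between bursts and Gaussian-distributed burst lengths.
import numpy as np

rng = np.random.default_rng(3)

def burst_error_pattern(n_bits, mean_gap=5000, mean_len=8, sd_len=3):
    """Return a 0/1 error pattern of length n_bits."""
    pattern = np.zeros(n_bits, dtype=np.uint8)
    pos = 0
    while True:
        pos += int(rng.exponential(mean_gap))        # exponential gap to next burst
        if pos >= n_bits:
            break
        length = max(1, int(round(rng.normal(mean_len, sd_len))))
        pattern[pos:pos + length] = 1                # burst of corrupted bits
        pos += length
    return pattern

data = rng.integers(0, 2, 100_000, dtype=np.uint8)
errs = burst_error_pattern(data.size)
received = data ^ errs                               # inject errors by XOR
print("injected error bits:", int(errs.sum()))
```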
Sex differences in a human analogue of the Radial Arm Maze: the "17-Box Maze Test".
Rahman, Qazi; Abrahams, Sharon; Jussab, Fardin
2005-08-01
This study investigated sex differences in spatial memory using a human analogue of the Radial Arm Maze: a revision of the previously developed Nine Box Maze, called the 17-Box Maze Test herein. The task encourages allocentric spatial processing, dissociates object from spatial memory, and incorporates a within-participants design to provide measures of location and object, working and reference memory. Healthy adult males and females (26 per group) were administered the 17-Box Maze Test, as well as mental rotation and a verbal IQ test. Females made significantly fewer errors on this task than males. However, post hoc analysis revealed that the significant sex difference was specific to object, rather than location, memory measures. These were medium to large effect sizes. The findings raise the issue of task- and component-specific sexual dimorphism in cognitive mapping.
Investigating Reliabilities of Intraindividual Variability Indicators
ERIC Educational Resources Information Center
Wang, Lijuan; Grimm, Kevin J.
2012-01-01
Reliabilities of the two most widely used intraindividual variability indicators, "ISD[superscript 2]" and "ISD", are derived analytically. Both are functions of the sizes of the first and second moments of true intraindividual variability, the size of the measurement error variance, and the number of assessments within a burst. For comparison,…
Clevin, Lotte; Grantcharov, Teodor P
2008-01-01
Laparoscopic box model trainers have been used in training curricula for a long time; however, data on their impact on skills acquisition are still limited. Our aim was to validate a low cost box model trainer as a tool for the training of skills relevant to laparoscopic surgery. Randomised, controlled trial (Canadian Task Force Classification I). University Hospital. Sixteen gynaecologic residents with limited laparoscopic experience were randomised to a group that received a structured box model training curriculum, and a control group. Performance before and after the training was assessed in a virtual reality laparoscopic trainer (LapSim) and was based on objective parameters, registered by the computer system (time, error, and economy of motion scores). Group A showed significantly greater improvement in all performance parameters compared with the control group: economy of movement (p=0.001), time (p=0.001) and tissue damage (p=0.036), confirming the positive impact of the box-trainer curriculum on laparoscopic skills acquisition. Structured laparoscopic skill training on a low cost box model trainer improves performance as assessed using the VR system. Trainees who used the box model trainer showed significant improvement compared to the control group. Box model trainers are valid tools for laparoscopic skills training and should be implemented in the comprehensive training curricula in gynaecology.
Central Procurement Workload Projection Model
1981-02-01
generated by the P&P Directorates, such as procurement actions (PA’s), are pursued. Specifically, Box-Jenkins Autoregressive Integrated Moving Average ... Breakout of PA’s to over and under $10,000 ... the model will predict the actual values and hence the error will be zero. Therefore, after forecasting 3 quarters into the future, no error
Neural control of blood pressure in women: differences according to age
Peinado, Ana B.; Harvey, Ronee E.; Hart, Emma C.; Charkoudian, Nisha; Curry, Timothy B.; Nicholson, Wayne T.; Wallin, B. Gunnar; Joyner, Michael J.; Barnes, Jill N.
2017-01-01
Purpose The blood pressure “error signal” represents the difference between an individual’s mean diastolic blood pressure and the diastolic blood pressure at which 50% of cardiac cycles are associated with a muscle sympathetic nerve activity burst (the “T50”). In this study we evaluated whether T50 and the error signal were related to the extent of change in blood pressure during autonomic blockade in young and older women, to study potential differences in sympathetic neural mechanisms regulating blood pressure before and after menopause. Methods We measured muscle sympathetic nerve activity and blood pressure in 12 premenopausal (25±1 years) and 12 postmenopausal women (61±2 years) before and during complete autonomic blockade with trimethaphan camsylate. Results At baseline, young women had a negative error signal (−8±1 versus 2±1 mmHg, respectively; p<0.001) and lower muscle sympathetic nerve activity (15±1 versus 33±3 bursts/min, respectively; p<0.001) than older women. The change in diastolic blood pressure after autonomic blockade was associated with baseline T50 in older women (r=−0.725, p=0.008) but not in young women (r=−0.337, p=0.29). Women with the most negative error signal had the lowest muscle sympathetic nerve activity in both groups (young: r=0.886, p<0.001; older: r=0.870, p<0.001). Conclusions Our results suggest that there are differences in baroreflex control of muscle sympathetic nerve activity between young and older women, using the T50 and error signal analysis. This approach provides further information on autonomic control of blood pressure in women. PMID:28205011
On the error statistics of Viterbi decoding and the performance of concatenated codes
NASA Technical Reports Server (NTRS)
Miller, R. L.; Deutsch, L. J.; Butman, S. A.
1981-01-01
Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.
Laparoscopic virtual reality and box trainers: is one superior to the other?
Munz, Y; Kumar, B D; Moorthy, K; Bann, S; Darzi, A
2004-03-01
Virtual reality (VR) simulators now have the potential to replace traditional methods of laparoscopic training. The aim of this study was to compare the VR simulator with the classical box trainer and determine whether one has advantages over the other. Twenty-four novices were tested to determine their baseline laparoscopic skills and then randomized into the following three groups: LapSim, box trainer, and no training (control). After 3 weekly training sessions lasting 30 min each, all subjects were reassessed. Assessment included motion analysis and error scores. Nonparametric tests were applied, and p < 0.05 was deemed significant. Both trained groups made significant improvements in all parameters measured (p < 0.05). Compared to the controls, the box trainer group performed significantly better on most of the parameters, whereas the LapSim group performed significantly better on some parameters. There were no significant differences between the LapSim and box trainer groups. LapSim is effective in teaching skills that are transferable to a real laparoscopic task. However, there appear to be no substantial advantages of one system over the other.
Deflection monitoring for a box girder based on a modified conjugate beam method
NASA Astrophysics Data System (ADS)
Chen, Shi-Zhi; Wu, Gang; Xing, Tuo
2017-08-01
After several years of operation, a box girder bridge would commonly experience excessive deflection, which endangers the bridge’s life span as well as the safety of vehicles travelling on it. In order to avoid potential risks, it is essential to constantly monitor the deflection of box girders. However, currently, the direct deflection monitoring methods are limited by the complicated environments beneath the bridges, such as rivers or other traffic lanes, which severely impede the layouts of the sensors. The other indirect deflection monitoring methods mostly do not thoroughly consider the inherent shear lag effect and shear deformation in the box girder, resulting in a rather large error. Under these circumstances, a deflection monitoring method suited to box girders is proposed in this article, based on the conjugate beam method and distributed long-gauge fibre Bragg grating (FBG) sensors. A lab experiment was conducted to verify the reliability and feasibility of this method under practical application. Further, the serviceability under different span-depth ratios and web thicknesses was examined through a finite element model.
High energy X-ray observations of COS-B gamma-ray sources from OSO-8
NASA Technical Reports Server (NTRS)
Dolan, J. F.; Crannell, C. J.; Dennis, B. R.; Frost, K. J.; Orwig, L. E.; Caraveo, P. A.
1985-01-01
During the three years between satellite launch in June 1975 and turn-off in October 1978, the high energy X-ray spectrometer on board OSO-8 observed nearly all of the COS-B gamma-ray source positions given in the 2CG catalog (Swanenburg et al., 1981). An X-ray source was detected at energies above 20 keV at the 6-sigma level of significance in the gamma-ray error box containing 2CG342 - 02 and at the 3-sigma level of significance in the error boxes containing 2CG065 + 00, 2CG195 + 04, and 2CG311 - 01. No definite association between the X-ray and gamma-ray sources can be made from these data alone. Upper limits are given for the 2CG sources from which no X-ray flux was detected above 20 keV.
Left arm/left leg lead reversals at the cable junction box: A cause for an epidemic of errors.
Velagapudi, Poonam; Turagam, Mohit K; Ritter, Sherry; Dohrmann, Mary L
Medical errors, especially due to misinterpretation of electrocardiograms (ECG), are extremely common in patients admitted to the hospital and significantly account for increased morbidity, mortality and health care costs in the United States. Inaccurate performance of an ECG can lead to invalid interpretation and in turn may lead to costly cardiovascular evaluation. We report a retrospective series of 58 sequential cases of ECG limb lead reversals in the ER due to inadvertent interchange in the lead cables at the point where they insert into the cable junction box of one ECG machine. This case series highlights recognition of ECG lead reversal originating in the ECG machine itself. This case series also demonstrates an ongoing need for education regarding standardization of ECG testing and for recognizing technical anomalies to deliver appropriate care for the patient.
Baumketner, Andrij
2009-01-01
The performance of reaction-field methods to treat electrostatic interactions is tested in simulations of ions solvated in water. The potentials of mean force between a sodium chloride ion pair and between the side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that the atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522
The centrifugal force reversal and X-ray bursts
NASA Astrophysics Data System (ADS)
Abramowicz, M. A.; Kluźniak, W.; Lasota, J. P.
2001-08-01
Heyl (2000) made an interesting suggestion that the observed shifts in QPO frequency in type I X-ray bursts could be influenced by the same geometrical effect of strong gravity as the one that causes centrifugal force reversal discovered by Abramowicz & Lasota (1974). However, his main result contains a sign error. Here we derive the correct formula and conclude that constraints on the M(R) relation for neutron stars deduced from the rotational-modulation model of QPO frequency shifts are of no practical interest because the correct formula implies a weak condition R* > 1.3 RS, where RS is the Schwarzschild radius. We also argue against the relevance of the rotational-modulation model to the observed frequency modulations.
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2017-04-01
This study provides guidance to hydrological researchers which enables them to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George
2017-03-01
Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
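For illustration, a minimal sketch of the fixed-lambda Box-Cox residual-error idea referred to above (no hydrological model attached; the flows and lambda value are placeholders): residuals are treated as Gaussian in the transformed space, which makes the back-transformed predictive intervals heteroscedastic in flow space.

```python
# Sketch of a fixed-lambda Box-Cox residual error scheme for streamflow.
import numpy as np

def boxcox(q, lam):
    q = np.asarray(q, dtype=float)
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * np.asarray(z) + 1.0) ** (1.0 / lam)

lam = 0.2                                  # one of the Pareto-optimal choices above
sim = np.array([0.5, 2.0, 10.0, 40.0])     # simulated flows (placeholder values)
obs = np.array([0.6, 1.7, 12.0, 31.0])     # observed flows (placeholder values)

resid = boxcox(obs, lam) - boxcox(sim, lam)
sigma = resid.std(ddof=1)                  # residual spread in transformed space

# Probabilistic prediction for a new simulated flow: add Gaussian noise in
# transformed space, then back-transform (heteroscedastic in flow space).
rng = np.random.default_rng(4)
replicates = inv_boxcox(boxcox(25.0, lam) + rng.normal(0.0, sigma, 1000), lam)
print("90% predictive interval:", np.percentile(replicates, [5, 95]))
```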
Peripheral refraction profiles in subjects with low foveal refractive errors.
Tabernero, Juan; Ohlendorf, Arne; Fischer, M Dominik; Bruckmann, Anna R; Schiefer, Ulrich; Schaeffel, Frank
2011-03-01
To study the variability of peripheral refraction in a population of 43 subjects with low foveal refractive errors. A scan of the refractive error in the vertical pupil meridian of the right eye of 43 subjects (age range, 18 to 80 years; foveal spherical equivalent, < ±2.5 diopters) over the central ± 45° of the visual field was performed using a recently developed angular scanning photorefractor. Refraction profiles across the visual field were fitted with four different models: (1) "flat model" (refractions approximately constant across the visual field), (2) "parabolic model" (refractions approximately follow a parabolic function), (3) "bi-linear model" (linear change of refractions with eccentricity from the fovea to the periphery), and (4) "box model" ("flat" central area with a linear change in refraction from a certain peripheral angle). Based on the minimal residuals of each fit, the subjects were classified into one of the four models. The "box model" accurately described the peripheral refractions in about 50% of the subjects. Peripheral refractions in six subjects were better characterized by a "linear model," in eight subjects by a "flat model," and in eight by the "parabolic model." Even after assignment to one of the models, the variability remained strikingly large, ranging from -0.75 to 6 diopters in the temporal retina at 45° eccentricity. The most common peripheral refraction profile (observed in nearly 50% of our population) was best described by the "box model." The high variability among subjects may limit attempts to reduce myopia progression with a uniform lens design and may rather call for a customized approach.
Mitigating leakage errors due to cavity modes in a superconducting quantum computer
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.
2018-07-01
A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
Potential barge transportation for inbound corn and grain
DOT National Transportation Integrated Search
1997-12-31
This research develops a model for estimating future barge and rail rates for decision making. The Box-Jenkins and the Regression Analysis with ARIMA errors forecasting methods were used to develop appropriate models for determining future rates. A s...
Papadelis, Christos; Chen, Zhe; Kourtidou-Papadeli, Chrysoula; Bamidis, Panagiotis D; Chouvarda, Ioanna; Bekiaris, Evangelos; Maglaveras, Nikos
2007-09-01
The objective of this study is the development and evaluation of efficient neurophysiological signal statistics, which may assess the driver's alertness level and serve as potential indicators of sleepiness in the design of an on-board countermeasure system. Multichannel EEG, EOG, EMG, and ECG were recorded from sleep-deprived subjects exposed to real field driving conditions. A number of severe driving errors occurred during the experiments. The analysis was performed in two main dimensions: the macroscopic analysis that estimates the on-going temporal evolution of physiological measurements during the driving task, and the microscopic event analysis that focuses on the physiological measurements' alterations just before, during, and after the driving errors. Two independent neurophysiologists visually interpreted the measurements. The EEG data were analyzed by using both linear and non-linear analysis tools. We observed the occurrence of brief paroxysmal bursts of alpha activity and an increased synchrony among EEG channels before the driving errors. The alpha relative band ratio (RBR) significantly increased, and the Cross Approximate Entropy that quantifies the synchrony among channels also significantly decreased before the driving errors. Quantitative EEG analysis revealed significant variations of RBR by driving time in the frequency bands of delta, alpha, beta, and gamma. Most of the estimated EEG statistics, such as the Shannon Entropy, Kullback-Leibler Entropy, Coherence, and Cross-Approximate Entropy, were significantly affected by driving time. We also observed an alteration of eye-blink duration with increased driving time and a significant increase in the number and duration of eye blinks before driving errors. EEG and EOG are promising neurophysiological indicators of driver sleepiness and have the potential to monitor sleepiness in occupational settings when incorporated in a sleepiness countermeasure device. The occurrence of brief paroxysmal bursts of alpha activity before severe driving errors is described in detail for the first time. Clear evidence is presented that eye-blinking statistics are sensitive to the driver's sleepiness and should be considered in the design of an efficient and driver-friendly sleepiness detection countermeasure device.
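As an illustration of one of the statistics mentioned above, the sketch below computes an alpha relative band ratio from a short EEG segment using Welch's PSD estimate; the sampling rate, window length, and band edges are assumptions, not necessarily those used in the study.

```python
# Sketch of the alpha relative band ratio: alpha-band power divided by total
# power in a short EEG segment, estimated with Welch's method.
import numpy as np
from scipy.signal import welch

def alpha_rbr(eeg, fs, band=(8.0, 12.0), total=(0.5, 45.0)):
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
    in_band = (f >= band[0]) & (f <= band[1])
    in_total = (f >= total[0]) & (f <= total[1])
    return np.trapz(pxx[in_band], f[in_band]) / np.trapz(pxx[in_total], f[in_total])

fs = 256.0                                          # Hz, placeholder sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
eeg = rng.normal(0, 1, t.size) + 2 * np.sin(2 * np.pi * 10 * t)   # toy 10 Hz alpha burst
print("alpha RBR:", alpha_rbr(eeg, fs))
```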
Modeling radiation forces acting on TOPEX/Poseidon for precision orbit determination
NASA Technical Reports Server (NTRS)
Marshall, J. A.; Luthcke, S. B.; Antreasian, P. G.; Rosborough, G. W.
1992-01-01
Geodetic satellites such as GEOSAT, SPOT, ERS-1, and TOPEX/Poseidon require accurate orbital computations to support the scientific data they collect. Until recently, gravity field mismodeling was the major source of error in precise orbit definition. However, mission requirements now dictate that mismodeling of the nonconservative forces (solar radiation pressure, albedo and infrared re-radiation, and spacecraft thermal imbalances) produce in combination no more than a 6-cm radial root-mean-square (RMS) error over a 10-day period. This requires the development of nonconservative force models that take the satellite's complex geometry, attitude, and surface properties into account. For TOPEX/Poseidon, a 'box-wing' satellite form was investigated that models the satellite as a combination of flat plates arranged in a box shape with a connected solar array. The nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. In order to test the validity of this concept, 'micro-models' based on finite element analysis of TOPEX/Poseidon were used to generate acceleration histories in a wide variety of orbit orientations. These profiles are then compared to the box-wing model. The results of these simulations and their implication on the ability to precisely model the TOPEX/Poseidon orbit are discussed.
Beckwith, Jonathan G; Chu, Jeffrey J; Greenwald, Richard M
2007-08-01
Although the epidemiology and mechanics of concussion in sports have been investigated for many years, the biomechanical factors that contribute to mild traumatic brain injury remain unclear because of the difficulties in measuring impact events in the field. The purpose of this study was to validate an instrumented boxing headgear (IBH) that can be used to measure impact severity and location during play. The instrumented boxing headgear data were processed to determine linear and rotational acceleration at the head center of gravity, impact location, and impact severity metrics, such as the Head Injury Criterion (HIC) and Gadd Severity Index (GSI). The instrumented boxing headgear was fitted to a Hybrid III (HIII) head form and impacted with a weighted pendulum to characterize accuracy and repeatability. Fifty-six impacts over 3 speeds and 5 locations were used to simulate blows most commonly observed in boxing. A high correlation between the HIII and instrumented boxing headgear was established for peak linear and rotational acceleration (r2 = 0.91), HIC (r2 = 0.88), and GSI (r2 = 0.89). Mean location error was 9.7 +/- 5.2°. Based on this study, the IBH is a valid system for measuring head acceleration and impact location that can be integrated into training and competition.
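For reference, below is a brute-force sketch of the Head Injury Criterion computed from a resultant linear acceleration trace (in g); the 15 ms maximum window and the toy pulse are assumptions, and this is not the authors' processing code.

```python
# Sketch of the Head Injury Criterion: HIC = max over windows of
# (t2 - t1) * [mean acceleration over the window]^2.5, acceleration in g.
import numpy as np

def hic(t, a, max_window=0.015):
    """t in seconds, a in g; brute-force search over window endpoints."""
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg**2.5)
    return best

# Toy impact pulse: half-sine, 80 g peak, 10 ms duration, sampled at 10 kHz.
fs = 10000.0
t = np.arange(0.0, 0.02, 1.0 / fs)
a = np.where(t < 0.010, 80.0 * np.sin(np.pi * t / 0.010), 0.0)
print("HIC:", hic(t, a))
```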
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)
1990-01-01
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
Spectral Domain RF Fingerprinting for 802.11 Wireless Devices
2010-03-01
induce unintentional modulation effects. If these effects (features) are sufficiently unique, it becomes possible to identify a device using its ... Previous AFIT research has demonstrated the effectiveness of RF Fingerprinting using 802.11A signals with 1) spectral correlation on Power Spectral ... SD Intra-manufacturer Classification: Effects of Burst Location Error
Matching the oculomotor drive during head-restrained and head-unrestrained gaze shifts in monkey.
Bechara, Bernard P; Gandhi, Neeraj J
2010-08-01
High-frequency burst neurons in the pons provide the eye velocity command (equivalently, the primary oculomotor drive) to the abducens nucleus for generation of the horizontal component of both head-restrained (HR) and head-unrestrained (HU) gaze shifts. We sought to characterize how gaze and its eye-in-head component differ when an "identical" oculomotor drive is used to produce HR and HU movements. To address this objective, the activities of pontine burst neurons were recorded during horizontal HR and HU gaze shifts. The burst profile recorded on each HU trial was compared with the burst waveform of every HR trial obtained for the same neuron. The oculomotor drive was assumed to be comparable for the pair yielding the lowest root-mean-squared error. For matched pairs of HR and HU trials, the peak eye-in-head velocity was substantially smaller in the HU condition, and the reduction was usually greater than the peak head velocity of the HU trial. A time-varying attenuation index, defined as the difference in HR and HU eye velocity waveforms divided by head velocity [alpha = (E(hr) - E(hu))/H], was computed. The index was variable at the onset of the gaze shift, but it settled at values several times greater than 1. The index then decreased gradually during the movement and stabilized at 1 around the end of the gaze shift. These results imply that substantial attenuation in eye velocity occurs, at least partially, downstream of the burst neurons. We speculate on the potential roles of burst-tonic neurons in the neural integrator and various cell types in the vestibular nuclei in mediating the attenuation in eye velocity in the presence of head movements.
NASA Astrophysics Data System (ADS)
Nättilä, J.; Miller, M. C.; Steiner, A. W.; Kajava, J. J. E.; Suleimanov, V. F.; Poutanen, J.
2017-12-01
Observations of thermonuclear X-ray bursts from accreting neutron stars (NSs) in low-mass X-ray binary systems can be used to constrain NS masses and radii. Most previous work of this type has set these constraints using Planck function fits as a proxy: the models and the data are both fit with diluted blackbody functions to yield normalizations and temperatures that are then compared with each other. For the first time, we here fit atmosphere models of X-ray bursting NSs directly to the observed spectra. We present a hierarchical Bayesian fitting framework that uses current X-ray bursting NS atmosphere models with realistic opacities and relativistic exact Compton scattering kernels as a model for the surface emission. We test our approach against synthetic data and find that for data that are well described by our model, we can obtain robust radius, mass, distance, and composition measurements. We then apply our technique to Rossi X-ray Timing Explorer observations of five hard-state X-ray bursts from 4U 1702-429. Our joint fit to all five bursts shows that the theoretical atmosphere models describe the data well, but there are still some unmodeled features in the spectrum corresponding to a relative error of 1-5% of the energy flux. After marginalizing over this intrinsic scatter, we find that at 68% credibility, the circumferential radius of the NS in 4U 1702-429 is R = 12.4±0.4 km, the gravitational mass is M = 1.9±0.3 M⊙, the distance is 5.1 < D/ kpc < 6.2, and the hydrogen mass fraction is X < 0.09.
DPLL implementation in carrier acquisition and tracking for burst DS-CDMA receivers.
Guan, Yun-feng; Zhang, Zhao-yang; Lai, Li-feng
2003-01-01
This paper presents the architectures, algorithms, and implementation considerations of the digital phase locked loop (DPLL) used for burst-mode packet DS-CDMA receivers. Carrier offset is a rather challenging problem in CDMA systems. According to different applications, different DPLL forms should be adopted to correct different maximum carrier offsets in CDMA systems. One classical DPLL and two novel DPLL forms are discussed in the paper. The acquisition range of carrier offset can be widened by using the two novel DPLL forms without any performance degradation such as longer acquisition time or larger variance of the phase error. The maximum acquisition range is 1/(4T), where T is the symbol period. The design can be implemented by FPGA directly.
A Search for Neutrinos from Fast Radio Bursts with IceCube
NASA Astrophysics Data System (ADS)
Fahey, Samuel; Kheirandish, Ali; Vandenbroucke, Justin; Xu, Donglian
2017-08-01
We present a search for neutrinos in coincidence in time and direction with four fast radio bursts (FRBs) detected by the Parkes and Green Bank radio telescopes during the first year of operation of the complete IceCube Neutrino Observatory (2011 May through 2012 May). The neutrino sample consists of 138,322 muon neutrino candidate events, which are dominated by atmospheric neutrinos and atmospheric muons but also contain an astrophysical neutrino component. Considering only neutrinos detected on the same day as each FRB, zero IceCube events were found to be compatible with the FRB directions within the estimated 99% error radius of the neutrino directions. Based on the non-detection, we present the first upper limits on the neutrino fluence from FRBs.
Spectral Trends of Solar Bursts at Sub-THz Frequencies
NASA Astrophysics Data System (ADS)
Fernandes, L. O. T.; Kaufmann, P.; Correia, E.; Giménez de Castro, C. G.; Kudaka, A. S.; Marun, A.; Pereyra, P.; Raulin, J.-P.; Valio, A. B. M.
2017-01-01
Previous sub-THz studies were derived from single-event observations. We here analyze for the first time spectral trends for a larger collection of sub-THz bursts. The collection consists of a set of 16 moderate to small impulsive solar radio bursts observed at 0.2 and 0.4 THz by the Solar Submillimeter-wave Telescope (SST) in 2012 - 2014 at El Leoncito, in the Argentinean Andes. The peak burst spectra included data from new solar patrol radio telescopes (45 and 90 GHz), and were completed with microwave data obtained by the Radio Solar Telescope Network, when available. We critically evaluate errors and uncertainties in sub-THz flux estimates caused by calibration techniques and the corrections for atmospheric transmission, and introduce a new method to obtain a uniform flux scale criterion for all events. The sub-THz bursts were searched during reported GOES soft X-ray events of class C or larger, for periods common to SST observations. Seven out of 16 events exhibit spectral maxima in the range 5 - 40 GHz with fluxes decaying at sub-THz frequencies (three of them associated to GOES class X, and four to class M). Nine out of 16 events exhibited the sub-THz spectral component. In five of these events, the sub-THz emission fluxes increased with a separate frequency from that of the microwave spectral component (two classified as X and three as M), and four events have only been detected at sub-THz frequencies (three classified as M and one as C). The results suggest that the THz component might be present throughout, with the minimum turnover frequency increasing as a function of the energy of the emitting electrons. The peculiar nature of many sub-THz burst events requires further investigations of bursts that are examined from SST observations alone to better understand these phenomena.
Cánovas, Rosa; García, Rubén Fernández; Cimadevilla, Jose Manuel
2011-01-01
The aim of this study was to examine the influence of the number of cues and cue location in human spatial learning. To assess their importance, subjects performed variants of a virtual task called "The Boxes Room". Participants were trained to locate, in a computer-generated environment with 16 boxes, the rewarded boxes through 8 trials. In experiment I, the number of distal cues available was zero, one, two or the standard arrangement (seven cues). In experiment II, place navigation was compared based on distal landmarks (extra-maze cues placed on the walls) and proximal landmarks (proximal cues placed between the boxes). The results of experiment I demonstrated that one cue in the room is enough to obtain a good performance in the task. Experiment II showed that groups using proximal cues were slower and less accurate than groups using distal cues. In addition, our data suggest that men are better navigators than women, as they found the rewarded boxes sooner and committed fewer errors in both studies. These results indicate that performance can change depending on the number and location of available cues.
Validation of Design and Analysis Techniques of Tailored Composite Structures
NASA Technical Reports Server (NTRS)
Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.
2004-01-01
Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed form analysis based on a theoretical model of a single cell tailored box beam and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed form code is consistently able to predict the wing box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed form analysis. Differences between the closed form code representation and the wing box specimen caused large errors in the twist prediction. The closed form analysis prediction of twist has not been validated from this test.
A Role for the Lateral Dorsal Tegmentum in Memory and Decision Neural Circuitry
Redila, Van; Kinzel, Chantelle; Jo, Yong Sang; Puryear, Corey B.; Mizumori, Sheri J.Y.
2017-01-01
A role for the hippocampus in memory is clear, although the mechanism for its contribution remains a matter of debate. Converging evidence suggests that the hippocampus evaluates the extent to which context-defining features of events occur as expected. The consequence of mismatch, or prediction error, signals from the hippocampus is discussed in terms of their impact on neural circuitry that evaluates the significance of prediction errors: Ventral tegmental area (VTA) dopamine cells burst fire to rewards or cues that predict rewards (Schultz et al., 1997). Although the lateral dorsal tegmentum (LDTg) importantly controls dopamine cell burst firing (Lodge & Grace, 2006), the behavioral significance of the LDTg control is not known. Therefore, we evaluated LDTg functional activity as rats performed a spatial memory task that generates task-dependent reward codes in VTA (Jo et al., 2013; Puryear et al., 2010) and another VTA afferent, the pedunculopontine nucleus (PPTg, Norton et al., 2011). Reversible inactivation of the LDTg significantly impaired choice accuracy. LDTg neurons coded primarily egocentric information in the form of movement velocity, turning behaviors, and behaviors leading up to expected reward locations. A subset of the velocity-tuned LDTg cells also showed high frequency bursts shortly before or after reward encounters, after which they showed tonic elevated firing during consumption of small, but not large, rewards. Cells that fired before reward encounters showed stronger correlations with velocity as rats moved toward, rather than away from, rewarded sites. LDTg neural activity was more strongly regulated by egocentric behaviors than that observed for PPTg or VTA cells that were recorded by Puryear et al. and Norton et al. While PPTg activity was uniquely sensitive to ongoing sensory input, all three regions encoded reward magnitude (although in different ways), reward expectation, and reward encounters. Only VTA encoded reward prediction errors. LDTg may inform VTA about learned goal-directed movement that reflects the current motivational state, and this in turn may guide VTA determination of expected subjective goal values. When combined, it is clear that the LDTg and PPTg provide only a portion of the information that dopamine cells need to assess the value of prediction errors, a process that is essential to future adaptive decisions and switches of cognitive (i.e. memorial) strategies and behavioral responses. PMID:24910282
Influence of measurement error on Maxwell's demon
NASA Astrophysics Data System (ADS)
Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.
2017-06-01
In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that a bad feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ɛ .
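A textbook-style illustration of the role of measurement error (not the specific single-electron-box protocol studied here): the work extractable by feedback is bounded by kT times the mutual information of the measurement, which for a symmetric one-bit measurement with error probability eps shrinks from kT ln 2 toward zero as eps approaches 1/2.

```python
# Illustration of how measurement error reduces the information gained, and
# hence the feedback work bound W_max = k_B * T * I (Sagawa-Ueda-type bound).
import numpy as np

k_B = 1.380649e-23      # J/K
T = 300.0               # K, assumed operating temperature

def mutual_information(eps):
    """Mutual information (nats) of a symmetric binary measurement with error eps."""
    eps = np.clip(eps, 1e-12, 1 - 1e-12)
    return np.log(2) + eps * np.log(eps) + (1 - eps) * np.log(1 - eps)

for eps in (0.0, 0.05, 0.25, 0.5):
    w_max = k_B * T * mutual_information(eps)
    print(f"eps = {eps:0.2f}:  I = {mutual_information(eps):0.3f} nat, "
          f"W_max = {w_max:.2e} J")
```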
X-ray emission from the Pleiades cluster
NASA Technical Reports Server (NTRS)
Agrawal, P. C.; Singh, K. P.; Riegler, G. R.
1983-01-01
The detection and identification of H0344+24, a new X-ray source located in the Pleiades cluster, is reported, based on observations made with HEAO A-2 low-energy detector 1 in the 0.15-3.0-keV energy band in August, 1977. The 90-percent-confidence error box for the new source is centered at 03 h 44.1 min right ascension (1950), near the center star of the 500-star Pleiades cluster, 25-eta-Tau. Since no likely galactic or extragalactic source of X-rays was found in a catalog search of the error-box region, identification of the source with the Pleiades cluster is considered secure. X-ray luminosity of the source is calculated to be about 10 to the 32nd ergs/sec, based on a distance of 125 pc. The X-ray characteristics of the Pleiades stars are discussed, and it is concluded that H0344+24 can best be explained as the integrated X-ray emission of all the B and F stars in the cluster.
NASA Technical Reports Server (NTRS)
Kaaret, P.; Piraino, S.; Halpern, Jules P.; Eracleous, M.; Oliversen, Ronald (Technical Monitor)
2001-01-01
We have discovered an X-ray source, SAX J0635+0533, with a hard spectrum within the error box of the GeV gamma-ray source in Monoceros, 2EG J0635+0521. The unabsorbed flux from the source is 1.2 x 10(exp -11) ergs /sq cm s in the 2-10 keV band. The X-ray spectrum is consistent with a simple power-law model with absorption. The photon index is 1.50 +/- 0.08, and we detect emission out to 40 keV. Optical observations identify a counterpart with a V magnitude of 12.8. The counterpart has broad emission lines and the colors of an early B-type star. If the identification of the X-ray/optical source with the gamma-ray source is correct, then the source would be a gamma-ray-emitting X-ray binary.
Ayaz, Shirazi Muhammad; Kim, Min Young
2018-01-01
In this article, a multi-view registration approach for the 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The multi-view registration approach is categorized into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm with other variants of ICP was performed. The root mean square error for the ICP algorithm to register a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
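The abstract describes coarse registration followed by ICP refinement but does not include the algorithm itself. As a rough illustration of the refinement stage only, here is a minimal point-to-point ICP loop (closest-point correspondences via a k-d tree, rigid transform by the Kabsch/SVD method); it is a generic sketch, not the authors' implementation, and the iteration count and tolerance are arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp_refine(source, target, iters=30, tol=1e-6):
    """Point-to-point ICP: refine a coarse alignment of `source` onto `target`."""
    tree = cKDTree(target)
    src = source.copy()
    prev_rmse = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)            # closest-point correspondences
        R, t = best_fit_rigid(src, target[idx])
        src = src @ R.T + t                     # apply the incremental transform
        rmse = np.sqrt(np.mean(dists ** 2))
        if abs(prev_rmse - rmse) < tol:
            break
        prev_rmse = rmse
    return src, rmse
```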
Biometrics encryption combining palmprint with two-layer error correction codes
NASA Astrophysics Data System (ADS)
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint with two-layer error correction codes is proposed. Firstly, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors. The second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, the encoded keys and the palmprint features are fused together by an XOR operation, and the resulting information is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
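The two-layer (convolutional plus cyclic) coding is not reproduced here, but the XOR binding and release steps described in the abstract follow the familiar fuzzy-commitment pattern. The sketch below uses random bit vectors as stand-ins for the ECC-encoded key and the palmprint feature vector; all sizes and error counts are illustrative assumptions.

```python
import numpy as np

def bind(key_codeword, palm_features):
    """Lock an ECC-encoded key with a binary palmprint feature vector (XOR)."""
    return np.bitwise_xor(key_codeword, palm_features)

def release(helper_data, palm_features_query):
    """Recover the (possibly noisy) codeword from the stored helper data."""
    return np.bitwise_xor(helper_data, palm_features_query)

rng = np.random.default_rng(0)
codeword = rng.integers(0, 2, 256, dtype=np.uint8)   # stands in for the two-layer ECC output
enroll   = rng.integers(0, 2, 256, dtype=np.uint8)   # enrollment palmprint bits
query    = enroll.copy()
query[rng.choice(256, 10, replace=False)] ^= 1       # 10 bit errors at verification time

helper = bind(codeword, enroll)                       # stored on the smart card
noisy_codeword = release(helper, query)
print("residual bit errors to be corrected by the ECC:",
      int(np.count_nonzero(noisy_codeword != codeword)))
```

The XOR algebra guarantees that the residual errors equal the bit differences between the enrollment and query features, which is exactly what the two-layer decoder would then have to correct.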
Integrated source and channel encoded digital communication system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.; Trumpis, B. D.; Udalov, S.
1975-01-01
Various aspects of space shuttle communication systems were studied. The following major areas were investigated: burst error correction for shuttle command channels; performance optimization and design considerations for Costas receivers with and without bandpass limiting; experimental techniques for measuring low level spectral components of microwave signals; and potential modulation and coding techniques for the Ku-band return link. Results are presented.
A low-latency pipeline for GRB light curve and spectrum using Fermi/GBM near real-time data
NASA Astrophysics Data System (ADS)
Zhao, Yi; Zhang, Bin-Bin; Xiong, Shao-Lin; Long, Xi; Zhang, Qiang; Song, Li-Ming; Sun, Jian-Chao; Wang, Yuan-Hao; Li, Han-Cheng; Bu, Qing-Cui; Feng, Min-Zi; Li, Zheng-Heng; Wen, Xing; Wu, Bo-Bing; Zhang, Lai-Yu; Zhang, Yong-Jie; Zhang, Shuang-Nan; Shao, Jian-Xiong
2018-05-01
Rapid response and short time latency are very important for Time Domain Astronomy, such as the observations of Gamma-ray Bursts (GRBs) and electromagnetic (EM) counterparts of gravitational waves (GWs). Based on near real-time Fermi/GBM data, we developed a low-latency pipeline to automatically calculate the temporal and spectral properties of GRBs. With this pipeline, some important parameters can be obtained, such as T90 and fluence, within ∼20 min after the GRB trigger. For ∼90% of GRBs, T90 and fluence are consistent with the GBM catalog results within 2σ errors. This pipeline has been used by the Gamma-ray Bursts Polarimeter (POLAR) and the Insight Hard X-ray Modulation Telescope (Insight-HXMT) to follow up the bursts of interest. For GRB 170817A, the first EM counterpart of GW events detected by Fermi/GBM and INTEGRAL/SPI-ACS, the pipeline gave T90 and spectral information 21 min after the GBM trigger, providing important information for POLAR and Insight-HXMT observations.
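The pipeline's code is not shown in the abstract; as a hedged illustration of one of the quantities it computes, the snippet below estimates T90 in the standard way, as the interval containing 5%-95% of the cumulative background-subtracted counts of a toy light curve. The burst shape and background level are invented for the example and have nothing to do with the GBM data themselves.

```python
import numpy as np

def t90(times, counts, background):
    """T90: interval containing 5%-95% of the cumulative background-subtracted counts."""
    net = np.cumsum(counts - background)
    total = net[-1]
    t05 = times[np.argmax(net >= 0.05 * total)]
    t95 = times[np.argmax(net >= 0.95 * total)]
    return t95 - t05

# toy light curve: flat background plus a decaying burst starting at t = 10 s
t = np.arange(0.0, 60.0, 0.1)
bkg = np.full_like(t, 50.0)
burst = np.where((t > 10) & (t < 30), 400.0 * np.exp(-(t - 10) / 5.0), 0.0)
rng = np.random.default_rng(1)
cts = rng.poisson(bkg + burst).astype(float)
print(f"T90 ~ {t90(t, cts, bkg):.1f} s")
```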
Modeling Nonlinear Errors in Surface Electromyography Due To Baseline Noise: A New Methodology
Law, Laura Frey; Krishnan, Chandramouli; Avin, Keith
2010-01-01
The surface electromyographic (EMG) signal is often contaminated by some degree of baseline noise. It is customary for scientists to subtract baseline noise from the measured EMG signal prior to further analyses based on the assumption that baseline noise adds linearly to the observed EMG signal. The stochastic nature of both the baseline and EMG signal, however, may invalidate this assumption. Alternately, "true" EMG signals may be either minimally or nonlinearly affected by baseline noise. This information is particularly relevant at low contraction intensities when signal-to-noise ratios (SNR) may be lowest. Thus, the purpose of this simulation study was to investigate the influence of varying levels of baseline noise (approximately 2–40% of maximum EMG amplitude) on mean EMG burst amplitude and to assess the best means to account for signal noise. The simulations indicated that baseline noise had minimal effects on mean EMG activity for maximum contractions, but that errors increased nonlinearly with increasing noise levels and decreasing signal amplitudes. Thus, simple baseline noise subtraction resulted in substantial error when estimating mean activity during low-intensity EMG bursts. Conversely, correcting the EMG signal as a nonlinear function of both baseline and measured signal amplitude provided highly accurate estimates of EMG amplitude. This novel nonlinear error modeling approach has potential implications for EMG signal processing, particularly when assessing co-activation of antagonist muscles or small-amplitude contractions where the SNR can be low. PMID:20869716
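The authors' exact nonlinear correction model is not given in the abstract. A common way to see why linear subtraction fails is that, for independent zero-mean signals, amplitudes add in quadrature rather than linearly, so a power-based (quadratic) correction tracks the true amplitude much better at low SNR. The following sketch only demonstrates that generic point with synthetic Gaussian "EMG" and noise; it is not the paper's model, and all amplitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
fs, dur = 1000, 5.0                      # sampling rate (Hz) and duration (s)
n = int(fs * dur)
noise_rms = 0.05                         # baseline noise amplitude (arbitrary units)

for true_rms in (0.02, 0.05, 0.2, 1.0):  # low- to high-intensity "EMG bursts"
    emg = rng.normal(0.0, true_rms, n)       # idealized EMG burst
    baseline = rng.normal(0.0, noise_rms, n) # additive baseline noise
    measured = emg + baseline

    meas_rms = np.sqrt(np.mean(measured ** 2))
    linear_corr = meas_rms - noise_rms                             # simple subtraction
    quad_corr = np.sqrt(max(meas_rms ** 2 - noise_rms ** 2, 0.0))  # power (quadratic) subtraction

    print(f"true {true_rms:5.2f} | measured {meas_rms:5.3f} | "
          f"linear {linear_corr:6.3f} | quadratic {quad_corr:5.3f}")
```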
NASA Technical Reports Server (NTRS)
Lee, Yunha; Adams, P. J.
2012-01-01
This study develops more computationally efficient versions of the TwO-Moment Aerosol Sectional (TOMAS) microphysics algorithms, collectively called Fast TOMAS. Several methods for speeding up the algorithm were attempted, but only reducing the number of size sections was adopted. Fast TOMAS models, coupled to the GISS GCM II-prime, require a new coagulation algorithm with less restrictive size resolution assumptions but only minor changes in other processes. Fast TOMAS models have been evaluated in a box model against analytical solutions of coagulation and condensation and in a 3-D model against the original TOMAS (TOMAS-30) model. Condensation and coagulation in the Fast TOMAS models agree well with the analytical solution but show slightly more bias than the TOMAS-30 box model. In the 3-D model, errors resulting from decreased size resolution in each process (i.e., emissions, cloud processing/wet deposition, microphysics) are quantified in a series of model sensitivity simulations. Errors resulting from lower size resolution in condensation and coagulation, defined as the microphysics error, affect number and mass concentrations by only a few percent. The microphysics error in CN70/CN100 (number concentrations of particles larger than 70/100 nm diameter), proxies for cloud condensation nuclei, ranges from -5% to 5% in most regions. The largest errors are associated with decreasing the size resolution in the cloud processing/wet deposition calculations, defined as cloud-processing error, and range from -20% to 15% in most regions for CN70/CN100 concentrations. Overall, the Fast TOMAS models increase the computational speed by 2 to 3 times with only small numerical errors stemming from condensation and coagulation calculations when compared to TOMAS-30. The faster versions of the TOMAS model allow for the longer, multi-year simulations required to assess aerosol effects on cloud lifetime and precipitation.
Terrestrial Gamma Flashes at Ground Level - TETRA-II Instrumentation
NASA Astrophysics Data System (ADS)
Pleshinger, D. J.; Adams, C.; Al-Nussirat, S.; Bai, S.; Banadaki, Y.; Bitzer, P. M.; Cherry, M. L.; Hoffmann, J.; Khosravi, E.; Legault, M.; Orang, M.; Rodriguez, R.; Smith, D.; Trepanier, J. C.; Sunda-Meya, A.; Zimmer, N.
2017-12-01
The TGF and Energetic Thunderstorm Rooftop Array (TETRA-II) consists of an array of BGO scintillators to detect bursts of gamma rays from thunderstorms. TETRA-II will have approximately an order of magnitude greater sensitivity for individual flashes than TETRA-I, an original array of NaI scintillators at Louisiana State University that detected 37 millisecond-scale bursts of gamma rays from 2010-2015. The BGO scintillators increase the energy range of particles detected to 10 MeV and are placed in 20 detector boxes, each with 1180 cm3 of BGO, at 4 separate locations: the campus of Louisiana State University in Baton Rouge, Louisiana; the campus of the University of Puerto Rico at Utuado, Puerto Rico; the Centro Nacional de Metrologia de Panama (CENAMEP) in Panama City, Panama; and the Severe Weather Institute and Radar & Lightning Laboratories in Huntsville, Alabama. The data are read out with 12 microsecond resolution by National Instruments PCIe 6351 high-speed data acquisition cards, with timestamps determined from a 20 MHz clock and a GPS board recording a pulse per second. Details of the array and its instrumentation, along with an overview of initial results, will be presented.
Noise screen for attitude control system
NASA Technical Reports Server (NTRS)
Rodden, John J. (Inventor); Stevens, Homer D. (Inventor); Hong, David P. (Inventor); Hirschberg, Philip C. (Inventor)
2002-01-01
An attitude control system comprising a controller and a noise screen device coupled to the controller. The controller is adapted to control an attitude of a vehicle carrying an actuator system that is adapted to pulse in metered bursts in order to generate a control torque to control the attitude of the vehicle in response to a control pulse. The noise screen device is adapted to generate a noise screen signal in response to the control pulse that is generated when an input attitude error signal exceeds a predetermined deadband attitude level. The noise screen signal comprises a decaying offset signal that when combined with the attitude error input signal results in a net attitude error input signal away from the predetermined deadband level to reduce further control pulse generation.
Greenhalgh, T
1997-08-16
It is possible to be seriously misled by taking the statistical competence (and/or the intellectual honesty) of authors for granted. Some common errors committed (deliberately or inadvertently) by the authors of papers are given in the final box.
A software control system for the ACTS high-burst-rate link evaluation terminal
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Daugherty, Elaine S.
1991-01-01
Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished by using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal and data display. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
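As a rough sketch of the first objective (error-burst and good-data-gap statistics), the snippet below computes run lengths of error and error-free bytes from a stream of per-byte error flags; the 2% error rate is an arbitrary example, and real measurements would come from the read-channel hardware described above.

```python
import numpy as np
from itertools import groupby

def burst_gap_stats(error_flags):
    """Run-length statistics of error bursts (1s) and good-data gaps (0s)."""
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        (bursts if flag else gaps).append(sum(1 for _ in run))
    return np.array(bursts), np.array(gaps)

rng = np.random.default_rng(7)
flags = (rng.random(10_000) < 0.02).astype(int)     # toy stream with a 2% raw byte-error rate
bursts, gaps = burst_gap_stats(flags)
print(f"{bursts.size} bursts, mean burst length {bursts.mean():.2f} bytes; "
      f"mean good-data gap {gaps.mean():.1f} bytes")
```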
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto
Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred of those events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2σ error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.
Brinker, Titus Josef; Rudolph, Stefanie; Richter, Daniela; von Kalle, Christof
2018-05-11
This article describes the DataBox project which offers a perspective of a new health data management solution in Germany. DataBox was initially conceptualized as a repository of individual lung cancer patient data (structured and unstructured). The patient is the owner of the data and is able to share his or her data with different stakeholders. Data is transferred, displayed, and stored online, but not archived. In the long run, the project aims at replacing the conventional method of paper- and storage-device-based handling of data for all patients in Germany, leading to better organization and availability of data which reduces duplicate diagnostic procedures, treatment errors, and enables the training as well as usage of artificial intelligence algorithms on large datasets. ©Titus Josef Brinker, Stefanie Rudolph, Daniela Richter, Christof von Kalle. Originally published in JMIR Cancer (http://cancer.jmir.org), 11.05.2018.
Design of a digital voice data compression technique for orbiter voice channels
NASA Technical Reports Server (NTRS)
1975-01-01
Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at .001 and .01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.
Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas
NASA Technical Reports Server (NTRS)
Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.
1990-01-01
The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
The timing of bud burst and its effect on tree growth.
Rötzer, T; Grote, R; Pretzsch, H
2004-02-01
A phenology model for estimating the timings of bud burst--one of the most influential phenological phases for the simulation of tree growth--is presented in this study. The model calculates the timings of the leafing of beech (Fagus sylvatica L.) and oak (Quercus robur L.) and the May shoot of Norway spruce (Picea abies L.) and Scots pine (Pinus sylvestris L.) on the basis of the daily maximum temperature. The data for parameterisation and validation of the model have been taken from 40 climate and 120 phenological stations in southern Germany with time series for temperature and bud burst of up to 30 years. The validation of the phenology module by means of an independent data set showed correlation coefficients for comparisons between observed and simulated values of 54% (beech), 55% (oak), 59% (spruce) and 56% (pine) with mean absolute errors varying from 4.4 days (spruce) to 5.0 days (pine). These results correspond well with the results of other--often more complex--phenology models. After the phenology module had been implemented in the tree-growth model BALANCE, the growth of a mixed forest stand with the former static and the new dynamic timings for the bud burst was simulated. The results of the two simulation runs showed that phenology has to be taken into account when simulating forest growth, particularly in mixed stands.
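The paper's parameterisation is not reproduced in the abstract; as a generic illustration of a daily-maximum-temperature-driven phenology model, the sketch below accumulates degree-days above a base temperature and reports the day the sum crosses a threshold. The base temperature, threshold, and synthetic temperature series are illustrative assumptions, not the fitted values for beech, oak, spruce, or pine.

```python
import numpy as np

def predict_bud_burst(tmax, start_doy=1, t_base=5.0, threshold=150.0):
    """Return the day of year when forcing (degree-days above t_base, accumulated
    from start_doy using daily maximum temperature) first reaches the threshold."""
    forcing = np.maximum(np.asarray(tmax[start_doy - 1:]) - t_base, 0.0)
    cumulative = np.cumsum(forcing)
    idx = np.argmax(cumulative >= threshold)
    if cumulative[idx] < threshold:
        return None                      # threshold never reached this year
    return start_doy + int(idx)

# synthetic year of daily maximum temperatures (deg C), purely for illustration
doy = np.arange(1, 366)
tmax = 10.0 + 12.0 * np.sin(2 * np.pi * (doy - 105) / 365)
print("predicted bud burst on day of year:", predict_bud_burst(tmax))
```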
NASA Astrophysics Data System (ADS)
Tan, Ying; Dai, Daoxin
2018-05-01
Silicon microring resonators (MRRs) are very popular for many applications because of the advantages of footprint compactness, easy scalability, and functional versatility. Ultra-compact silicon MRRs with box-like spectral responses are realized with a very large free-spectral range (FSR) by introducing bent directional couplers. The measured box-like spectral response has an FSR of >30 nm. Permanent wavelength-alignment techniques for MRRs are also presented, including the laser-induced local-oxidation technique as well as the local-etching technique. With these techniques, one can finely control the permanent wavelength shift, which is also large enough to compensate for the random wavelength variation due to random fabrication errors.
Greenhalgh, T.
1997-01-01
It is possible to be seriously misled by taking the statistical competence (and/or the intellectual honesty) of authors for granted. Some common errors committed (deliberately or inadvertently) by the authors of papers are given in the final box. PMID:9277611
Application of Uniform Measurement Error Distribution
2016-03-18
Point of contact: subrata.sanyal@navy.mil, Measurement Science & Engineering Department Operations (Code MS02), NSWC Corona Division, P.O. Box 5000, Corona, CA 92878-5000. Report NSWCCORDIV/RDTR-2016-005, March 18, 2016. DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited (NSWC Corona Public Release Control Number 16-005).
Candidate counterparts to the soft gamma-ray flare in the direction of LS I +61 303
NASA Astrophysics Data System (ADS)
Muñoz-Arjonilla, A. J.; Martí, J.; Combi, J. A.; Luque-Escamilla, P.; Sánchez-Sutil, J. R.; Zabalza, V.; Paredes, J. M.
2009-04-01
Context: A short duration burst reminiscent of a soft gamma-ray repeater/anomalous X-ray pulsar behaviour was detected in the direction of LS I +61 303 by the Swift satellite. While the association with this well known gamma-ray binary is likely, a different origin cannot be excluded. Aims: We explore the error box of this unexpected flaring event and establish the radio, near-infrared and X-ray sources in our search for any peculiar alternative counterpart. Methods: We carried out a combined analysis of archive Very Large Array radio data of LS I +61 303 sensitive to both compact and extended emission. We also reanalysed previous near infrared observations with the 3.5 m telescope of the Centro Astronómico Hispano Alemán and X-ray observations with the Chandra satellite. Results: Our deep radio maps of the LS I +61 303 environment represent a significant advancement on previous work and 16 compact radio sources in the LS I +61 303 vicinity are detected. For some detections, we also identify near infrared and X-ray counterparts. Extended emission features in the field are also detected and confirmed. The possible connection of some of these sources with the observed flaring event is considered. Based on these data, we are unable to claim a clear association between the Swift-BAT flare and any of the sources reported here. However, this study represents the most sophisticated attempt to determine possible alternative counterparts other than LS I +61 303.
Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline
NASA Technical Reports Server (NTRS)
Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore
2017-01-01
We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1 deg to 30.3 deg. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N(sub net)), reaching an 85% and 95% match between the reconstructed and actual waveform above S/N(sub net) approx. = 20 and S/N(sub net) approx. = 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is only limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.
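For readers unfamiliar with the "match" quoted above, a simplified version of the figure of merit is the normalized inner product between two waveforms; the snippet below computes it for a toy sine-Gaussian and a noisy copy, assuming white noise and no time or phase maximization. The real analysis uses noise-weighted inner products in the frequency domain, so this is only a schematic illustration.

```python
import numpy as np

def match(h1, h2):
    """Normalized overlap between two time-series waveforms
    (white-noise approximation, no time/phase maximization)."""
    inner = lambda a, b: np.dot(a, b)
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

t = np.linspace(0, 1, 4096)
injected = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 100 * t)    # sine-Gaussian
reconstructed = injected + np.random.default_rng(6).normal(0, 0.05, t.size)  # imperfect reconstruction
print(f"match = {match(injected, reconstructed):.3f}")
```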
NASA Technical Reports Server (NTRS)
Younes, G.; Kouveliotou, C.; Grefenstette, B. W.; Tomsick, J. A.; Tennant, A.; Finger, M. H.; Furst, F.; Pottschmidt, K.; Bhalerao, V.; Boggs, S. E.;
2015-01-01
We report on a 10 ks simultaneous Chandra/High Energy Transmission Grating (HETG)-Nuclear Spectroscopic Telescope Array (NuSTAR) observation of the Bursting Pulsar, GRO J1744-28, during its third detected outburst since discovery and after nearly 18 yr of quiescence. The source is detected up to 60 keV with an Eddington persistent flux level. Seven bursts, followed by dips, are seen with Chandra, three of which are also detected with NuSTAR. Timing analysis reveals a slight increase in the persistent emission pulsed fraction with energy (from 10% to 15%) up to 10 keV, above which it remains constant. The 0.5-70 keV spectra of the persistent and dip emission are the same within errors and well described by a blackbody (BB), a power-law (PL) with an exponential rolloff, a 10 keV feature, and a 6.7 keV emission feature, all modified by neutral absorption. Assuming that the BB emission originates in an accretion disk, we estimate its inner (magnetospheric) radius to be about 4 x 10(exp 7) cm, which translates to a surface dipole field B approximately 9 x 10(exp 10) G. The Chandra/HETG spectrum resolves the 6.7 keV feature into (quasi-)neutral and highly ionized Fe XXV and Fe XXVI emission lines. XSTAR modeling shows these lines to also emanate from a truncated accretion disk. The burst spectra, with a peak flux more than an order of magnitude higher than Eddington, are well fit with a PL with an exponential rolloff and a 10 keV feature, with similar fit values compared to the persistent and dip spectra. The burst spectra lack a thermal component and any Fe features. Anisotropic (beamed) burst emission would explain both the lack of the BB and any Fe components.
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
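The specific GSFC standard codes are only referenced, not specified, in the abstract. As a generic example of the class of convolutional codes mentioned, here is a rate-1/2, constraint-length-3 encoder with the common (7, 5) octal generators; this is purely illustrative and is not the standard's code.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder (constraint length k, generators g1/g2 as bit masks)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)    # k-bit shift register
        out.append(bin(state & g1).count("1") % 2)      # parity against generator 1
        out.append(bin(state & g2).count("1") % 2)      # parity against generator 2
    return out

message = [1, 0, 1, 1, 0, 0]          # example information bits
print(conv_encode(message + [0, 0]))  # append k-1 zeros to flush the register
```

A Viterbi decoder, as mentioned in the abstract, would recover the message from the (possibly corrupted) output sequence by maximum-likelihood path search through the encoder's trellis.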
Root-sum-square structural strength verification approach
NASA Technical Reports Server (NTRS)
Lee, Henry M.
1994-01-01
Utilizing a proposed fixture design or some variation thereof, this report presents a verification approach to strength test space flight payload components, electronics boxes, mechanisms, lines, fittings, etc., which traditionally do not lend themselves to classical static loading. The fixture, through use of ordered Euler rotation angles derived herein, can be mounted on existing vibration shakers and can provide an innovative method of applying single axis flight load vectors. The versatile fixture effectively loads protoflight or prototype components in all three axes simultaneously by use of a sinusoidal burst of desired magnitude at less than one-third the first resonant frequency. Cost savings along with improved hardware confidence are shown. The end product is an efficient way to verify experiment hardware for both random vibration and strength.
Inference of median difference based on the Box-Cox model in randomized clinical trials.
Maruo, K; Isogawa, N; Gosho, M
2015-05-10
In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
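The authors' full covariance-adjusted procedure is not shown here; the sketch below only illustrates the basic idea, assuming SciPy is acceptable: Box-Cox transform the skewed outcomes, compare groups on the transformed scale, and back-transform the transformed-scale means to obtain medians (and their difference) on the original scale. The lognormal data and sample sizes are synthetic.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
control = rng.lognormal(mean=1.0, sigma=0.6, size=120)   # skewed outcome, group A
treated = rng.lognormal(mean=1.2, sigma=0.6, size=120)   # skewed outcome, group B

# estimate a common Box-Cox lambda from the pooled data, then transform each group
_, lam = stats.boxcox(np.concatenate([control, treated]))
z_ctrl = stats.boxcox(control, lmbda=lam)
z_trt = stats.boxcox(treated, lmbda=lam)

t_stat, p_val = stats.ttest_ind(z_trt, z_ctrl)            # hypothesis test on the transformed scale
# back-transform the transformed-scale means to medians on the original scale
median_ctrl = inv_boxcox(z_ctrl.mean(), lam)
median_trt = inv_boxcox(z_trt.mean(), lam)
print(f"lambda = {lam:.2f}, p = {p_val:.3g}, "
      f"estimated median difference = {median_trt - median_ctrl:.2f}")
```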
A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes.
Vogl, Gregory W; Weiss, Brian A; Donmez, M Alkan
2015-01-01
A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a 'sensor box' to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost-effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality.
Aids for the Study of Electromagnetic Blackout
1975-02-25
[Figure-list fragment: Figs. 5-23B and 5-23C, one-way absorption due to gamma rays, 1-MT burst at 20 km; Fig. 6-13, multiplying factors for refraction and range errors, plotted against integral values of the exponent n.]
Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid.
Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko
2016-03-08
Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film-based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers' abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one-dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers' breathing patterns, the mean tracking error range was 0.78-1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient.
Model identification using stochastic differential equation grey-box models in diabetes.
Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik
2013-03-01
The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
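The identifiable glucoregulatory model itself is not given in the abstract. To show the defining feature of an SDE grey-box model, namely separate system (diffusion) noise and measurement noise, here is a minimal Euler-Maruyama simulation of a hypothetical one-compartment glucose model; all parameter values are invented for illustration and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-compartment glucose model:
#   dG = -k * (G - G_b) dt + sigma dW      (system / diffusion noise)
#   y_i = G(t_i) + e_i,  e_i ~ N(0, s^2)   (measurement noise)
k, G_b, sigma, s = 0.05, 5.5, 0.15, 0.2     # illustrative parameter values
dt, n_steps = 1.0, 300                      # time step (min) and number of steps

G = np.empty(n_steps)
G[0] = 9.0                                  # initial glucose (mmol/L)
for i in range(1, n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    G[i] = G[i - 1] - k * (G[i - 1] - G_b) * dt + sigma * dW   # Euler-Maruyama step

y = G + rng.normal(0.0, s, n_steps)         # noisy observations of the latent state
print(f"latent glucose at end: {G[-1]:.2f}, last observation: {y[-1]:.2f}")
```

Splitting the residual into the diffusion term and the observation term is what lets an SDE-GB attribute lack of fit to either model deficiency or measurement error, as the abstract describes.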
Physiology and pathology of saccades and gaze holding.
Shaikh, Aasef G; Ghasia, Fatema F
2013-01-01
Foveation is the fundamental requirement for clear vision. Saccades rapidly shift the gaze to the target of interest, while gaze holding ensures foveation of the desired object. We will review the pertinent physiology of saccades and gaze holding and their pathophysiology leading to saccadic oscillations, slow saccades, saccadic dysmetria, and nystagmus. Motor commands for saccades are generated at multiple levels of the neuraxis. The frontal and parietal eye fields send saccadic commands to the superior colliculus. The latter then projects to the brain-stem saccadic burst generator. The brain-stem burst generators guarantee an optimum signal to ensure rapid saccadic velocity, while the neural integrator, by mathematically integrating the saccadic pulse, facilitates stable gaze holding. Reciprocal innervations that ensure rapid saccadic velocity are prone to inherent instability, leading to saccadic oscillations. In contrast, suboptimal function of the burst generators causes slow saccades. Impaired error correction, either at the cerebellum or the inferior olive, leads to impaired saccade adaptation and ultimately saccadic dysmetria and oculopalatal tremor. Impairment in the function of the neural integrator causes nystagmus. The neurophysiology of saccades, gaze holding, and their deficits is well recognized. These principles can be implemented to define novel therapeutic and rehabilitation approaches.
NASA Astrophysics Data System (ADS)
Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix
2017-07-01
We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.
Mallavarapu, Suma; Stoinski, Tara S; Perdue, Bonnie M; Maple, Terry L
2014-10-01
The nonadjacent double invisible displacement task has been used to test for the ability of different species to mentally represent the unperceived trajectory of an object. The task typically requires three occluders/boxes in a linear array and involves hiding an object in one of two nonadjacent boxes visited in succession. Previous research indicates that 19-, 26-, and 30-month-old children and various nonhuman species cannot solve these displacements. It has been hypothesized that this is because individuals are unable to inhibit searching in the unbaited center box that was never visited by the experimenter. It has been suggested that presenting the task in a large-scale locomotor space might allow individuals to overcome this inhibition problem. In the present study, we tested orangutans on adjacent and nonadjacent double invisible displacements with the traditional setup (experiment 1) and in locomotor space with boxes placed 1.22 m apart (experiment 2). In both experiments, subjects were able to solve adjacent, but not nonadjacent, trials. The failure on nonadjacent trials appeared to be because of an inability to inhibit sequential search on the second choice as well as because of a large number of first-choice errors (directly choosing an incorrect box). The current results support previous findings that orangutans exhibit some constraints when representing the invisible trajectory of objects.
NASA Astrophysics Data System (ADS)
Kirchner, J. W.
2016-01-01
Methods for estimating mean transit times from chemical or isotopic tracers (such as Cl-, δ18O, or δ2H) commonly assume that catchments are stationary (i.e., time-invariant) and homogeneous. Real catchments are neither. In a companion paper, I showed that catchment mean transit times estimated from seasonal tracer cycles are highly vulnerable to aggregation error, exhibiting strong bias and large scatter in spatially heterogeneous catchments. I proposed the young water fraction, which is virtually immune to aggregation error under spatial heterogeneity, as a better measure of transit times. Here I extend this analysis by exploring how nonstationarity affects mean transit times and young water fractions estimated from seasonal tracer cycles, using benchmark tests based on a simple two-box model. The model exhibits complex nonstationary behavior, with striking volatility in tracer concentrations, young water fractions, and mean transit times, driven by rapid shifts in the mixing ratios of fluxes from the upper and lower boxes. The transit-time distribution in streamflow becomes increasingly skewed at higher discharges, with marked increases in the young water fraction and decreases in the mean water age, reflecting the increased dominance of the upper box at higher flows. This simple two-box model exhibits strong equifinality, which can be partly resolved by simple parameter transformations. However, transit times are primarily determined by residual storage, which cannot be constrained through hydrograph calibration and must instead be estimated by tracer behavior. Seasonal tracer cycles in the two-box model are very poor predictors of mean transit times, with typical errors of several hundred percent. However, the same tracer cycles predict time-averaged young water fractions (Fyw) within a few percent, even in model catchments that are both nonstationary and spatially heterogeneous (although they may be biased by roughly 0.1-0.2 at sites where strong precipitation seasonality is correlated with precipitation tracer concentrations). Flow-weighted fits to the seasonal tracer cycles accurately predict the flow-weighted average Fyw in streamflow, while unweighted fits to the seasonal tracer cycles accurately predict the unweighted average Fyw. Young water fractions can also be estimated separately for individual flow regimes, again with a precision of a few percent, allowing direct determination of how shifts in a catchment's hydraulic regime alter the fraction of water reaching the stream by fast flowpaths. One can also estimate the chemical composition of idealized "young water" and "old water" end-members, using relationships between young water fractions and solute concentrations across different flow regimes. These results demonstrate that mean transit times cannot be estimated reliably from seasonal tracer cycles and that, by contrast, the young water fraction is a robust and useful metric of transit times, even in catchments that exhibit strong nonstationarity and heterogeneity.
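As a hedged illustration of the amplitude-ratio idea behind the young water fraction, the sketch below fits one-year sinusoids to synthetic precipitation and streamflow tracer series and takes Fyw as the ratio of the fitted amplitudes, ignoring phase, flow weighting, and uncertainty, which the paper treats carefully. The synthetic δ18O series and amplitudes are invented for the example.

```python
import numpy as np

def seasonal_amplitude(t_years, conc):
    """Least-squares amplitude of a one-year sinusoid fitted to a tracer time series."""
    X = np.column_stack([np.sin(2 * np.pi * t_years),
                         np.cos(2 * np.pi * t_years),
                         np.ones_like(t_years)])
    a, b, _ = np.linalg.lstsq(X, conc, rcond=None)[0]
    return np.hypot(a, b)

# synthetic weekly data: the streamflow cycle is damped relative to precipitation
t = np.arange(0, 5, 1 / 52)
rng = np.random.default_rng(4)
precip_d18O = -10 + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
stream_d18O = -10 + 0.9 * np.sin(2 * np.pi * t - 0.3) + rng.normal(0, 0.2, t.size)

f_yw = seasonal_amplitude(t, stream_d18O) / seasonal_amplitude(t, precip_d18O)
print(f"estimated young water fraction: {f_yw:.2f}")
```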
Takács, Stephen; Kowalski, Pawel; Gries, Gerhard
2016-10-01
Rats are often neophobic and thus do not readily enter trap boxes which are mandated in rodent management to help reduce the risk of accidental poisoning or capture of non-target animals. Working with brown rats, Rattus norvegicus, as a model species, our overall objective was to test whether sound cues from pups could be developed as a means to enhance captures of rats in trap boxes. Recording vocalizations from three-day-old pups after removal from their natal nest with both sonic and ultrasonic microphones revealed frequency components in the sonic range (1.8-7.5 kHz) and ultrasonic range (18-24 kHz, 33-55 kHz, 60-96 kHz). In two-choice laboratory bioassays, playback recordings of these vocalizations induced significant phonotactic and arrestment responses by juvenile, subadult and adult female and male rats. The effectiveness of engineered 'synthetic' rat pup sounds was dependent upon their frequency components, sound durations and the sound delivery system. Unlike other speakers, a piezoelectric transducer emitting sound bursts of 21 kHz with a 63-KHz harmonic, and persisting for 20-300 ms, proved highly effective in attracting and arresting adult female rats. In a field experiment, a battery-powered electronic device fitted with a piezoelectric transducer and driven by an algorithm that randomly generated sound cues resembling those recorded from rat pups and varying in fundamental frequency (19-23 kHz), duration (20-300 ms) and intermittent silence (300-5000 ms) significantly enhanced captures of rats in trap boxes baited with a food lure and soiled bedding material of adult female rats. Our study provides proof of concept that rat-specific sound cues or signals can be effectively reproduced and deployed as a means to enhance capture of wild rats. © 2016 Society of Chemical Industry. © 2016 Society of Chemical Industry.
Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.
Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios
2017-03-01
Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.
Steam-load-forecasting technique for central-heating plants. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, M.C.; Carnahan, J.V.
Because boilers generally are most efficient at full loads, the Army could achieve significant savings by running fewer boilers at high loads rather than more boilers at low loads. A reliable load prediction technique could help ensure that only those boilers required to meet demand are on line. This report presents the results of an investigation into the feasibility of forecasting heat plant steam loads from historical patterns and weather information. Using steam flow data collected at Fort Benjamin Harrison, IN, a Box-Jenkins transfer function model with an acceptably small prediction error was initially identified. Initial investigation of forecast model development appeared successful. Dynamic regression methods using actual ambient temperatures yielded the best results. Box-Jenkins univariate models' results appeared slightly less accurate. Since temperature information was not needed for model building and forecasting, however, it is recommended that Box-Jenkins models be considered prime candidates for load forecasting due to their simpler mathematics.
Bijsterbosch, Janine D; Lee, Kwang-Hyuk; Hunter, Michael D; Tsoi, Daniel T; Lankappa, Sudheer; Wilkinson, Iain D; Barker, Anthony T; Woodruff, Peter W R
2011-05-01
Our ability to interact physically with objects in the external world critically depends on temporal coupling between perception and movement (sensorimotor timing) and swift behavioral adjustment to changes in the environment (error correction). In this study, we investigated the neural correlates of the correction of subliminal and supraliminal phase shifts during a sensorimotor synchronization task. In particular, we focused on the role of the cerebellum because this structure has been shown to play a role in both motor timing and error correction. Experiment 1 used fMRI to show that the right cerebellar dentate nucleus and primary motor and sensory cortices were activated during regular timing and during the correction of subliminal errors. The correction of supraliminal phase shifts led to additional activations in the left cerebellum and right inferior parietal and frontal areas. Furthermore, a psychophysiological interaction analysis revealed that supraliminal error correction was associated with enhanced connectivity of the left cerebellum with frontal, auditory, and sensory cortices and with the right cerebellum. Experiment 2 showed that suppression of the left but not the right cerebellum with theta burst TMS significantly affected supraliminal error correction. These findings provide evidence that the left lateral cerebellum is essential for supraliminal error correction during sensorimotor synchronization.
SKread predicts handwriting performance in patients with low vision.
Downes, Ken; Walker, Laura L; Fletcher, Donald C
2015-06-01
To assess whether performance on the Smith-Kettlewell Reading (SKread) test is a reliable predictor of handwriting performance in patients with low vision. Cross-sectional study. Sixty-six patients at their initial low-vision rehabilitation evaluation. The patients completed all components of a routine low-vision appointment including logMAR acuity, performed the SKread test, and performed a handwriting task. Patients were timed while performing each task and their accuracy was recorded. The handwriting task was performed by having patients write 5 5-letter words into sets of boxes where each letter is separated by a box. The boxes were 15 × 15 mm, and accuracy was scored with 50 points possible from 25 letters: 1 point for each letter within the confines of a box and 1 point if the letter was legible. Correlation analysis was then performed. Median age of participants was 84 (range 54-97) years. Fifty-seven patients (86%) had age-related macular degeneration or some other maculopathy, whereas 9 patients (14%) had visual impairment from media opacity or neurologic impairment. Median Early Treatment Diabetic Retinopathy Study acuity was 20/133 (range 20/22 to 20/1000), and median logMAR acuity was 0.82 (range 0.04-1.70). SKread errors per block correlated with logMAR acuity (r = 0.6), and SKread time per block correlated with logMAR acuity (r = 0.51). SKread errors per block correlated with handwriting task time/accuracy ratio (r = 0.61). SKread time per block correlated with handwriting task time/accuracy ratio (r = 0.7). LogMAR acuity score correlated with handwriting task time/accuracy ratio (r = 0.42). All p values were < 0.01. SKread scores predict handwriting performance in patients with low vision better than logMAR acuity. Copyright © 2015 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
75 FR 26151 - Proposed Revision of Class E Airspace; Kulik Lake, AK
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
...-0270 Airspace Docket No. 10-AAL-8] Proposed Revision of Class E Airspace; Kulik Lake, AK AGENCY... action proposes to revise Class E airspace at Kulik Lake, AK. This action would correct an error in the... Administration, 222 West 7th Avenue, Box 14, Anchorage, AK 99513-7587. FOR FURTHER INFORMATION CONTACT: Gary Rolf...
FEC combined burst-modem for business satellite communications use
NASA Astrophysics Data System (ADS)
Murakami, K.; Miyake, M.; Fuji, T.; Moritani, Y.; Fujino, T.
The authors recently developed two types of FEC (forward error correction) combined modems, both applicable to low-data-rate and intermediate-data-rate TDMA international satellite communications. Each FEC combined modem consists of a QPSK (quadrature phase-shift keyed) modem, a convolutional encoder, and a Viterbi decoder. Both modems are designed taking into consideration the fast acquisition of the carrier and bit timing and the low cycle slipping rate in the low-carrier-to-noise-ratio environment. Attention is paid to designing the Viterbi decoder to be operated in a situation in which successive bursts may have different coding rates according to the punctured coding scheme. The overall scheme of the FEC combined modems is presented, and some of the key technologies applied in developing them are outlined. The hardware implementation and experimentation are also discussed. The measured data are compared with results of theoretical analysis, and relatively good performances are obtained.
NASA Technical Reports Server (NTRS)
Shahidi, Anoosh K.; Schlegelmilch, Richard F.; Petrik, Edward J.; Walters, Jerry L.
1991-01-01
A software application to assist end-users of the link evaluation terminal (LET) for satellite communications is being developed. This software application incorporates artificial intelligence (AI) techniques and will be deployed as an interface to LET. The high burst rate (HBR) LET provides 30 GHz transmitting/20 GHz receiving (220/110 Mbps) capability for wideband communications technology experiments with the Advanced Communications Technology Satellite (ACTS). The HBR LET can monitor and evaluate the integrity of the HBR communications uplink and downlink to the ACTS satellite. The uplink HBR transmission is performed by bursting the bit-pattern as a modulated signal to the satellite. The HBR LET can determine the bit error rate (BER) under various atmospheric conditions by comparing the transmitted bit pattern with the received bit pattern. An algorithm for power augmentation will be applied to enhance the system's BER performance at reduced signal strength caused by adverse conditions.
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
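The abstract's error scheme relies on having two independent estimates of the same monthly mean. A minimal sketch of that idea, with invented numbers: for independent, equal-variance morning and afternoon estimates, the variance of their average is one quarter of the variance of their difference, so half the standard deviation of (AM − PM) estimates the random error of the combined mean. This is only a generic illustration, not the study's actual processing.

```python
import numpy as np

rng = np.random.default_rng(5)
true_rain = 3.0                                   # "true" monthly mean rain rate (mm/day) in a box
n_boxes = 500
am = true_rain + rng.normal(0, 1.5, n_boxes)      # morning-orbit estimates, independent errors
pm = true_rain + rng.normal(0, 1.5, n_boxes)      # afternoon-orbit estimates

monthly_mean = 0.5 * (am + pm)
# For independent, equal-variance estimates, var(mean) = var(am - pm) / 4
random_error = 0.5 * np.std(am - pm)
print(f"combined mean = {monthly_mean.mean():.2f} mm/day, "
      f"estimated random error per box = {random_error:.2f} mm/day "
      f"({100 * random_error / true_rain:.0f}%)")
```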
Seeking Counterparts to Advanced LIGO/Virgo Transients with Swift
NASA Technical Reports Server (NTRS)
Kanner, Jonah; Camp, Jordan; Racusin, Judith; Gehrels, Neil; White, Darren
2012-01-01
Binary neutron star (NS) mergers are among the most promising astrophysical sources of gravitational wave emission for Advanced LIGO and Advanced Virgo, expected to be operational in 2015. Finding electromagnetic counterparts to these signals will be essential to placing them in an astronomical context. The Swift satellite carries a sensitive X-ray telescope (XRT), and can respond to target-of-opportunity requests within 1-2 hours, and so is uniquely poised to find the X-ray counterparts to LIGO/Virgo triggers. Assuming NS mergers are the progenitors of short gamma-ray bursts (GRBs), some percentage of LIGO/Virgo triggers will be accompanied by X-ray band afterglows that are brighter than 10(exp -12) ergs/s/sq cm in the XRT band one day after the trigger time. We find that a soft X-ray transient of this flux is bright enough to be extremely rare, and so could be confidently associated with even a moderately localized GW signal. We examine two possible search strategies with the Swift XRT to find bright transients in LIGO/Virgo error boxes. In the first strategy, XRT could search a volume of space with an approx. 100 Mpc radius by observing approx. 30 galaxies over the course of a day, with sufficient depth to observe the expected X-ray afterglow. For an extended LIGO/Virgo horizon distance, the XRT could employ very short 100 s exposures to cover an area of approx. 35 square degrees in about a day, and still be sensitive enough to image GW discovered GRB afterglows. These strategies demonstrate that the high X-ray luminosity of short GRBs and the relatively low X-ray transient background combine to make high confidence discoveries of X-ray band counterparts to GW triggers possible, though challenging, with current satellite facilities.
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are at least 0.6 hPa in the free troposphere, with nearly a third at least 1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause the integrated ozonesonde-only column O3 computed from the GPS and radiosonde pressure profiles to disagree by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are superior in performance compared to other radiosondes, with average 26 km errors of -0.12 hPa or +0.61 percent O3MR error. iMet-P radiosondes had average 26 km errors of -1.95 hPa or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated/reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (5 percent).
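The propagation of a radiosonde pressure offset into the ozone mixing ratio follows directly from O3MR = p(O3)/p(air); a minimal sketch, with all numerical values chosen for illustration rather than taken from the study:

```python
def o3_mixing_ratio_ppmv(p_o3_mpa: float, p_air_hpa: float) -> float:
    """Ozone volume mixing ratio in ppmv from the ozone partial pressure (mPa)
    and the ambient pressure (hPa).  1 hPa = 1e5 mPa."""
    return (p_o3_mpa / (p_air_hpa * 1e5)) * 1e6

# Effect of a +1 hPa radiosonde pressure offset near 26 km (p ~ 20 hPa):
p_o3, p_true, offset = 3.0, 20.0, 1.0              # illustrative values
mr_true = o3_mixing_ratio_ppmv(p_o3, p_true)
mr_offset = o3_mixing_ratio_ppmv(p_o3, p_true + offset)
print(mr_true, mr_offset, (mr_offset - mr_true) / mr_true)   # ~ -4.8 percent
```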
Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid
Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko
2016-01-01
Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film‐based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers’ abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one‐dimensional motion in the craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, the mean tracking error was 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers’ breathing patterns, the mean tracking error ranged from 0.78 to 1.67 mm. Because tracking error varied with the individual breathing pattern, accurate lesion targeting requires individual quality assurance for each patient. PACS number(s): 87.55.D‐, 87.55.km, 87.55.Qr, 87.56.Fc PMID:27074474
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short-term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC (direct load control) effect on system load is also considered in this method.
Skartland, Liv Kjersti; Mjøs, Svein A; Grung, Bjørn
2011-09-23
The retention behavior of components analyzed by chromatography varies with instrumental settings. Being able to predict how changes in these settings alter the elution pattern is useful both for component identification and for optimization of the chromatographic system. In this work, it is shown how experimental designs can be used for this purpose. Different experimental designs for response surface modeling of the separation of fatty acid methyl esters (FAME) as a function of chromatographic conditions in GC have been evaluated. Full factorial, central composite, Doehlert and Box-Behnken designs were applied. A mixture of 38 FAMEs was separated on a polar cyanopropyl-substituted polysilphenylene-siloxane phase capillary column. The temperature gradient, the start temperature of the gradient, and the carrier gas velocity were varied in the experiments. The modeled responses, as functions of chromatographic conditions, were retention time, retention indices, peak widths, separation efficiency and resolution between selected peak pairs. The designs that allowed inclusion of quadratic terms among the predictors performed significantly better than the factorial design. The Box-Behnken design provided the best results for prediction of retention, but the differences between the central composite, Doehlert and Box-Behnken designs were small. Retention indices could be modeled with much better accuracy than retention times. However, because the errors of predicted tR of closely eluting peaks were highly correlated, models of resolution (Rs) that were based on retention time had errors in the same range as corresponding models based on equivalent chain lengths (ECL). Copyright © 2011 Elsevier B.V. All rights reserved.
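A minimal sketch of the kind of design-and-fit step discussed above: a coded three-factor Box-Behnken design and an ordinary-least-squares fit of a full quadratic response-surface model. The response values here are synthetic; in the study they would be measured retention indices, peak widths, or resolutions.

```python
import itertools
import numpy as np

def box_behnken_3(center_points: int = 3) -> np.ndarray:
    """Coded 3-factor Box-Behnken design: +/-1 on each pair of factors with
    the third factor at its centre, plus centre-point replicates."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * center_points
    return np.array(runs, dtype=float)

def quadratic_model_matrix(x: np.ndarray) -> np.ndarray:
    """Full quadratic response-surface model: intercept, linear,
    two-factor-interaction and squared terms."""
    x1, x2, x3 = x.T
    return np.column_stack([np.ones(len(x)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

X = box_behnken_3()
# y would be the measured response (e.g. a retention index) for each run.
y = 100 + 5 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 0] * X[:, 1] - 4 * X[:, 2]**2
coef, *_ = np.linalg.lstsq(quadratic_model_matrix(X), y, rcond=None)
print(coef.round(3))
```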
Full Duplex, Spread Spectrum Radio System
NASA Technical Reports Server (NTRS)
Harvey, Bruce A.
2000-01-01
The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.
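A minimal sketch of the FEC-plus-interleaving idea described above: a rate-1/2, constraint-length-3 convolutional encoder (a generic textbook code, not necessarily the one selected for the project) followed by a block interleaver that spreads a burst of channel errors across the packet. The packet size assumes 8-bit samples, which is an assumption for illustration only.

```python
import numpy as np

def conv_encode(bits, polys=(0b111, 0b101)):
    """Rate-1/2, constraint-length-3 convolutional encoder
    (generator polynomials 7 and 5 octal); tail flushing omitted."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | int(b)) & 0b111
        for g in polys:
            out.append(bin(state & g).count("1") % 2)
    return np.array(out, dtype=np.uint8)

def block_interleave(bits, rows, cols):
    """Write row-wise, read column-wise, so a burst of channel errors is
    spread across the codeword before Viterbi decoding."""
    return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

# One illustrative voice packet: 16 samples x 8 bits = 128 data bits.
packet = np.random.default_rng(2).integers(0, 2, 16 * 8, dtype=np.uint8)
coded = conv_encode(packet)                    # 2 coded bits per data bit
interleaved = block_interleave(coded, rows=16, cols=16)
```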
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1993-01-01
The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters' terminal used to communicate with ACTS for various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included are an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan are command transmission, telemetry reception, error detection, and error recovery procedures.
Error analysis for fast scintillator-based inertial confinement fusion burn history measurements
NASA Astrophysics Data System (ADS)
Lerche, R. A.; Ognibene, T. J.
1999-01-01
Plastic scintillator material acts as a neutron-to-light converter in instruments that make inertial confinement fusion burn history measurements. Light output for a detected neutron in current instruments has a fast rise time (<20 ps) and a relatively long decay constant (1.2 ns). For a burst of neutrons whose duration is much shorter than the decay constant, instantaneous light output is approximately proportional to the integral of the neutron interaction rate with the scintillator material. Burn history is obtained by deconvolving the exponential decay from the recorded signal. The error in estimating signal amplitude for these integral measurements is calculated and compared with a direct measurement in which light output is linearly proportional to the interaction rate.
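The deconvolution step described above can be written as n(t) = ds/dt + s(t)/tau, since the recorded signal s(t) is the burn history convolved with exp(-t/tau). A minimal numerical sketch with an illustrative Gaussian burn history (all values assumed for illustration):

```python
import numpy as np

def deconvolve_exponential(signal: np.ndarray, dt: float, tau: float) -> np.ndarray:
    """Recover the neutron interaction rate n(t) from a signal that is n(t)
    convolved with a single-exponential decay exp(-t/tau):
        ds/dt = n(t) - s(t)/tau   =>   n(t) = ds/dt + s(t)/tau."""
    dsdt = np.gradient(signal, dt)
    return dsdt + signal / tau

dt, tau = 0.005, 1.2                                  # ns; 1.2 ns decay constant
t = np.arange(0, 10, dt)
n_true = np.exp(-0.5 * ((t - 3.0) / 0.05) ** 2)       # short Gaussian burst
s = np.convolve(n_true, np.exp(-t / tau))[: t.size] * dt
n_rec = deconvolve_exponential(s, dt, tau)
# Maximum reconstruction error (small but nonzero due to discretization).
print(np.abs(n_rec - n_true).max())
```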
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands with varying levels of additive input and feedback noise. The testing, which uses the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the error burst analyses. The pitch axis problem provides data for constructing a model to predict the reliability of software in systems with feedback. The study was undertaken to find means of performing reliability evaluations of flight control software.
A brain-machine interface for control of medically-induced coma.
Shanechi, Maryam M; Chemali, Jessica J; Liberman, Max; Solt, Ken; Brown, Emery N
2013-10-01
Medically-induced coma is a drug-induced state of profound brain inactivation and unconsciousness used to treat refractory intracranial hypertension and to manage treatment-resistant epilepsy. The state of coma is achieved by continually monitoring the patient's brain activity with an electroencephalogram (EEG) and manually titrating the anesthetic infusion rate to maintain a specified level of burst suppression, an EEG marker of profound brain inactivation in which bursts of electrical activity alternate with periods of quiescence or suppression. The medical coma is often required for several days. A more rational approach would be to implement a brain-machine interface (BMI) that monitors the EEG and adjusts the anesthetic infusion rate in real time to maintain the specified target level of burst suppression. We used a stochastic control framework to develop a BMI to control medically-induced coma in a rodent model. The BMI controlled an EEG-guided closed-loop infusion of the anesthetic propofol to maintain precisely specified dynamic target levels of burst suppression. We used as the control signal the burst suppression probability (BSP), the brain's instantaneous probability of being in the suppressed state. We characterized the EEG response to propofol using a two-dimensional linear compartment model and estimated the model parameters specific to each animal prior to initiating control. We derived a recursive Bayesian binary filter algorithm to compute the BSP from the EEG and controllers using a linear-quadratic-regulator and a model-predictive control strategy. Both controllers used the estimated BSP as feedback. The BMI accurately controlled burst suppression in individual rodents across dynamic target trajectories, and enabled prompt transitions between target levels while avoiding both undershoot and overshoot. The median performance error for the BMI was 3.6%, the median bias was -1.4% and the overall posterior probability of reliable control was 1 (95% Bayesian credibility interval of [0.87, 1.0]). A BMI can maintain reliable and accurate real-time control of medically-induced coma in a rodent model suggesting this strategy could be applied in patient care.
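The sketch below is not the recursive Bayesian binary filter or the LQR/MPC controllers described in the abstract; it is only a crude illustration of turning an EEG trace into a burst-suppression probability by thresholding and smoothing, with the threshold and window chosen arbitrarily.

```python
import numpy as np

def burst_suppression_probability(eeg: np.ndarray, fs: float,
                                  thresh_uv: float = 5.0,
                                  win_s: float = 1.0) -> np.ndarray:
    """Crude burst-suppression probability: mark samples whose amplitude falls
    below a voltage threshold as 'suppressed', then smooth the binary trace
    with a moving average of length win_s seconds."""
    suppressed = (np.abs(eeg) < thresh_uv).astype(float)
    win = max(1, int(win_s * fs))
    kernel = np.ones(win) / win
    return np.convolve(suppressed, kernel, mode="same")
```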
Neonatal Hippocampal Damage Impairs Specific Food/Place Associations in Adult Macaques
Glavis-Bloom, Courtney; Alvarado, Maria C.; Bachevalier, Jocelyne
2013-01-01
This study describes a novel spatial memory paradigm for monkeys and reports the effects of neonatal damage to the hippocampus on performance in adulthood. Monkeys were trained to forage in eight boxes hung on the walls of a large enclosure. Each box contained a different food item that varied in its intrinsic reward value, as determined from food preference testing. Monkeys were trained on a spatial and a cued version of the task. In the spatial task, the boxes looked identical and remained fixed in location whereas in the cued task, the boxes were individuated with colored plaques and changed location on each trial. Ten adult Rhesus macaques (5 neonatal sham-operated and 5 with neonatal neurotoxic hippocampal lesions) were allowed to forage once daily until they preferentially visited boxes containing preferred foods. The data suggest that all monkeys learned to discriminate preferred from nonpreferred food locations, but that monkeys with neonatal hippocampal damage committed significantly more working memory errors than controls in both tasks. Furthermore, following selective satiation, controls altered their foraging pattern to avoid the satiated food, whereas lesioned animals did not, suggesting that neonatal hippocampal lesions prohibit learning of specific food-place associations. We conclude that whereas an intact hippocampus is necessary to form specific item-in-place associations, in its absence, cortical areas may support more broad distinctions between food types that allow monkeys to discriminate places containing highly preferred foods. PMID:23398438
NASA Technical Reports Server (NTRS)
Evans, P. A.; Osborne, J. P.; Kennea, J. A.; Campana, S.; O'Brien, P. T.; Tanvir, N. R.; Racusin, J. L.; Burrows, D. N.; Cenko, S. B.; Gehrels, N.
2015-01-01
One of the most exciting near-term prospects in physics is the potential discovery of gravitational waves by the Advanced LIGO and Virgo detectors. To maximize both the confidence of the detection and the science return, it is essential to identify an electromagnetic counterpart. This is not trivial, as the events are expected to be poorly localized, particularly in the near-term, with error regions covering hundreds or even thousands of square degrees. In this paper, we discuss the prospects for finding an X-ray counterpart to a gravitational wave trigger with the Swift X-ray Telescope, using the assumption that the trigger is caused by a binary neutron star merger which also produces a short gamma-ray burst. We show that it is beneficial to target galaxies within the GW error region, highlighting the need for substantially complete galaxy catalogues out to distances of 300 Mpc. We also show that nearby, on-axis short GRBs are either extremely rare, or are systematically less luminous than those detected to date. We consider the prospects for detecting afterglow emission from an off-axis GRB which triggered the GW facilities, finding that the detectability, and the best time to look, are strongly dependent on the characteristics of the burst such as circumburst density and our viewing angle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savchenko, V.; Ferrigno, C.; Bozzo, E.
We report the INTernational Gamma-ray Astrophysics Laboratory (INTEGRAL) detection of the short gamma-ray burst GRB 170817A (discovered by Fermi-GBM) with a signal-to-noise ratio of 4.6, and, for the first time, its association with the gravitational waves (GWs) from the binary neutron star (BNS) merging event GW170817 detected by the LIGO and Virgo observatories. The significance of association between the gamma-ray burst observed by INTEGRAL and GW170817 is 3.2σ, while the association between the Fermi-GBM and INTEGRAL detections is 4.2σ. GRB 170817A was detected by the SPI-ACS instrument about 2 s after the end of the GW event. We measure a fluence of (1.4 ± 0.4 ± 0.6) × 10^−7 erg cm^−2 (75–2000 keV), where, respectively, the statistical error is given at the 1σ confidence level, and the systematic error corresponds to the uncertainty in the spectral model and instrument response. We also report on the pointed follow-up observations carried out by INTEGRAL, starting 19.5 hr after the event, and lasting for 5.4 days. We provide a stringent upper limit on any electromagnetic signal in a very broad energy range, from 3 keV to 8 MeV, constraining the soft gamma-ray afterglow flux to <7.1 × 10^−11 erg cm^−2 s^−1 (80–300 keV). Exploiting the unique capabilities of INTEGRAL, we constrained the gamma-ray line emission from radioactive decays that are expected to be the principal source of the energy behind a kilonova event following a BNS coalescence. Finally, we put a stringent upper limit on any delayed bursting activity, for example, from a newly formed magnetar.
ERIC Educational Resources Information Center
Saperstein, Aliya
2006-01-01
Social constructivist theories of race suggest no two measures of race will capture the same information, but the degree of "error" this creates for quantitative research on inequality is unclear. Using unique data from the General Social Survey, I find observed and self-reported measures of race yield substantively different results when used to…
Fast Plasma Investigation for MMS: Simulation of the Burst Triggering System
NASA Technical Reports Server (NTRS)
Barrie, A. C.; Dorelli, J. C.; Winkert, G. E.; Lobell, J. V.; Holland, M. P.; Adrian, M. L.; Pollock, C. J.
2011-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors and eight (8) identical ion sensors and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 degree x 180 degree fields-of-view (FOV) are set 90 degrees apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45 degree x 180 degree fan about its nominal viewing (0 deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb per second of electron data while the DIS generates 1.1 Mb per second of ion data, yielding an FPI total data rate of 6.6 Mb per second. The FPI electron/ion data are collected by the IDPU then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. This requires a data ranking process known as the burst trigger system. The burst trigger system uses pseudo physical quantities to approximate the local plasma environments. As each pseudo quantity will have a different value, a set of two scaling factors is employed for each pseudo term. These pseudo quantities are then combined at the instrument, spacecraft, and observatory level, leading to a final ranking of data based on expected scientific interest. Here, we present simulations of the fixed-point burst trigger system for the FPI. A variety of data sets based on previous mission data as well as analytical formulations are tested. Comparisons of floating-point calculations versus the fixed-point hardware simulation are shown. Potential sources of error from overflows, quantization, etc., are examined and mitigation methods are presented. Finally, a series of calibration curves is presented, showing the expected error in pseudo quantities based solely on the scale parameters chosen and the expected data range. We conclude with a presentation of the currently baselined FPI burst trigger approach.
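A minimal sketch of the fixed-point quantization issue examined in the abstract: scaling a floating-point quantity into a signed fixed-point word and measuring the resulting error. The word length and number of fractional bits are illustrative assumptions, not FPI design values.

```python
import numpy as np

def to_fixed_point(x: np.ndarray, frac_bits: int, word_bits: int = 16):
    """Quantize to a signed fixed-point representation with 'frac_bits'
    fractional bits, saturating at the word limits (a simple stand-in for
    overflow mitigation in a hardware trigger)."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    q = np.clip(np.round(x * scale), lo, hi).astype(np.int64)
    return q, q / scale                       # raw integers and reconstructed values

x = np.random.default_rng(3).normal(0.0, 2.0, 10_000)
_, x_fx = to_fixed_point(x, frac_bits=10)
print("max quantization error:", np.abs(x - x_fx).max())   # ~2**-11 unless saturated
```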
Geometrical verification system using Adobe Photoshop in radiotherapy.
Ishiyama, Hiromichi; Suzuki, Koji; Niino, Keiji; Hosoya, Takaaki; Hayakawa, Kazushige
2005-02-01
Adobe Photoshop is used worldwide and is useful for comparing portal films with simulation films. It is possible to scan images and then view them simultaneously with this software. The purpose of this study was to assess the accuracy of a geometrical verification system using Adobe Photoshop. We prepared the following two conditions for verification. Under one condition, films were hung on light boxes, and examiners measured the distances between the isocenter on simulation films and that on portal films by adjusting the bony structures. Under the other condition, films were scanned into a computer and displayed using Adobe Photoshop, and examiners measured the distances between the isocenter on simulation films and those on portal films by adjusting the bony structures. To obtain control data, lead balls were used as fiducial points for matching the films accurately. The errors, defined as the differences between the control data and the measurement data, were assessed. Errors of the data obtained using Adobe Photoshop were significantly smaller than those of the data obtained from films on light boxes (p < 0.007). The geometrical verification system using Adobe Photoshop is available on any PC with this software and is useful for improving the accuracy of verification.
Ji, Chengdong; Guo, Xuan; Li, Zhen; Qian, Shuwen; Zheng, Feng; Qin, Haiqing
2013-01-01
Many studies have been conducted on colorectal anastomotic leakage with the aim of reducing its incidence. However, how to precisely determine the pressure that an anastomosed bowel can withstand, termed the anastomotic bursting pressure, has not been established. A task force developed the experimental animal hollow organ mechanical testing system to provide a precise measurement of the maximum pressure that an anastomosed colon can withstand, and to compare it with commonly used methods such as the mercury and air bag pressure manometers in a rat colon rupture pressure test. Forty-five male Sprague-Dawley rats were randomly divided into the manual ball manometry (H) group, the tracing machine manometry pressure gauge head (MP) group, and the experimental animal hollow organ mechanical testing system (ME) group. The rats in each group were subjected to a cut colon rupture pressure test after injecting anesthesia into the tail vein. Colonic end-to-end anastomosis was performed, and the rats were rested for 1 week before the anastomotic bursting pressure was determined by one of the three methods. No differences were observed among the three manometry methods in either the normal colon rupture pressure or the colonic anastomotic bursting pressure. However, several advantages, such as a reduction in errors, were identified in the ME group. Different types of manometry methods can be applied to the normal rat colon, but the colonic anastomotic bursting pressure test using the experimental animal hollow organ mechanical testing system is superior to traditional methods. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Validation of community models: 3. Tracing field lines in heliospheric models
NASA Astrophysics Data System (ADS)
MacNeice, Peter; Elliott, Brian; Acebal, Ariel
2011-10-01
Forecasting hazardous gradual solar energetic particle (SEP) bursts at Earth requires accurately modeling field line connections between Earth and the locations of coronal or interplanetary shocks that accelerate the particles. We test the accuracy of field lines reconstructed using four different models of the ambient coronal and inner heliospheric magnetic field, through which these shocks must propagate, including the coupled Wang-Sheeley-Arge (WSA)/ENLIL model. Evaluating the WSA/ENLIL model performance is important since it is the most sophisticated model currently available to space weather forecasters which can model interplanetary coronal mass ejections and, when coupled with particle acceleration and transport models, will provide a complete model for gradual SEP bursts. Previous studies using a simpler Archimedean spiral approach above 2.5 solar radii have reported poor performance. We test the accuracy of the model field lines connecting Earth to the Sun at the onset times of 15 impulsive SEP bursts, comparing the foot points of these field lines with the locations of surface events believed to be responsible for the SEP bursts. We find the WSA/ENLIL model performance is no better than the simplest spiral model, and the principal source of error is the model's inability to reproduce sufficient low-latitude open flux. This may be due to the model's use of static synoptic magnetograms, which fail to account for transient activity in the low corona, during which reconnection events believed to initiate the SEP acceleration may contribute short-lived open flux at low latitudes. Time-dependent coronal models incorporating these transient events may be needed to significantly improve Earth/Sun field line forecasting.
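For comparison, the "simpler Archimedean spiral approach" mentioned above amounts to the Parker-spiral footpoint offset Δφ = Ω r / v_sw; a minimal sketch in which the solar rotation rate and solar wind speed are assumed values:

```python
import numpy as np

AU = 1.496e11             # m
OMEGA_SUN = 2.86e-6       # rad/s, ~25.4-day sidereal rotation (assumed value)

def parker_footpoint_offset(v_sw_kms: float, r_m: float = AU) -> float:
    """Longitude offset (degrees) between a point at heliocentric distance r
    and the photospheric foot point of the Archimedean-spiral field line
    connecting to it: delta_phi = Omega * r / v_sw."""
    return np.degrees(OMEGA_SUN * r_m / (v_sw_kms * 1e3))

print(parker_footpoint_offset(400.0))   # ~61 deg west of the Earth-Sun line
```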
Analysis of large system black box verification test data
NASA Technical Reports Server (NTRS)
Clapp, Kenneth C.; Iyer, Ravishankar Krishnan
1993-01-01
Issues regarding black box verification of large systems are explored. The study begins by collecting data from several testing teams. An integrated database containing test, fault, repair, and source file information is generated. Intuitive effectiveness measures are generated using conventional black box testing results analysis methods. Conventional analysis methods indicate that the testing was effective in the sense that as more tests were run, more faults were found. Average behavior and individual data points are analyzed. The data are categorized, and average behavior shows a very wide variation in the number of tests run and in pass rates (pass rates ranged from 71 percent to 98 percent). The 'white box' data contained in the integrated database is studied in detail. Conservative measures of effectiveness are discussed. Testing efficiency (ratio of repairs to number of tests) is measured at 3 percent, fault record effectiveness (ratio of repairs to fault records) is measured at 55 percent, and test script redundancy (ratio of number of failed tests to minimum number of tests needed to find the faults) ranges from 4.2 to 15.8. Error-prone source files and subsystems are identified. A correlational mapping of test functional area to product subsystem is completed. A new adaptive testing process based on real-time generation of the integrated database is proposed.
The development of a reliable amateur boxing performance analysis template.
Thomson, Edward; Lamb, Kevin; Nicholas, Ceri
2013-01-01
The aim of this study was to devise a valid performance analysis system for the assessment of the movement characteristics associated with competitive amateur boxing and to assess its reliability using analysts of varying experience of the sport and of performance analysis. Key performance indicators to characterise the demands of an amateur contest (offensive, defensive and feinting) were developed and notated using a computerised notational analysis system. Data were subjected to intra- and inter-observer reliability assessment using median sign tests and by calculating the proportion of agreement within predetermined limits of error. For all performance indicators, intra-observer reliability revealed non-significant differences between observations (P > 0.05) and high agreement was established (80-100%) regardless of whether exact agreement or agreement within a reference value of ±1 was applied. Inter-observer reliability was less impressive for both analysts (amateur boxer and experienced analyst), with the proportion of agreement ranging from 33-100%. Nonetheless, there was no systematic bias between observations for any indicator (P > 0.05), and the proportion of agreement within the reference range (±1) was 100%. A reliable performance analysis template has been developed for the assessment of amateur boxing performance and is available for use by researchers, coaches and athletes to classify and quantify the movement characteristics of amateur boxing.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
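As a minimal illustration of block EDAC encoding and single-error correction of the kind compared in the thesis (shown here with a (7,4) Hamming code rather than the Golay codes it ultimately recommends):

```python
import numpy as np

# Generator and parity-check matrices for the systematic (7,4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def hamming74_encode(data4):
    return (np.array(data4) @ G) % 2

def hamming74_decode(word7):
    r = np.array(word7).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():
        # Flip the bit whose parity-check column matches the syndrome.
        err = int(np.argmax((H.T == syndrome).all(axis=1)))
        r[err] ^= 1
    return r[:4]                       # first four bits carry the data

msg = [1, 0, 1, 1]
tx = hamming74_encode(msg)
rx = tx.copy(); rx[5] ^= 1             # inject a single bit error
print(hamming74_decode(rx), msg)       # corrected data matches the message
```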
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, W; Yang, H; Wang, Y
2014-06-01
Purpose: To investigate the impact of different clipbox volumes with automated registration techniques using commercially available software with on-board volumetric imaging (OBI) for treatment verification in cervical cancer patients. Methods: Fifty cervical cancer patients who received daily CBCT scans (on-board imaging v1.5 system, Varian Medical Systems) during the first treatment week and weekly thereafter were included in this analysis. A total of 450 CBCT scans were registered to the planning CT scan using a pelvic clipbox (clipbox-Pelvic) and a clipbox around the PTV (clipbox-PTV). The translation (anterior-posterior, left-right, superior-inferior) and rotation (yaw, pitch and roll) errors for each match were recorded. The setup errors and the systematic and random errors for both clipboxes were calculated. A paired-samples t test was used to analyze the differences between clipbox-Pelvic and clipbox-PTV. Results: The SD of the systematic error (σ) was 1.0 mm, 2.0 mm, 3.2 mm and 1.9 mm, 2.3 mm, 3.0 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. The average random error (Σ) was 1.7 mm, 2.0 mm, 4.2 mm and 1.7 mm, 3.4 mm, 4.4 mm in the AP, LR and SI directions for clipbox-Pelvic and clipbox-PTV, respectively. However, only the SI direction showed significant differences between the two image registration volumes (p=0.002 and p=0.01 for mean and SD). For rotations, the yaw mean/SD and the pitch SD differed significantly between clipbox-Pelvic and clipbox-PTV. Conclusion: The volume defined for image registration is important in cervical cancer when a 3D/3D match is used. The alignment clipbox can affect the setup errors obtained. Further analysis is needed to determine the optimal volume for image registration in cervical cancer. Conflict of interest: none.
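The abstract does not give its formulas for the population systematic and random errors; the sketch below uses the conventional definitions (systematic = SD of the per-patient mean shifts, random = RMS of the per-patient SDs) with made-up shift values.

```python
import numpy as np

def population_setup_errors(shifts_by_patient):
    """Conventional population statistics for setup errors:
    systematic error = SD of the per-patient mean shifts,
    random error     = root-mean-square of the per-patient SDs."""
    means = np.array([np.mean(s) for s in shifts_by_patient])
    sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
    return means.std(ddof=1), np.sqrt(np.mean(sds ** 2))

# Illustrative SI-direction shifts (mm) for three patients.
shifts = [[2.1, 3.5, 1.8, 4.0], [-1.0, 0.5, -0.2, 1.1], [5.2, 3.9, 6.1, 4.4]]
print(population_setup_errors(shifts))
```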
The Achromatic Light Curve of the Optical Afterglow of GRB 030226 at a Redshift of z Approximately 2
NASA Technical Reports Server (NTRS)
Klose, S.; Greiner, J.; Rau, A.; Henden, A. A.; Hartmann, D. H.; Zeh, A.; Masetti, N.; Guenther, E.; Stecklum, B.; Lindsay, K.
2003-01-01
We report on optical and near-infrared (NIR) follow-up observations of the afterglow of GRB 030226, mainly performed with the telescopes at ESO La Silla and Paranal, with additional data obtained at other places. Our first observations started 0.2 days after the burst, when the afterglow was at a magnitude of R approximately equal to 19. One week later the magnitude of the afterglow had fallen to R=25, and at two weeks after the burst it could no longer be detected (R > 26). Our VLT blue-band spectra show two absorption line systems at redshifts z = 1.962 +/- 0.001 and z = 1.986 +/- 0.001, placing the redshift of the burster close to 2. Within our measurement errors no evidence for variations in the line strengths has been found between 0.2 and 1.2 days after the burst. An overabundance of alpha-group elements might indicate that the burst occurred in a chemically young interstellar region shaped by the nucleosynthesis from type II supernovae. The spectral slope of the afterglow shows no signs of cosmic dust along the line of sight in the GRB host galaxy, which itself remained undetected (R > 26.2). At the given redshift no supernova component affected the light from the GRB afterglow, so that the optical transient was essentially only powered by the radiation from the GRB fireball, allowing for a detailed investigation of the color evolution of the afterglow light. In our data set no obvious evidence for color changes has been found before, during, or after the smooth break in the light curve approximately 1 day after the burst. In comparison with investigations by others, our data favor the interpretation that the afterglow began to develop into a homogeneous interstellar medium before the break in the light curve became apparent.
Cscibox: A Software System for Age-Model Construction and Evaluation
NASA Astrophysics Data System (ADS)
Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.
2014-12-01
CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross-dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords. This project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, or clusters, or even the cloud. The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more thorough exploration of plausible age models and cross-dating scenarios.
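A minimal sketch of the interpolation component mentioned above, reduced to a piecewise-linear age-depth model; the tie points are invented, and the real tool also handles calibration, reservoir corrections, and explicit error representations.

```python
import numpy as np

def age_model(depths_cm, tiepoint_depths_cm, tiepoint_ages_yr):
    """Piecewise-linear age-depth model: interpolate dated tie-point ages
    onto every sampled depth in the core."""
    return np.interp(depths_cm, tiepoint_depths_cm, tiepoint_ages_yr)

# Illustrative tie points (e.g. calibrated 14C dates) and sample depths.
tie_depth = [0, 120, 310, 480]           # cm
tie_age = [0, 2400, 7800, 11500]         # calendar years BP
samples = np.arange(0, 481, 10)
print(age_model(samples, tie_depth, tie_age)[:6])
```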
The design and analysis of single flank transmission error tester for loaded gears
NASA Technical Reports Server (NTRS)
Bassett, Duane E.; Houser, Donald R.
1987-01-01
To strengthen the understanding of gear transmission error and to verify mathematical models which predict them, a test stand that will measure the transmission error of gear pairs under design loads has been investigated. While most transmission error testers have been used to test gear pairs under unloaded conditions, the goal of this report was to design and perform dynamic analysis of a unique tester with the capability of measuring the transmission error of gears under load. This test stand will have the capability to continuously load a gear pair at torques up to 16,000 in-lb at shaft speeds from 0 to 5 rpm. Error measurement will be accomplished with high resolution optical encoders and the accompanying signal processing unit from an existing unloaded transmission error tester. Input power to the test gear box will be supplied by a dc torque motor while the load will be applied with a similar torque motor. A dual input, dual output control system will regulate the speed and torque of the system. This control system's accuracy and dynamic response were analyzed and it was determined that proportional plus derivative speed control is needed in order to provide the precisely constant torque necessary for error-free measurement.
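A minimal sketch of the quantity the tester measures: transmission error as the deviation of the gear's rotation from its kinematically ideal value, converted to a linear displacement at the base circle. The sign convention and units are assumptions for illustration.

```python
import numpy as np

def transmission_error_microns(theta_pinion_rad, theta_gear_rad,
                               n_pinion, n_gear, base_radius_gear_mm):
    """Angular transmission error of the gear relative to its ideal
    (kinematic) position, expressed in microns along the line of action:
        TE = (theta_gear - (n_pinion / n_gear) * theta_pinion) * r_b."""
    te_rad = (np.asarray(theta_gear_rad)
              - (n_pinion / n_gear) * np.asarray(theta_pinion_rad))
    return te_rad * base_radius_gear_mm * 1e3       # mm -> microns

# Encoder readings over one mesh cycle (illustrative values).
theta_p = np.linspace(0, 0.2, 5)                    # rad, pinion
theta_g = theta_p * (25 / 75) + 1e-5 * np.sin(40 * theta_p)
print(transmission_error_microns(theta_p, theta_g, 25, 75, 90.0))
```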
Dosage delivery of sensitive reagents enables glove-box-free synthesis
NASA Astrophysics Data System (ADS)
Sather, Aaron C.; Lee, Hong Geun; Colombe, James R.; Zhang, Anni; Buchwald, Stephen L.
2015-08-01
Contemporary organic chemists employ a broad range of catalytic and stoichiometric methods to construct molecules for applications in the material sciences, and as pharmaceuticals, agrochemicals, and sensors. The utility of a synthetic method may be greatly reduced if it relies on a glove box to enable the use of air- and moisture-sensitive reagents or catalysts. Furthermore, many synthetic chemistry laboratories have numerous containers of partially used reagents that have been spoiled by exposure to the ambient atmosphere. This is exceptionally wasteful from both an environmental and a cost perspective. Here we report an encapsulation method for stabilizing and storing air- and moisture-sensitive compounds. We demonstrate this approach in three contexts, by describing single-use capsules that contain all of the reagents (catalysts, ligands, and bases) necessary for the glove-box-free palladium-catalysed carbon-fluorine, carbon-nitrogen, and carbon-carbon bond-forming reactions. This strategy should reduce the number of error-prone, tedious and time-consuming weighing procedures required for such syntheses and should be applicable to a wide range of reagents, catalysts, and substrate combinations.
Azin, Mahdieh; Zangiabadi, Nasser; Iranmanesh, Farhad; Baneshi, Mohammad Reza; Banihashem, Seyedshahab
2016-10-01
Intermittent theta burst stimulation (iTBS) is a repetitive transcranial magnetic stimulation (rTMS) protocol that influences cortical excitability and motor function recovery. This study aimed to investigate the effects of iTBS on manual dexterity and hand motor imagery in multiple sclerosis (MS) patients. Thirty-six MS patients were non-randomly assigned into sham (control) or iTBS groups. Then, iTBS was delivered to the primary motor cortex for ten days over two consecutive weeks. The patients' manual dexterity was assessed using the nine-hole peg test (9HPT) and the Box and Block Test (BBT), while the hand motor imagery was assessed with the hand mental rotation task (HMRT). iTBS group showed a reduction in the time required to complete the 9HPT (mean difference = -3.05, P = 0.002), and an increase in the number of blocks transferred in one minute in the BBT (mean difference = 8.9, P = 0.001) when compared to the control group. Furthermore, there was no significant difference between the two groups in terms of the reaction time (P = 0.761) and response accuracy rate (P = 0.482) in the HMRT. When iTBS was applied over the primary motor cortex, it significantly improved manual dexterity, but had no significant effect on the hand motor imagery ability in MS patients.
A large-scale dynamo and magnetoturbulence in rapidly rotating core-collapse supernovae.
Mösta, Philipp; Ott, Christian D; Radice, David; Roberts, Luke F; Schnetter, Erik; Haas, Roland
2015-12-17
Magnetohydrodynamic turbulence is important in many high-energy astrophysical systems, where instabilities can amplify the local magnetic field over very short timescales. Specifically, the magnetorotational instability and dynamo action have been suggested as a mechanism for the growth of magnetar-strength magnetic fields (of 10(15) gauss and above) and for powering the explosion of a rotating massive star. Such stars are candidate progenitors of type Ic-bl hypernovae, which make up all supernovae that are connected to long γ-ray bursts. The magnetorotational instability has been studied with local high-resolution shearing-box simulations in three dimensions, and with global two-dimensional simulations, but it is not known whether turbulence driven by this instability can result in the creation of a large-scale, ordered and dynamically relevant field. Here we report results from global, three-dimensional, general-relativistic magnetohydrodynamic turbulence simulations. We show that hydromagnetic turbulence in rapidly rotating protoneutron stars produces an inverse cascade of energy. We find a large-scale, ordered toroidal field that is consistent with the formation of bipolar magnetorotationally driven outflows. Our results demonstrate that rapidly rotating massive stars are plausible progenitors for both type Ic-bl supernovae and long γ-ray bursts, and provide a viable mechanism for the formation of magnetars. Moreover, our findings suggest that rapidly rotating massive stars might lie behind potentially magnetar-powered superluminous supernovae.
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Brown, J. W.; Lewis, J. L.
1982-01-01
An enhanced proximity sensor and display system was developed at the Jet Propulsion Laboratory (JPL) and tested on the full scale Space Shuttle Remote Manipulator at the Johnson Space Center (JSC) Manipulator Development Facility (MDF). The sensor system, integrated with a four-claw end effector, measures range error up to 6 inches, and pitch and yaw alignment errors within ±15 deg., and displays error data on both graphic and numeric displays. The errors are referenced to the end effector control axes through appropriate data processing by a dedicated microcomputer acting on the sensor data in real time. Both display boxes contain a green lamp which indicates whether the combination of range, pitch and yaw errors will assure a successful grapple. More than 200 test runs were completed in early 1980 by three operators at JSC for grasping static and capturing slowly moving targets. The tests have indicated that the use of graphic/numeric displays of proximity sensor information improves precision control of grasp/capture range by more than a factor of two for both static and dynamic grapple conditions.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at 95% confidence limits and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
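A minimal sketch of the ARIMA-plus-diagnostics workflow described above, using statsmodels on a synthetic monthly series standing in for a water-quality parameter; the model order would in practice be selected with criteria such as the normalized BIC mentioned in the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Synthetic monthly series standing in for a water-quality parameter (e.g. BOD).
rng = np.random.default_rng(4)
y = pd.Series(10 + np.cumsum(rng.normal(0, 0.3, 120)),
              index=pd.date_range("2004-01-01", periods=120, freq="MS"))

fit = ARIMA(y, order=(1, 1, 1)).fit()            # (p, d, q) chosen by BIC in practice
print(acorr_ljungbox(fit.resid, lags=[12], return_df=True))   # residual whiteness check

forecast = fit.get_forecast(steps=12)            # future values with 95% limits
print(forecast.summary_frame(alpha=0.05)[["mean", "mean_ci_lower", "mean_ci_upper"]].head())
```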
NASA Astrophysics Data System (ADS)
Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong
2012-11-01
Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe its complexity exactly. Fractal theory is an effective and available method for quantifying complex and irregular pore structure. In this paper, the fractal dimension calculated by the box-counting method based on fractal theory was applied to characterize the pore structure of artificial cores. The microstructure or pore distribution in the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand packed bed model were selected as the experimental materials to investigate the influence of box sizes, threshold value, and image resolution when performing fractal analysis. To avoid the influence of box sizes, a sequence of divisors of the image was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence) in terms of partitioning the image completely and yielding the smallest fitting error. Thresholds selected manually and automatically showed that threshold selection plays an important role during image binarization and that the minimum-error method can be used to obtain a reasonable value. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution. Higher image resolution can detect more of the pore structure and increase its apparent irregularity. Taking these influencing factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can be used to distinguish the different kinds of artificial cores and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a - b ln(x + c).
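A minimal sketch of the box-counting estimate of fractal dimension for a binary (thresholded) image; box sizes here follow a geometric sequence rather than the divisor sequence proposed in the paper, and the image is assumed non-empty.

```python
import numpy as np

def box_counting_dimension(image: np.ndarray) -> float:
    """Estimate the box-counting dimension of a binary image: count occupied
    boxes N(s) for a series of box sizes s and fit log N(s) = -D log s + c."""
    img = np.asarray(image, dtype=bool)
    n = min(img.shape)
    sizes = [2 ** k for k in range(int(np.log2(n)))]
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Example: a dense random binary image should give a dimension close to 2.
print(box_counting_dimension(np.random.default_rng(7).random((256, 256)) > 0.5))
```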
Insight in the Brain: The Cognitive and Neural Bases of Eureka Moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beeman, Mark
Where do new ideas come from? Although all new ideas build on old, this can happen in different ways. Some new ideas, or solutions to old problems, are achieved through methodical, analytical processing. Other new ideas come about in a sudden burst of insight, often based on or generating a restructured view of the problem itself. Behavioral, brain imaging, and eye-tracking results all reveal distinct cortical networks contributing to insight solving, as contrasted with analytic solving. Consistently, the way in which people solve problems appears to relate to the way they engage attention and cognitive control: across time, across moods, and across individuals. Insight is favored when people can disengage from strong stimuli and associations - figuratively and literally looking "outside the box" of the problem to suddenly solve with a new idea.
2009-11-19
CAPE CANAVERAL, Fla. – At the Astronaut Hall of Fame near NASA’s Kennedy Space Center in Florida, participants in the 2009 Astronaut Glove Challenge, part of NASA’s Centennial Challenges Program, pose for a group portrait. In the center of the front row are the winners, Ted Southern of Brooklyn, N.Y., at left, and Peter Homer of Southwest Harbor, Maine. The nationwide competition focused on developing improved pressure suit gloves for astronauts to use while working in space. During the challenge, the gloves were submitted to burst tests, joint force tests and tests to measure their dexterity and strength during operation in a glove box which simulates the vacuum of space. Centennial Challenges is NASA’s program of technology prizes for the citizen-inventor. The winning prize for the Glove Challenge is $250,000 provided by the Centennial Challenges Program. Photo credit: NASA/Kim Shiflett
Epileptic seizures, coma and EEG burst-suppression from suicidal bupropion intoxication.
Noda, Anna Hiro; Schu, Ulrich; Maier, Tanja; Knake, Susanne; Rosenow, Felix
2017-03-01
Bupropion, an amphetamine-like dual-mechanism drug, is approved and increasingly used for the treatment of major depression, and its use is associated with a dose-dependent risk of epileptic seizures. Suicide attempts are frequent in major depression and often an overdose of the drugs available is ingested. Therefore, it is important to be aware of the clinical course, including EEG and neurological symptoms, as well as treatment and prognosis of bupropion intoxication. We report on the clinical and EEG course of a woman who ingested 27 g of bupropion in a suicide attempt. Myoclonic seizures were followed by generalized tonic-clonic seizures and coma associated with EEG burst-suppression and brief tonic seizures. Activated carbon and neuro-intensive care treatment, including respiratory support, were given. Within three days, the patient returned to a stable clinical condition with a mildly encephalopathic EEG. In conclusion, bupropion intoxication requires acute intensive care treatment and usually has a good prognosis; however, misinterpretation of the clinical and EEG presentation may lead to errors in management.
NASA Technical Reports Server (NTRS)
Choung, Youn H.; Wong, William C.
1986-01-01
The design of the ACTS multibeam antenna is described, and its performance is evaluated. The multibeam antenna is designed to cover the continental U.S. and provides three fixed spot beams for high burst rate operations and two scanning beams for low burst rate operations. The antenna has one main reflector, a dual-polarized subreflector, and two orthogonal feed assemblies. The feed system is designed to receive a linearly polarized communication signal from 28.9 to 30.0 GHz and to provide the elevation and azimuth error tracking signals at 29.975 GHz with a 0.01 deg tracking accuracy. The feed system uses a single multiflare conical horn and a multimode coupler to provide a symmetric primary pattern for the communication signal. The sidelobe characteristics of the reflector and the relation between the sidelobe level and surface distortion are studied. It is noted that the performance measurements for the multibeam antenna correlate well with predictions for secondary patterns and scan characteristics.
Lipid Adjustment for Chemical Exposures: Accounting for Concomitant Variables
Li, Daniel; Longnecker, Matthew P.; Dunson, David B.
2013-01-01
Background Some environmental chemical exposures are lipophilic and need to be adjusted by serum lipid levels before data analyses. There are currently various strategies that attempt to account for this problem, but all have their drawbacks. To address such concerns, we propose a new method that uses Box-Cox transformations and a simple Bayesian hierarchical model to adjust for lipophilic chemical exposures. Methods We compared our Box-Cox method to existing methods. We ran simulation studies in which increasing levels of lipid-adjusted chemical exposure did and did not increase the odds of having a disease, and we looked at both single-exposure and multiple-exposures cases. We also analyzed an epidemiology dataset that examined the effects of various chemical exposures on the risk of birth defects. Results Compared with existing methods, our Box-Cox method produced unbiased estimates, good coverage, similar power, and lower type-I error rates. This was the case in both single- and multiple-exposure simulation studies. Results from analysis of the birth-defect data differed from results using existing methods. Conclusion Our Box-Cox method is a novel and intuitive way to account for the lipophilic nature of certain chemical exposures. It addresses some of the problems with existing methods, is easily extendable to multiple exposures, and can be used in any analyses that involve concomitant variables. PMID:24051893
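The paper's method is a Bayesian hierarchical model; the sketch below only illustrates the underlying idea of Box-Cox-transforming the exposure and treating serum lipids as a covariate rather than dividing by them, on synthetic data.

```python
import numpy as np
from scipy import stats

# Synthetic serum concentrations of a lipophilic chemical and serum lipids.
rng = np.random.default_rng(5)
lipids = rng.lognormal(mean=0.0, sigma=0.3, size=500)
exposure = 2.0 * lipids * rng.lognormal(0.0, 0.5, 500)   # exposure scales with lipids

# Box-Cox transform the exposure (lambda estimated by maximum likelihood),
# then adjust for lipids as a covariate instead of dividing by them.
exposure_bc, lam = stats.boxcox(exposure)
print(f"estimated Box-Cox lambda = {lam:.2f}")
X = np.column_stack([np.ones(500), stats.zscore(np.log(lipids))])
beta, *_ = np.linalg.lstsq(X, exposure_bc, rcond=None)
adjusted = exposure_bc - X[:, 1] * beta[1]     # lipid-adjusted transformed exposure
```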
NASA Astrophysics Data System (ADS)
Jiang, Yicheng; Cheng, Ping; Ou, Yangkui
2001-09-01
A new method for target classification with high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of the training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated for improving nearest-neighbor target classification. Classification experiments using real radar data from three different aircraft demonstrate that the classification error can be reduced by 8% if the method proposed in this paper is chosen instead of the conventional method. The results show that by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
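A minimal sketch of a nearest-neighbour classifier whose Euclidean distance is evaluated on Box-Cox-transformed range-profile amplitudes; the transformation parameter lambda is fixed arbitrarily here, whereas the paper optimizes the metric.

```python
import numpy as np

def boxcox_transform(x: np.ndarray, lam: float) -> np.ndarray:
    """Box-Cox power transform (x must be positive)."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def nn_classify(templates, labels, query, lam=0.5):
    """1-nearest-neighbour classification of a range profile using a
    Euclidean metric evaluated on Box-Cox-transformed amplitudes."""
    t = boxcox_transform(np.asarray(templates, float), lam)
    q = boxcox_transform(np.asarray(query, float), lam)
    d = np.sqrt(((t - q) ** 2).sum(axis=1))
    return labels[int(np.argmin(d))]

# Illustrative templates (rows) for three aircraft classes and one query profile.
rng = np.random.default_rng(8)
templates = rng.uniform(0.1, 5.0, size=(3, 64))
print(nn_classify(templates, ["A", "B", "C"], templates[1] * 1.05))
```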
Critical Code: Software Producibility for Defense
2010-01-01
the hazard of a single system failing can often be associated with a much larger aggregate of systems, often spread across a wide geography. The...four words: fault, error, failure, and hazard. These are defined and illustrated in Box 4.2. Information Loss and Traceability As noted above, the...design information when payback is uncertain, diffuse, or most likely far in the future. A goal in formulating incentive models that motivate developer
Officer Career Development: Reactions of Two Unrestricted Line Communities to Detailers
1987-08-01
self-esteem scale (Rosenberg, 1979) (Cronbach alpha = .82). Evaluation of Job History (Box 2): 1. "What is your evaluation of the following...rating scales, which are vulnerable to "leniency error" (Kerlinger, 1965). That is, constituents may have evaluated detailers more favorably than they...communication in bargaining. Human Communication Research, 8, 262-280. Rosenberg, M. (1979). Conceiving the self. New York: Basic Books. Turnbull, A. A
Optical photometry of TX0506+056
NASA Astrophysics Data System (ADS)
Keel, William; Santander, Marcos
2017-10-01
The blazar TX0506+056 has attracted recent attention through its location in the error box of a high-energy IceCube neutrino detection (https://gcn.gsfc.nasa.gov/gcn3/21916.gcn3) and through gamma-ray flaring (ATel #10791). We report recent photometry of TX0506+056 obtained in Johnson V and Cousins R passbands using the 1-meter Kapteyn telescope at La Palma, operated remotely by the SARA consortium.
VLT/X-Shooter spectrum of the blazar TXS 0506+056 (located inside the IceCube-170922A error box)
NASA Astrophysics Data System (ADS)
Coleiro, Alexis; Chaty, Sylvain
2017-10-01
The blazar TXS 0506+056 (PMN J0509+0541) is currently reported to show increased gamma-ray and optical activity (ATel #10791, #10792, #10794, #10799, #10801, #10817, #10830, #10831, #10838) and has been proposed as the counterpart to the high-energy neutrino event IceCube-170922A (https://gcn.gsfc.nasa.gov/notices_amon/50579430_130033.amon).
NASA Technical Reports Server (NTRS)
Russell, Samuel S.; Lansing, Matthew D.
1997-01-01
This effort used a novel method of acquiring strains, called Sub-pixel Digital Video Image Correlation (SDVIC), on impact-damaged Kevlar/epoxy filament-wound pressure vessels during a proof test. To predict the burst pressure, the hoop strain field distribution around the impact location from three vessels was used to train a neural network. The network was then tested on additional pressure vessels. Several variations on the network were tried; the best results were obtained using a single hidden layer. SDVIC is a full-field, non-contact computer vision technique which provides in-plane deformation and strain data over a load differential. This method was used to determine hoop and axial displacements, hoop and axial linear strains, and the in-plane shear strains and rotations in the regions surrounding impact sites in filament-wound pressure vessels (FWPV) during proof loading by internal pressurization. The relationship between these deformation measurements and the remaining life of the pressure vessels, however, requires a complex theoretical model or numerical simulation. Both of these techniques are time consuming and complicated. Previous results using neural network methods had been successful in predicting the burst pressure for graphite/epoxy pressure vessels based upon acoustic emission (AE) measurements in similar tests. The neural network associates the character of the AE amplitude distribution, which depends upon the extent of impact damage, with the burst pressure. Similarly, higher amounts of impact damage are theorized to cause a higher strain concentration in the damage-affected zone at a given pressure and to result in lower burst pressures. This relationship suggests that a neural network might be able to find an empirical relationship between the SDVIC strain field data and the burst pressure, analogous to the AE method, with greater speed and simplicity than theoretical or finite element modeling. The process of testing SDVIC neural network analysis and some encouraging preliminary results are presented in this paper. Details are given concerning the processing of SDVIC output data such that it may be used as back-propagation neural network (BPNN) input data. The software written to perform this processing and the BPNN algorithm are also discussed. With limited training, test results indicate an average error in burst pressure prediction of approximately six percent.
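As a rough illustration of the mapping from strain-field descriptors to burst pressure, the sketch below trains a single-hidden-layer feed-forward regressor (a stand-in for the paper's BPNN) on hypothetical feature vectors. The feature construction, units, and network size are assumptions for demonstration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: each row is a reduced hoop-strain field descriptor
# around the impact site; the target is a measured burst pressure (illustrative units).
rng = np.random.default_rng(2)
X_train = rng.normal(size=(30, 16))        # e.g. binned hoop strains near the impact
y_train = 3000 + 200 * X_train[:, 0] + rng.normal(scale=50, size=30)

scaler = StandardScaler().fit(X_train)
net = MLPRegressor(hidden_layer_sizes=(8,),   # single hidden layer, as in the study
                   max_iter=5000, random_state=0)
net.fit(scaler.transform(X_train), y_train)

# Predict burst pressures for unseen (here, synthetic) vessels.
X_test = rng.normal(size=(5, 16))
print(net.predict(scaler.transform(X_test)))
```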
Effect of rain gauge density over the accuracy of rainfall: a case study over Bangalore, India.
Mishra, Anoop Kumar
2013-12-01
Rainfall is an extremely variable parameter in both space and time. Rain gauge density is crucial for quantifying the rainfall amount over a region, and the accuracy of rainfall estimates is highly dependent on the density and distribution of rain gauge stations. The Indian Space Research Organisation (ISRO) has installed a number of Automatic Weather Station (AWS) rain gauges over the Indian region to study rainfall. In this paper, the effect of rain gauge density on daily accumulated rainfall is analyzed using ISRO AWS gauge observations. A 50 km × 50 km box over the southern part of India (Bangalore) with a good density of rain gauges is identified for this purpose. The number of rain gauges is varied from 1 to 8 in the 50 km box to study the variation in the daily accumulated rainfall. Rainfall rates from neighbouring stations are also compared, the change in rainfall as a function of gauge spacing is studied, and the use of gauge-calibrated satellite observations to fill in for gauge stations is examined. It is found that correlation coefficients (CC) decrease from 82% to 21% as gauge spacing increases from 5 km to 40 km, while root mean square error (RMSE) increases from 8.29 mm to 51.27 mm over the same range. Considering 8 rain gauges as a standard representative of rainfall over the region, the absolute error increases from 15% to 64% as the number of gauges is decreased from 7 to 1. Small errors are reported when 4 to 7 rain gauges represent the 50 km area; however, reduction to 3 or fewer rain gauges results in significant error. It is also observed that the use of gauge-calibrated satellite observations significantly improves the rainfall estimation over the region when very few rain gauge observations are available.
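The two reported accuracy measures are straightforward to compute; the sketch below evaluates the correlation coefficient and RMSE of a sparse-gauge estimate against a denser reference average, using synthetic daily rainfall in place of the ISRO AWS records.

```python
import numpy as np

def cc_and_rmse(reference, subset):
    """Correlation coefficient and RMSE between daily rainfall from a dense
    reference network and a sparser subset of gauges (illustrative helper)."""
    cc = np.corrcoef(reference, subset)[0, 1]
    rmse = np.sqrt(np.mean((reference - subset) ** 2))
    return cc, rmse

# Hypothetical daily accumulated rainfall (mm) over the 50 km x 50 km box:
# 'reference' averages all 8 gauges, 'sparse' averages only 2 of them.
rng = np.random.default_rng(3)
gauges = rng.gamma(shape=1.5, scale=8.0, size=(90, 8))   # 90 days x 8 gauges
reference = gauges.mean(axis=1)
sparse = gauges[:, :2].mean(axis=1)

print(cc_and_rmse(reference, sparse))
```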
O'Connor, Kelly M; Rittenhouse, Chadwick D; Millspaugh, Joshua J; Rittenhouse, Tracy A G
2015-01-01
Box turtles (Terrapene carolina) are widely distributed but vulnerable to population decline across their range. Using distance sampling, morphometric data, and an index of carapace damage, we surveyed three-toed box turtles (Terrapene carolina triunguis) at 2 sites in central Missouri and compared differences in detection probabilities when transects were walked by one or two observers. Our estimated turtle density within forested cover was lower at the Thomas S. Baskett Wildlife Research and Education Center, a site dominated by eastern hardwood forest (d = 1.85 turtles/ha, 95% CI [1.13, 3.03]), than at the Prairie Fork Conservation Area, a site containing a mix of open field and hardwood forest (d = 4.14 turtles/ha, 95% CI [1.99, 8.62]). Turtles at Baskett were significantly older and larger than turtles at Prairie Fork. Damage to the carapace did not differ significantly between the 2 populations despite the more prevalent habitat management, including mowing and prescribed fire, at Prairie Fork. We achieved improved estimates of density using two rather than one observer at Prairie Fork, but negligible differences in density estimates between the two methods at Baskett. Error associated with probability of detection decreased at both sites with the addition of a second observer. We provide demographic data that suggest the use of a range of habitat conditions by three-toed box turtles. This case study suggests that habitat management practices and their impacts on habitat composition may be a cause of the differences observed in our focal populations of turtles.
Henri Becquerel: serendipitous brilliance
NASA Astrophysics Data System (ADS)
Margaritondo, Giorgio
2008-06-01
Serendipity has always been an attendant to great science. Arno Penzias and Robert Wilson discovered the cosmic background radiation after first mistaking it for the effect of pigeon droppings on their microwave antenna. US spy satellites detected gamma-ray bursts when surveying the sky for evidence of secret Soviet nuclear tests during the Cold War. Satyendra Bose arrived at Bose-Einstein statistics only after discovering that a mathematical error explained the experimental data concerning the photoelectric effect. In the words of science-fiction writer Isaac Asimov, "The most exciting phrase in science is not 'Eureka!', but rather, 'That's funny...'.
Lausch, Ekkehart; Hermanns, Pia; Farin, Henner F; Alanay, Yasemin; Unger, Sheila; Nikkel, Sarah; Steinwender, Christoph; Scherer, Gerd; Spranger, Jürgen; Zabel, Bernhard; Kispert, Andreas; Superti-Furga, Andrea
2008-11-01
Members of the evolutionarily conserved T-box family of transcription factors are important players in developmental processes that include mesoderm formation and patterning and organogenesis both in vertebrates and invertebrates. The importance of T-box genes for human development is illustrated by the association between mutations in several of the 17 human family members and congenital errors of morphogenesis that include cardiac, craniofacial, and limb malformations. We identified two unrelated individuals with a complex cranial, cervical, auricular, and skeletal malformation syndrome with scapular and pelvic hypoplasia (Cousin syndrome) that recapitulates the dysmorphic phenotype seen in the Tbx15-deficient mice, droopy ear. Both affected individuals were homozygous for genomic TBX15 mutations that resulted in truncation of the protein and addition of a stretch of missense amino acids. Although the mutant proteins had an intact T-box and were able to bind to their target DNA sequence in vitro, the missense amino acid sequence directed them to early degradation, and cellular levels were markedly reduced. We conclude that Cousin syndrome is caused by TBX15 insufficiency and is thus the human counterpart of the droopy ear mouse.
Predicting neural network firing pattern from phase resetting curve
NASA Astrophysics Data System (ADS)
Oprisan, Sorinel; Oprisan, Ana
2007-04-01
Autonomous neural networks called central pattern generators (CPG) are composed of endogenously bursting neurons and produce rhythmic activities, such as flying, swimming, walking, chewing, etc. Simplified CPGs for quadrupedal locomotion and swimming are modeled by a ring of neural oscillators such that the output of one oscillator constitutes the input for the subsequent neural oscillator. The phase response curve (PRC) theory discards the detailed conductance-based description of the component neurons of a network and reduces them to ``black boxes'' characterized by a transfer function, which tabulates the transient change in the intrinsic period of a neural oscillator subject to external stimuli. Based on open-loop PRC, we were able to successfully predict the phase-locked period and relative phase between neurons in a half-center network. We derived existence and stability criteria for heterogeneous ring neural networks that are in good agreement with experimental data.
NASA Astrophysics Data System (ADS)
Watkins, N. W.; Rypdal, M.; Lovsletten, O.
2012-12-01
For all natural hazards, the question of when the next "extreme event" (cf. Taleb's "black swans") is expected is of obvious importance. In the environmental sciences users often frame such questions in terms of average "return periods", e.g. "is an X meter rise in the Thames water level a 1-in-Y year event?". Frequently, however, we also care about the emergence of correlation, and whether the probability of several big events occurring in close succession is truly independent, i.e. whether the black swans are "bunched". A "big event", or a "burst", defined by its integrated signal above a threshold, might be a single, very large event, or could instead be a correlated series of "smaller" (i.e. less wildly fluctuating) events. Several available stochastic approaches provide quantitative information about such bursts, including extreme value theory (EVT), the theory of records, level sets, sojourn times, and models of space-time "avalanches" of activity in non-equilibrium systems. Some focus more on the probability of single large events; others are more concerned with extended dwell times above a given spatiotemporal threshold. However, the state of the art is not yet fully integrated, and the above-mentioned approaches differ in fundamental aspects. EVT is perhaps the best known in the geosciences. It is concerned with the distribution obeyed by the extremes of datasets, e.g. the 100 values obtained by considering the largest daily temperature recorded in each of the years of a century. However, the pioneering work from the 1920s on which EVT originally built was based on independent, identically distributed samples, and took no account of the memory and correlation that characterise many natural hazard time series. Ignoring this would fundamentally limit our ability to forecast, so much subsequent activity has been devoted to extending EVT to encompass dependence. A second group of approaches, by contrast, has notions of time, and thus possible non-stationarity, explicitly built in. In record-breaking statistics, a record is defined in the sense used in everyday language, as the largest value yet recorded in a time series; for example, the 2004 Sumatran Boxing Day earthquake was at the time the largest to be digitally recorded. The third group of approaches (e.g. avalanches) is explicitly spatiotemporal and so also includes spatial structure. This presentation will discuss two examples of our recent work on the burst problem. We will show numerical results extending the preliminary results presented in [Watkins et al., PRE, 2009] using a standard additive model, linear fractional stable motion (LFSM). LFSM explicitly includes both heavy tails and long-range dependence, allowing us to study how these two effects compete in determining the burst duration and size exponent probability distributions. We will contrast these simulations with new analytical studies of bursts in a multiplicative process, the multifractal random walk (MRW). We will present an analytical derivation for the scaling of the burst durations and make a preliminary comparison with data from the AE index from solar-terrestrial physics. We believe our result is more generally applicable than the MRW model, and that it applies to a broad class of multifractal processes.
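A "burst" in the sense used here, an excursion above a threshold characterised by its duration and its integrated signal above that threshold, can be extracted from any time series in a few lines. The sketch below does this for a synthetic heavy-tailed series; the threshold and the series itself are illustrative stand-ins, not the AE-index data or the LFSM/MRW models discussed above.

```python
import numpy as np

def bursts(signal, threshold):
    """Return (duration, size) pairs for excursions of a series above a threshold,
    where size is the integrated signal above the threshold during the excursion."""
    above = signal > threshold
    out, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            excess = signal[start:i] - threshold
            out.append((i - start, excess.sum()))
            start = None
    if start is not None:                      # burst still open at the end of the record
        out.append((len(signal) - start, (signal[start:] - threshold).sum()))
    return out

# Illustrative heavy-tailed series standing in for, e.g., an AE-index record.
rng = np.random.default_rng(4)
series = np.abs(rng.standard_cauchy(10_000))
print(bursts(series, threshold=5.0)[:5])
```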
Measurement of radon concentration in water using the portable radon survey meter.
Yokoyama, S; Mori, N; Shimo, M; Fukushi, M; Ohnuma, S
2011-07-01
A method for measuring radon in water using the portable radon survey meter (RnSM) was developed. A container with a propeller was used to stir the water samples and release radon from the water into the air in a sample box of the RnSM. With this method, the measurement error would be <20% when the radon concentration in the mineral water is >20 Bq l(-1).
Two-Dimensional Arrays of Neutral Atom Quantum Gates
2012-10-20
Box 12211, Research Triangle Park, NC 27709-2211. Subject terms: quantum computing, Rydberg atoms, entanglement. Mark Saffman, University of...Nature Physics, (01 2009): 0. doi: 10.1038/nphys1178. 10/19/2012 9.00 K. Mølmer, M. Saffman. Scaling the neutral-atom Rydberg gate quantum computer by...Saffman, E. Brion, K. Mølmer. Error Correction in Ensemble Registers for Quantum Repeaters and Quantum Computers, Physical Review Letters, (3 2008): 0
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
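One of the transformational approaches compared here, the Box-Cox transformation with a fixed parameter, can be sketched as follows: residuals are formed in transformed flow space, where their spread is far less dependent on flow magnitude. The flows, the offset, and the value of the transformation parameter in this sketch are illustrative assumptions, not the study's catchment data.

```python
import numpy as np

def boxcox(q, lam, offset=0.0):
    """Box-Cox transform commonly used to stabilise residual-error variance
    in streamflow (lam = 0 reduces to a log transform)."""
    qo = q + offset
    return np.log(qo) if lam == 0 else (qo**lam - 1.0) / lam

# Hypothetical observed and simulated daily flows (illustrative values).
rng = np.random.default_rng(5)
q_obs = rng.lognormal(mean=1.0, sigma=1.0, size=365)
q_sim = q_obs * rng.lognormal(mean=0.0, sigma=0.3, size=365)

lam = 0.2                                     # one option: a fixed transformation parameter
resid = boxcox(q_obs, lam) - boxcox(q_sim, lam)
print(resid.std())    # residual spread is far less flow-dependent than in raw flow space
```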
IPTV multicast with peer-assisted lossy error control
NASA Astrophysics Data System (ADS)
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution in which the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
Azin, Mahdieh; Zangiabadi, Nasser; Iranmanesh, Farhad; Baneshi, Mohammad Reza; Banihashem, Seyedshahab
2016-01-01
Background Intermittent theta burst stimulation (iTBS) is a repetitive transcranial magnetic stimulation (rTMS) protocol that influences cortical excitability and motor function recovery. Objectives This study aimed to investigate the effects of iTBS on manual dexterity and hand motor imagery in multiple sclerosis (MS) patients. Methods Thirty-six MS patients were non-randomly assigned into sham (control) or iTBS groups. Then, iTBS was delivered to the primary motor cortex for ten days over two consecutive weeks. The patients’ manual dexterity was assessed using the nine-hole peg test (9HPT) and the Box and Block Test (BBT), while the hand motor imagery was assessed with the hand mental rotation task (HMRT). Results iTBS group showed a reduction in the time required to complete the 9HPT (mean difference = -3.05, P = 0.002), and an increase in the number of blocks transferred in one minute in the BBT (mean difference = 8.9, P = 0.001) when compared to the control group. Furthermore, there was no significant difference between the two groups in terms of the reaction time (P = 0.761) and response accuracy rate (P = 0.482) in the HMRT. Conclusions When iTBS was applied over the primary motor cortex, it significantly improved manual dexterity, but had no significant effect on the hand motor imagery ability in MS patients. PMID:28180015
POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models
Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.
2014-01-01
The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
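For the familiar univariate special case, power for an F test reduces to a noncentral-F tail probability. The sketch below is a generic illustration of that calculation; it is not the POWERLIB code and omits the UNIREP/MULTIREP corrections and confidence limits, and the degrees of freedom and noncentrality value are invented.

```python
from scipy import stats

def glm_power(ncp, dfn, dfd, alpha=0.05):
    """Power of an F test in a univariate linear model with Gaussian errors,
    given the noncentrality parameter and degrees of freedom (generic sketch,
    not the POWERLIB implementation)."""
    f_crit = stats.f.ppf(1.0 - alpha, dfn, dfd)          # critical value under the null
    return stats.ncf.sf(f_crit, dfn, dfd, ncp)           # tail probability under the alternative

# Example: 1 numerator df, 28 error df, noncentrality 10 (illustrative values).
print(glm_power(ncp=10.0, dfn=1, dfd=28))
```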
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, were compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used as data preprocessing.
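The three accuracy measures used for the comparison are simple to reproduce; the helper below computes MAD, RMSE and MAPE for a hold-out sample. The gold-price values and the two forecast series are invented for illustration.

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Mean absolute deviation, root mean square error, and mean absolute
    percentage error for a univariate forecast (illustrative helper)."""
    actual = np.asarray(actual, dtype=float)
    e = actual - np.asarray(forecast, dtype=float)
    mad = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e**2))
    mape = 100.0 * np.mean(np.abs(e / actual))
    return mad, rmse, mape

# Hypothetical gold-price hold-out sample and two competing forecasts.
actual = [1800.0, 1812.5, 1795.0, 1820.0, 1830.0]
ann = [1805.0, 1810.0, 1800.0, 1815.0, 1828.0]
sarima = [1790.0, 1800.0, 1805.0, 1808.0, 1818.0]

print("ANN   :", forecast_errors(actual, ann))
print("SARIMA:", forecast_errors(actual, sarima))
```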
Jensen, Katrine; Ringsted, Charlotte; Hansen, Henrik Jessen; Petersen, René Horsleben; Konge, Lars
2014-06-01
Video-assisted thoracic surgery is gradually replacing conventional open thoracotomy as the method of choice for the treatment of early-stage non-small cell lung cancers, and thoracic surgical trainees must learn and master this technique. Simulation-based training could help trainees overcome the first part of the learning curve, but no virtual-reality simulators for thoracoscopy are commercially available. This study aimed to investigate whether training on a laparoscopic simulator enables trainees to perform a thoracoscopic lobectomy. Twenty-eight surgical residents were randomized to either virtual-reality training on a nephrectomy module or traditional black-box simulator training. After a retention period they performed a thoracoscopic lobectomy on a porcine model and their performance was scored using a previously validated assessment tool. The groups did not differ in age or gender. All participants were able to complete the lobectomy. The performance of the black-box group was significantly faster during the test scenario than the virtual-reality group: 26.6 min (SD 6.7 min) versus 32.7 min (SD 7.5 min). No difference existed between the two groups when comparing bleeding and anatomical and non-anatomical errors. Simulation-based training and targeted instructions enabled the trainees to perform a simulated thoracoscopic lobectomy. Traditional black-box training was more effective than virtual-reality laparoscopy training. Thus, a dedicated simulator for thoracoscopy should be available before establishing systematic virtual-reality training programs for trainees in thoracic surgery.
Völter, Christoph J; Call, Josep
2012-09-01
What kind of information animals use when solving problems is a controversial topic. Previous research suggests that, in some situations, great apes prefer to use causally relevant cues over arbitrary ones. To further examine to what extent great apes are able to use information about causal relations, we presented three different puzzle box problems to the four nonhuman great ape species. Of primary interest here was a comparison between one group of apes that received visual access to the functional mechanisms of the puzzle boxes and one group that did not. Apes' performance in the first two, less complex puzzle boxes revealed that they are able to solve such problems by means of trial-and-error learning, requiring no information about the causal structure of the problem. However, visual inspection of the functional mechanisms of the puzzle boxes reduced the amount of time needed to solve the problems. In the case of the most complex problem, which required the use of a crank, visual feedback about what happened when the handle of the crank was turned was necessary for the apes to solve the task. Once the solution was acquired, however, visual feedback was no longer required. We conclude that visual feedback about the consequences of their actions helps great apes to solve complex problems. As the crank task matches the basic requirements of vertical string pulling in birds, the present results are discussed in light of recent findings with corvids.
Sexual orientation and spatial memory.
Cánovas, Ma Rosa; Cimadevilla, José Manuel
2011-11-01
The present study aimed at determining the influence of sexual orientation in human spatial learning and memory. Participants performed the Boxes Room, a virtual reality version of the Holeboard. In Experiment I, a reference memory task, the position of the hidden rewards remained constant during the whole experiment. In Experiment II, a working memory task, the position of rewards changed between blocks. Each block consisted of two trials: One trial for acquisition and another for retrieval. The results of Experiment I showed that heterosexual men performed better than homosexual men and heterosexual women. They found the rewarded boxes faster. Moreover, homosexual participants committed more errors than heterosexuals. Experiment II showed that working memory abilities are the same in groups of different sexual orientation. These results suggest that sexual orientation is related to spatial navigation abilities, but mostly in men, and limited to reference memory, which depends more on the function of the hippocampal system.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
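A depth-4 block interleaver of the kind described here can be sketched directly: codewords are written row-wise into an array and the channel stream is read column-wise, so that a burst of consecutive channel errors is spread across four RS codewords. The toy example below uses short 8-symbol "codewords" rather than full (255,223) blocks.

```python
def interleave(codewords, depth=4):
    """Write 'depth' Reed-Solomon codewords row-wise and read the array column-wise,
    so a burst of channel errors is spread across several codewords."""
    assert len(codewords) == depth
    n = len(codewords[0])
    return [codewords[row][col] for col in range(n) for row in range(depth)]

def deinterleave(symbols, depth=4):
    """Inverse operation performed ahead of the RS decoder."""
    n = len(symbols) // depth
    rows = [[None] * n for _ in range(depth)]
    for idx, s in enumerate(symbols):
        rows[idx % depth][idx // depth] = s
    return rows

# Toy example with 8-symbol 'codewords' instead of full (255,223) RS blocks.
cws = [[f"c{r}s{c}" for c in range(8)] for r in range(4)]
channel_stream = interleave(cws)
assert deinterleave(channel_stream) == cws
```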
NASA Astrophysics Data System (ADS)
Charania, A.
2002-01-01
The envisioned future may include continuously operating outposts and networks on other worlds supporting human and robotic exploration. Given this possibility, a feasibility analysis is performed of a communications architecture based upon reflection of ion trails from meteors in planetary atmospheres. Meteor Burst (MB) communication systems use meteoritic impacts on planetary atmospheres as two-way, short-burst communication nodes. MB systems consist of semi-continuous, low-bandwidth networks. These systems possess both long-distance capability (hundreds of kilometers) and lower susceptibility to atmospheric perturbations. Every day millions of meteors enter Earth's upper atmosphere with enough energy to ionize gas molecules sufficiently to reflect radio waves and facilitate communications beyond line of sight. The ionized trail occurs at altitudes of about 100 km with lengths reaching 30 km. The trail sustains itself long enough to support typical network distances of 1800 km. The initial step in using meteors in this fashion is the detection of a usable ionic trail. A probe signal is sent from one station to another in the network. If a meteor trail is present, the probe signal is reflected to a receiving station. When another station receives the probe signal, it sends an acknowledgement to the originating station to proceed with transfer on that trail in a high-speed digital data burst. This probe-main signal handshaking occurs each time a burst of data is sent and can occur several times over the course of just one usable meteor trail. Given the need for non-data-carrying probe signals and error-correcting bits, typical transmission data rates vary from a few kilobits per second to over 100 kilobits per second. On Earth, MB links open up hundreds of times per hour depending upon daily and seasonal variations. Meteor bursts were first noticed in detail in the 1930s. With the capabilities of modern computer processing, MB systems have become both technically feasible and commercially viable for selected applications on Earth. Terrestrial applications currently include weather monitoring, river monitoring, transport tracking, emergency detection, two-way messaging, and vehicle performance monitoring. Translation of such a system beyond Earth requires an atmosphere; therefore, Martian analogues of such a system are presented. Such systems could support planetary mobility (for humans and robots), weather stations, and emergency communications while minimizing the need for massive orbital telecommunication constellations. For this investigation, a conceptual Meteor Burst (MB) communication architecture is developed to assess potential viability in supporting planetary exploration missions on Mars. Current terrestrial systems are extrapolated to generate candidate network architectures for selected science applications. Technology road-mapping activities are also performed on these architectures.
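The probe-and-acknowledge handshake described above can be caricatured as a simple loop: keep probing until a trail reflects the probe, then push a short burst of frames before the trail decays. The trail probability and burst capacity in the sketch below are arbitrary illustrative numbers, not derived from meteor statistics.

```python
import random

def mb_transfer(data_frames, p_trail=0.05, burst_capacity=4):
    """Toy sketch of a meteor-burst link: probe until a usable ionised trail
    appears, then send a short high-speed burst of frames while it lasts."""
    sent, probes = 0, 0
    while sent < len(data_frames):
        probes += 1
        if random.random() < p_trail:          # probe reflected and acknowledged
            for _ in range(burst_capacity):    # burst transfer while the trail persists
                if sent == len(data_frames):
                    break
                sent += 1
    return probes

random.seed(0)
print(mb_transfer(list(range(20))))  # number of probe signals needed to move 20 frames
```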
A new method for determining the optimal lagged ensemble
DelSole, T.; Tippett, M. K.; Pegion, K.
2017-01-01
We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error. The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from version 2 of the Climate Forecast System (CFSv2). For leads greater than a week, little improvement is found in the MJO forecast skill when ensembles larger than 5 days are used or initializations occur more than 4 times per day. We find that if the initialization frequency is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
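Given an estimate of the cross-lead error covariance matrix, the optimal lagged-ensemble weights follow from minimising a quadratic form subject to the weights summing to one. The sketch below illustrates that calculation with an invented 4-member covariance matrix; it is not the authors' parameterised estimate from CFSv2 hindcasts.

```python
import numpy as np

def optimal_lag_weights(C):
    """Weights minimising the MSE of a lagged-ensemble mean, given the
    cross-lead error covariance matrix C (weights constrained to sum to one)."""
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)   # proportional to C^{-1} 1
    return w / w.sum()

# Hypothetical 4-member cross-lead error covariance: older initialisations
# (longer leads) have larger error variance and are correlated with newer ones.
C = np.array([[1.0, 0.8, 0.6, 0.5],
              [0.8, 1.2, 0.8, 0.6],
              [0.6, 0.8, 1.5, 0.8],
              [0.5, 0.6, 0.8, 1.9]])

w = optimal_lag_weights(C)
mse_equal = np.ones(4) @ C @ np.ones(4) / 16   # equally weighted lagged-ensemble mean
mse_opt = w @ C @ w                            # optimally weighted mean
print(w, mse_equal, mse_opt)
```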
Modeling the Swift BAT Trigger Algorithm with Machine Learning
NASA Technical Reports Server (NTRS)
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2015-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of eta_0 ≈ 0.48 (+0.41/-0.23) Gpc^-3 yr^-1 with power-law indices of eta_1 ≈ 1.7 (+0.6/-0.5) and eta_2 ≈ -5.9 (+5.7/-0.1) for GRBs above and below a break point of z_1 ≈ 6.8 (+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
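The modelling step, training a classifier on simulated bursts and then reading off detection efficiency as a function of a burst property, can be sketched with off-the-shelf tools. The example below uses a random forest on synthetic features standing in for the simulated GRB sample; the features, labels, and binning variable are all illustrative assumptions, not the Lien et al. simulations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the simulated GRB sample: a few burst properties
# (e.g. peak flux, duration, redshift) and a detected/missed label.
rng = np.random.default_rng(6)
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# Detection efficiency as a function of one feature (a stand-in for redshift):
# average the predicted detection probability within bins of that feature.
proba = clf.predict_proba(X_te)[:, 1]
bins = np.digitize(X_te[:, 2], np.linspace(-2, 2, 9))
eff = [proba[bins == b].mean() for b in np.unique(bins)]
print(eff)
```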
Modeling the Swift Bat Trigger Algorithm with Machine Learning
NASA Technical Reports Server (NTRS)
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2016-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97 percent (≤3 percent error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6 percent (10.4 percent error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48 (+0.41/-0.23) Gpc^-3 yr^-1 with power-law indices of n_1 ≈ 1.7 (+0.6/-0.5) and n_2 ≈ -5.9 (+5.7/-0.1) for GRBs above and below a break point of z_1 ≈ 6.8 (+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce link availability and may introduce burst errors, thus degrading the performance of the system. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC-based tools are used for the prediction of key parameters of an FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) of future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy in tracking data beams, particularly during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.
Adaptive data rate control TDMA systems as a rain attenuation compensation technique
NASA Technical Reports Server (NTRS)
Sato, Masaki; Wakana, Hiromitsu; Takahashi, Takashi; Takeuchi, Makoto; Yamamoto, Minoru
1993-01-01
Rainfall attenuation has a severe effect on signal strength and impairs communication links for future mobile and personal satellite communications using Ka-band and millimeter wave frequencies. As rain attenuation compensation techniques, several methods such as uplink power control, site diversity, and adaptive control of data rate or forward error correction have been proposed. In this paper, we propose a TDMA system that can compensate for rain attenuation by adaptive control of transmission rates. To evaluate the performance of this TDMA terminal, we carried out three types of experiments: experiments using the Japanese CS-3 satellite with Ka-band transponders, in-house IF loop-back experiments, and computer simulations. Experimental results show that this TDMA system has advantages over conventional constant-rate TDMA systems, as a resource-sharing technique, in both bit error rate and the total TDMA burst lengths required for transmitting given information.
A multi-ring optical packet and circuit integrated network with optical buffering.
Furukawa, Hideaki; Shinada, Satoshi; Miyazawa, Takaya; Harai, Hiroaki; Kawasaki, Wataru; Saito, Tatsuhiko; Matsunaga, Koji; Toyozumi, Tatuya; Wada, Naoya
2012-12-17
We developed a new 3 × 3 integrated optical packet and circuit switch-node. Optical buffers and burst-mode erbium-doped fiber amplifiers with gain flatness are installed in the 3 × 3 switch-node. The optical buffer can prevent packet collisions and decrease packet loss. We constructed a multi-ring optical packet and circuit integrated network testbed connecting two single-ring networks and a client network by the 3 × 3 switch-node. For the first time, we demonstrated 244 km fiber transmission and 5-node hopping of multiplexed 14-wavelength 10 Gbps optical paths and 100 Gbps optical packets encapsulating 10 Gigabit Ethernet frames on the testbed. Error-free (frame error rate < 1 × 10^-4) operation was achieved with optical packets of various packet lengths. In addition, successful avoidance of packet collisions by optical buffers was confirmed.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Software Security Knowledge: Training
2011-05-01
eliminating those errors. It can be found at http://cwe.mitre.org/top25. Any programmer who writes code without being aware of those problems and...time on security. Ultimately, these reasons stem from an underlying problem in the software market. Because software is essentially a black box, it is...security of software and start to effect change in the software market. Nevertheless, we still frequently get pushback when we advocate for security
The HEAO A-1 X Ray Source Catalog (Wood Et Al. 1984): Documentation for the Machine-Readable Version
NASA Technical Reports Server (NTRS)
Warren, Wayne H., Jr.
1990-01-01
The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The catalog is a compilation of data for 842 sources detected with the U.S. Naval Research Laboratory Large Area Sky Survey Experiment flown aboard the HEAO 1 satellite. The data include source identifications, positions, error boxes, mean X-ray intensities, and cross identifications to other source designations.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.
1992-01-01
The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document.
Psychophysics of complex auditory and speech stimuli
NASA Astrophysics Data System (ADS)
Pastore, Richard E.
1993-10-01
A major focus of the primary project is the use of different procedures to provide converging evidence on the nature of perceptual spaces for speech categories. Completed research examined initial voiced consonants, with results providing strong evidence that different stimulus properties may cue a phoneme category in different vowel contexts. Thus, /b/ is cued by a rising second formant (F2) with the vowel /a/, requires both F2 and F3 to be rising with /i/, and is independent of the release burst for these vowels. Furthermore, cues for phonetic contrasts are not necessarily symmetric, and the strong dependence of prior speech research on classification procedures may have led to errors. Thus, the opposite (falling F2 and F3) transitions lead to somewhat ambiguous percepts (i.e., not /b/) which may be labeled consistently (as /d/ or /g/) but require a release burst to achieve high category quality and similarity to category exemplars. Ongoing research is examining cues in other vowel contexts and using procedures to evaluate the nature of the interaction between cues for categories of both speech and music.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genova, Alessandro, E-mail: alessandro.genova@rutgers.edu; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu; Ceresoli, Davide, E-mail: davide.ceresoli@cnr.it
2016-06-21
In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of the semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that the employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange-correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and solvated OH(•) radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and the OH(•) radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.
Learning Kinematic Constraints in Laparoscopic Surgery
Huang, Felix C.; Mussa-Ivaldi, Ferdinando A.; Pugh, Carla M.; Patton, James L.
2012-01-01
To better understand how kinematic variables impact learning in surgical training, we devised an interactive environment for simulated laparoscopic maneuvers, using either 1) mechanical constraints typical of a surgical “box-trainer” or 2) virtual constraints in which free hand movements control virtual tool motion. During training, the virtual tool responded to the absolute position in space (Position-Based) or the orientation (Orientation-Based) of a hand-held sensor. Volunteers were further assigned to different sequences of target distances (Near-Far-Near or Far-Near-Far). Training with the Orientation-Based constraint enabled much lower path error and shorter movement times during training, which suggests that tool motion that simply mirrors joint motion is easier to learn. When evaluated in physically constrained (physical box-trainer) conditions, each group exhibited improved performance from training. However, Position-Based training enabled greater reductions in movement error relative to Orientation-Based (mean difference: 14.0 percent; CI: 0.7, 28.6). Furthermore, the Near-Far-Near schedule allowed a greater decrease in task time relative to the Far-Near-Far sequence (mean −13.5 percent, CI: −19.5, −7.5). Training that focused on shallow tool insertion (near targets) might promote more efficient movement strategies by emphasizing the curvature of tool motion. In addition, our findings suggest that an understanding of absolute tool position is critical to coping with mechanical interactions between the tool and trocar. PMID:23293709
Voice recognition software can be used for scientific articles.
Pommergaard, Hans-Christian; Huang, Chenxi; Burcharth, Jacob; Rosenberg, Jacob
2015-02-01
Dictation of scientific articles has been recognised as an efficient method for producing high-quality, first article drafts. However, standardised transcription service by a secretary may not be available for all researchers and voice recognition software (VRS) may therefore be an alternative. The purpose of this study was to evaluate the out-of-the-box accuracy of VRS. Eleven young researchers without dictation experience dictated the first draft of their own scientific article after thorough preparation according to a pre-defined schedule. The dictate transcribed by VRS was compared with the same dictate transcribed by an experienced research secretary, and the effect of adding words to the vocabulary of the VRS was investigated. The number of errors per hundred words was used as outcome. Furthermore, three experienced researchers assessed the subjective readability using a Likert scale (0-10). Dragon Nuance Premium version 12.5 was used as VRS. The median number of errors per hundred words was 18 (range: 8.5-24.3), which improved when 15,000 words were added to the vocabulary. Subjective readability assessment showed that the texts were understandable with a median score of five (range: 3-9), which was improved with the addition of 5,000 words. The out-of-the-box performance of VRS was acceptable and improved after additional words were added. Further studies are needed to investigate the effect of additional software accuracy training.
Genova, Alessandro; Ceresoli, Davide; Pavanello, Michele
2016-06-21
In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange-correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and solvated OH(•) radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and OH(•) radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.
Gurusamy, Kurinchi Selvan; Nagendran, Myura; Toon, Clare D; Davidson, Brian R
2014-03-01
Surgical training has traditionally been one of apprenticeship, where the surgical trainee learns to perform surgery under the supervision of a trained surgeon. This is time consuming, costly, and of variable effectiveness. Training using a box model physical simulator is an option to supplement standard training. However, the value of this modality on trainees with limited prior laparoscopic experience is unknown. To compare the benefits and harms of box model training for surgical trainees with limited prior laparoscopic experience versus standard surgical training or supplementary animal model training. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index Expanded to May 2013. We planned to include all randomised clinical trials comparing box model trainers versus other forms of training including standard laparoscopic training and supplementary animal model training in surgical trainees with limited prior laparoscopic experience. We also planned to include trials comparing different methods of box model training. Two authors independently identified trials and collected data. We analysed the data with both the fixed-effect and the random-effects models using Review Manager 5. For each outcome, we calculated the risk ratio (RR), mean difference (MD), or standardised mean difference (SMD) with 95% confidence intervals (CI) based on intention-to-treat analysis whenever possible. We identified eight trials that met the inclusion criteria. One trial including 17 surgical trainees did not contribute to the meta-analysis. We included seven trials (249 surgical trainees belonging to various postgraduate years ranging from year one to four) in which the participants were randomised to supplementary box model training (122 trainees) versus standard training (127 trainees). Only one trial (50 trainees) was at low risk of bias. The box trainers used in all the seven trials were video trainers. Six trials were conducted in USA and one trial in Canada. The surgeries in which the final assessments were made included laparoscopic total extraperitoneal hernia repairs, laparoscopic cholecystectomy, laparoscopic tubal ligation, laparoscopic partial salpingectomy, and laparoscopic bilateral mid-segment salpingectomy. The final assessments were made on a single operative procedure.There were no deaths in three trials (0/82 (0%) supplementary box model training versus 0/86 (0%) standard training; RR not estimable; very low quality evidence). The other trials did not report mortality. The estimated effect on serious adverse events was compatible with benefit and harm (three trials; 168 patients; 0/82 (0%) supplementary box model training versus 1/86 (1.1%) standard training; RR 0.36; 95% CI 0.02 to 8.43; very low quality evidence). None of the trials reported patient quality of life. The operating time was significantly shorter in the supplementary box model training group versus the standard training group (1 trial; 50 patients; MD -6.50 minutes; 95% CI -10.85 to -2.15). The proportion of patients who were discharged as day-surgery was significantly higher in the supplementary box model training group versus the standard training group (1 trial; 50 patients; 24/24 (100%) supplementary box model training versus 15/26 (57.7%) standard training; RR 1.71; 95% CI 1.23 to 2.37). None of the trials reported trainee satisfaction. 
The operating performance was significantly better in the supplementary box model training group versus the standard training group (seven trials; 249 trainees; SMD 0.84; 95% CI 0.57 to 1.10). None of the trials compared box model training versus animal model training or versus different methods of box model training. There is insufficient evidence to determine whether laparoscopic box model training reduces mortality or morbidity. There is very low quality evidence that it improves technical skills compared with standard surgical training in trainees with limited previous laparoscopic experience. It may also decrease operating time and increase the proportion of patients who were discharged as day-surgery in the first total extraperitoneal hernia repair after box model training. However, the duration of the benefit of box model training is unknown. Further well-designed trials at low risk of bias and random errors are necessary. Such trials should assess the long-term impact of box model training on clinical outcomes and compare box training with other forms of training.
InSAR time series analysis of ALOS-2 ScanSAR data and its implications for NISAR
NASA Astrophysics Data System (ADS)
Liang, C.; Liu, Z.; Fielding, E. J.; Huang, M. H.; Burgmann, R.
2017-12-01
JAXA's ALOS-2 mission was launched on May 24, 2014. It operates at L-band and can acquire data in multiple modes. ScanSAR is the main operational mode and has a 350 km swath, somewhat larger than the 250 km swath of the SweepSAR mode planned for the NASA-ISRO SAR (NISAR) mission. ALOS-2 has been acquiring a wealth of L-band InSAR data. These data are of particular value in areas of dense vegetation and high relief. The InSAR technical development for ALOS-2 also supports preparation for the upcoming NISAR mission. We have been developing advanced InSAR processing techniques for ALOS-2 over the past two years. Here, we report on the important issues in InSAR time series analysis using ALOS-2 ScanSAR data. First, we present ionospheric correction techniques for both regular ScanSAR InSAR and MAI (multiple aperture InSAR) ScanSAR InSAR. We demonstrate the large-scale ionospheric signals in the ScanSAR interferograms; they can be well mitigated by the correction techniques. Second, based on our technical development of burst-by-burst InSAR processing for ALOS-2 ScanSAR data, we find that the azimuth Frequency Modulation (FM) rate error is an important issue not only for MAI but also for regular InSAR time series analysis. We identify phase errors caused by azimuth FM rate errors during the focusing process of the ALOS-2 product. The consequence is mostly a range ramp in the InSAR time series result. This error exists in all of the time series results we have processed. We present the correction techniques for this error following a theoretical analysis. After corrections, we present high-quality ALOS-2 ScanSAR InSAR time series results in a number of areas. The development for ALOS-2 provides important implications for the NISAR mission. For example, we find that in most cases the relative azimuth shift caused by the ionosphere can be as large as 4 m in a large area imaged by ScanSAR. This azimuth shift is half of the 8 m azimuth resolution of the SweepSAR mode planned for NISAR, which implies that a good coregistration strategy for NISAR's SweepSAR mode is geometrical coregistration followed by MAI or spectral diversity analysis. In addition, our development provides implications for the processing and system parameter requirements of NISAR, such as the accuracy requirement of the azimuth FM rate and range timing.
Matter power spectrum and the challenge of percent accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Aurel; Teyssier, Romain; Potter, Doug
2016-04-01
Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc^-1 and to within three percent at k ≤ 10 h Mpc^-1. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc^-1. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h^-1 Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10^9 h^-1 M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
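The power spectrum measurement discussed above is, at its core, a binned average of squared Fourier amplitudes of the overdensity field. The sketch below shows a minimal estimator on a regular grid, assuming the density has already been assigned to the mesh; production pipelines such as those compared in the paper additionally deconvolve the mass-assignment window, handle aliasing, and subtract shot noise.

```python
import numpy as np

def matter_power_spectrum(delta, box_size, n_bins=40):
    """Estimate the isotropic power spectrum P(k) from an overdensity grid.

    delta    : 3-D array of the overdensity field on a regular grid
    box_size : comoving box side length (e.g. in h^-1 Mpc)

    A bare-bones estimator for illustration only.
    """
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) * (box_size / n) ** 3   # approximate continuous FT
    power = np.abs(delta_k) ** 2 / box_size ** 3          # P(k) = |delta_k|^2 / V

    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, kz, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

    bins = np.linspace(2 * np.pi / box_size, k_mag.max(), n_bins + 1)
    which = np.digitize(k_mag.ravel(), bins)
    pk = np.array([power.ravel()[which == i].mean() if np.any(which == i) else np.nan
                   for i in range(1, n_bins + 1)])
    k_mid = 0.5 * (bins[1:] + bins[:-1])
    return k_mid, pk
```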
Association between split selection instability and predictive error in survival trees.
Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T
2006-01-01
To evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned by using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation, respectively, in the root node. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest predictive error. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend the use of this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform equally well compared to pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.
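One of the instability measures used above, the covariate selection entropy in the root node, can be computed directly from the covariate chosen at the root of the tree grown on each bootstrap sample. The sketch below is a simplified illustration of that idea with hypothetical covariate names; it is not the authors' implementation.

```python
import numpy as np
from collections import Counter

def selection_entropy(root_covariates):
    """Covariate selection entropy over bootstrap replications.

    root_covariates : list of the covariate selected in the root node of the
    tree grown on each bootstrap sample, e.g. ["age", "age", "bilirubin", ...].

    Entropy is 0 when the same covariate is always selected (stable splits)
    and grows as the selection becomes unstable; a simplified version of the
    instability measures discussed above.
    """
    counts = np.array(list(Counter(root_covariates).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical bootstrap results for two algorithms (covariate names are made up)
print(selection_entropy(["age"] * 90 + ["stone size"] * 10))                        # fairly stable
print(selection_entropy(["age"] * 40 + ["stone size"] * 35 + ["bilirubin"] * 25))   # unstable
```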
Hao, Jifu; Fang, Xinsheng; Zhou, Yanfang; Wang, Jianzhu; Guo, Fengguang; Li, Fei; Peng, Xinsheng
2011-01-01
The purpose of the present study was to optimize a solid lipid nanoparticle (SLN) of chloramphenicol by investigating the relationship between design factors and experimental data using response surface methodology. A Box-Behnken design was constructed using solid lipid (X1), surfactant (X2), and drug/lipid ratio (X3) level as independent factors. SLN was successfully prepared by a modified method of melt-emulsion ultrasonication and low temperature-solidification technique using glyceryl monostearate as the solid lipid, and poloxamer 188 as the surfactant. The dependent variables were entrapment efficiency (EE), drug loading (DL), and turbidity. Properties of SLN such as the morphology, particle size, zeta potential, EE, DL, and drug release behavior were investigated, respectively. As a result, the nanoparticle designed showed nearly spherical particles with a mean particle size of 248 nm. The polydispersity index of particle size was 0.277 ± 0.058 and zeta potential was −8.74 mV. The EE (%) and DL (%) could reach up to 83.29% ± 1.23% and 10.11% ± 2.02%, respectively. In vitro release studies showed a burst release at the initial stage followed by a prolonged release of chloramphenicol from SLN up to 48 hours. The release kinetics of the optimized formulation best fitted the Peppas–Korsmeyer model. These results indicated that the chloramphenicol-loaded SLN could potentially be exploited as a delivery system with improved drug entrapment efficiency and controlled drug release. PMID:21556343
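The three-factor Box-Behnken design used above (solid lipid, surfactant, drug/lipid ratio) consists of twelve edge-midpoint runs plus replicated centre points in coded levels -1, 0, +1. The sketch below builds that generic design matrix and maps it to illustrative factor ranges; the ranges are placeholders, not the levels actually used in the study.

```python
import itertools
import numpy as np

def box_behnken_3factor(center_points=3):
    """Coded design matrix (levels -1, 0, +1) for a three-factor Box-Behnken design.

    Twelve edge-midpoint runs (each pair of factors at +/-1 with the third at 0)
    plus replicated centre points; a generic construction, not the specific
    run order used in the study above.
    """
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * center_points
    return np.array(runs)

def decode(coded, lows, highs):
    """Map coded levels to real factor settings (values here are placeholders
    for lipid amount, surfactant concentration and drug/lipid ratio)."""
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    return lows + (coded + 1) / 2 * (highs - lows)

design = box_behnken_3factor()
print(decode(design, lows=[100, 0.5, 0.05], highs=[300, 1.5, 0.20]))
```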
Sharma, Deepak; Maheshwari, Dipika; Rana, Ravish; Bhatia, Shanu; Singh, Manisha; Gabrani, Reema; Sharma, Sanjeev K.; Ali, Javed; Sharma, Rakesh Kumar; Dang, Shweta
2014-01-01
The aim of the present study was to optimize lorazepam loaded PLGA nanoparticles (Lzp-PLGA-NPs) by investigating the effect of process variables on the response using Box-Behnken design. Effect of four independent factors, that is, polymer, surfactant, drug, and aqueous/organic ratio, was studied on two dependent responses, that is, z-average and % drug entrapment. Lzp-PLGA-NPs were successfully developed by nanoprecipitation method using PLGA as polymer, poloxamer as surfactant and acetone as organic phase. NPs were characterized for particle size, zeta potential, % drug entrapment, drug release behavior, TEM, and cell viability. Lzp-PLGA-NPs were characterized for drug polymer interaction using FTIR. The developed NPs showed nearly spherical shape with z-average 167–318 d·nm, PDI below 0.441, and −18.4 mV zeta potential with maximum % drug entrapment of 90.1%. In vitro drug release behavior followed Korsmeyer-Peppas model and showed initial burst release of 21.7 ± 1.3% with prolonged drug release of 69.5 ± 0.8% from optimized NPs up to 24 h. In vitro drug release data was found in agreement with ex vivo permeation data through sheep nasal mucosa. In vitro cell viability study on Vero cell line confirmed the safety of optimized NPs. Optimized Lzp-PLGA-NPs were radiolabelled with Technitium-99m for scintigraphy imaging and biodistribution studies in Sprague-Dawley rats to establish nose-to-brain pathway. PMID:25126544
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.
Bishara, Anthony J; Hittner, James B
2015-10-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples ( n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
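The rankit transformation evaluated above replaces each observation by the standard-normal quantile of its mid-rank before the correlation is computed. Below is a minimal sketch, assuming SciPy is available; the simulated heavy-tailed data are only illustrative and do not reproduce the Monte Carlo conditions of the study.

```python
import numpy as np
from scipy import stats

def rankit(x):
    """Rankit (normalizing) transformation: replace each value by the normal
    quantile of its mid-rank, Phi^{-1}((rank - 0.5) / n)."""
    ranks = stats.rankdata(x)          # average ranks for ties
    return stats.norm.ppf((ranks - 0.5) / len(x))

# Hypothetical heavy-tailed data: compare the raw Pearson estimate with the
# estimate computed after the rankit transformation.
rng = np.random.default_rng(0)
x = rng.standard_t(df=2, size=200)
y = 0.3 * x + rng.standard_t(df=2, size=200)
print(stats.pearsonr(x, y)[0])                    # Pearson r on raw values
print(stats.pearsonr(rankit(x), rankit(y))[0])    # r after rankit transformation
```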
Rapid Optical Follow-up Observations of SGR Events with ROTSE-I
NASA Astrophysics Data System (ADS)
Akerlof, C.; Balsano, R.; Barthelmy, S.; Bloch, J.; Butterworth, P.; Casperson, D.; Cline, T.; Fletcher, S.; Gisler, G.; Hills, J.; Kehoe, R.; Lee, B.; Marshall, S.; McKay, T.; Pawl, A.; Priedhorsky, W.; Seldomridge, N.; Szymanski, J.; Wren, J.
2000-10-01
In order to observe nearly simultaneous emission from gamma-ray bursts (GRBs), the Robotic Optical Transient Search Experiment (ROTSE) receives triggers via the GRB Coordinates Network (GCN). Since beginning operations in 1998 March, ROTSE has also taken useful data for 10 SGR events: eight from SGR 1900+14 and two from SGR 1806-20. We have searched for new or variable sources in the error regions of these SGRs, and no optical counterparts were observed. Limits are in the range m_ROTSE ~ 12.5-15.5 during the period 20 s to 1 hr after the observed SGR events.
Wagner, Andreas; Rosen, William
2014-01-01
Innovations in biological evolution and in technology have many common features. Some of them involve similar processes, such as trial and error and horizontal information transfer. Others describe analogous outcomes such as multiple independent origins of similar innovations. Yet others display similar temporal patterns such as episodic bursts of change separated by periods of stasis. We review nine such commonalities, and propose that the mathematical concept of a space of innovations, discoveries or designs can help explain them. This concept can also help demolish a persistent conceptual wall between technological and biological innovation. PMID:24850903
Performance of biometric quality measures.
Grother, Patrick; Tabassi, Elham
2007-04-01
We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error versus reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We precede this with a definition of sample quality and a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
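The error-versus-reject characteristic advanced above answers a simple question: if the lowest-quality samples are progressively rejected, how quickly does the residual matching error fall? The sketch below is a schematic version of that computation, assuming one quality value and one matcher error indicator per comparison; the data and rejection fractions are hypothetical.

```python
import numpy as np

def error_versus_reject(quality, is_error, fractions=np.linspace(0.0, 0.5, 11)):
    """Error-versus-reject characteristic for a quality measure.

    quality  : scalar quality value per comparison (higher = better), produced
               by the black-box quality algorithm
    is_error : 1 if the matcher made an error (e.g. a false non-match) on that
               comparison, else 0

    For each rejection fraction, discard the lowest-quality comparisons and
    report the error rate on the remainder. A useful quality measure makes the
    curve fall quickly; this is a schematic version of the metric, not NIST code.
    """
    quality = np.asarray(quality, float)
    is_error = np.asarray(is_error, float)
    order = np.argsort(quality)             # worst quality first
    curve = []
    for f in fractions:
        keep = order[int(f * len(order)):]  # reject the worst fraction f
        curve.append((float(f), float(is_error[keep].mean())))
    return curve
```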
X-ray observations of the burst source MXB 1728 - 34
NASA Technical Reports Server (NTRS)
Basinska, E. M.; Lewin, W. H. G.; Sztajno, M.; Cominsky, L. R.; Marshall, F. J.
1984-01-01
Where sufficient information has been obtained, attention is given to the maximum burst flux, integrated burst flux, spectral hardness, rise time, etc., of 96 X-ray bursts observed from March 1976 to March 1979. The integrated burst flux and the burst frequency appear to be correlated; the longer the burst interval, the larger the integrated burst flux, as expected on the basis of simple thermonuclear flash models. The maximum burst flux and the integrated burst flux are strongly correlated; for low flux levels their dependence is approximately linear, while for increasing values of the integrated burst flux, the flux at burst maximum saturates and reaches a plateau.
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1995-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the special sensor microwave/imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5 deg x 2.5 deg latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.
NASA Astrophysics Data System (ADS)
Yang, Jing; Reichert, Peter; Abbaspour, Karim C.; Yang, Hong
2007-07-01
Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box-Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)-(iv) mentioned above.
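Item (ii) above combines a Box-Cox transformation of simulated and observed results with error variances that differ by season. A minimal sketch of that idea follows; the two-parameter Box-Cox form is standard, but the parameter values and the wet/dry split are placeholders rather than the values calibrated for the Chaohe Basin.

```python
import numpy as np

def box_cox(y, lam1, lam2=0.0):
    """Two-parameter Box-Cox transformation g(y) = ((y + lam2)**lam1 - 1) / lam1
    (log(y + lam2) for lam1 = 0), used to bring heteroscedastic residuals
    closer to constant variance. Parameter values here are placeholders."""
    y = np.asarray(y, float) + lam2
    if lam1 == 0.0:
        return np.log(y)
    return (y**lam1 - 1.0) / lam1

def standardized_residuals(obs, sim, season_is_wet, lam1, lam2, sd_wet, sd_dry):
    """Residuals in the transformed space, standardized with a seasonally
    dependent error standard deviation (one value per season)."""
    res = box_cox(obs, lam1, lam2) - box_cox(sim, lam1, lam2)
    sd = np.where(season_is_wet, sd_wet, sd_dry)
    return res / sd
```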
Evaluation Of Statistical Models For Forecast Errors From The HBV-Model
NASA Astrophysics Data System (ADS)
Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.
2009-04-01
Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
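The second and third models above rest on the Normal Quantile Transformation followed by a first-order autoregressive description of the transformed errors. The sketch below shows those two ingredients in their simplest form with made-up inflow numbers; it illustrates the transformations only and is not the operational Langvatn model.

```python
import numpy as np
from scipy import stats

def nqt(x):
    """Normal Quantile Transform: map each value to the standard-normal
    quantile of its empirical (Weibull) plotting position."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1.0))

def ar1_coefficient(errors):
    """Lag-one autocorrelation of the transformed forecast errors, the key
    parameter of a first-order autoregressive error model."""
    e = np.asarray(errors, float)
    e = e - e.mean()
    return float(np.dot(e[1:], e[:-1]) / np.dot(e, e))

# Hypothetical use: transform observed and forecasted inflows, then estimate
# the autocorrelation of the resulting errors.
obs = np.array([12.0, 15.5, 30.2, 22.1, 18.0, 14.3])
fcst = np.array([10.5, 16.0, 25.0, 24.0, 17.2, 13.1])
errors = nqt(obs) - nqt(fcst)
print(ar1_coefficient(errors))
```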
Lanning, Sharon K; Best, Al M; Temple, Henry J; Richards, Philip S; Carey, Allison; McCauley, Laurie K
2006-02-01
Accurate and consistent radiographic interpretation among clinical instructors is needed for assessment of teaching, student performance, and patient care. The purpose of this investigation was to determine if the method of radiographic viewing affects accuracy and consistency of instructors' determinations of bone loss. Forty-one clinicians who provide instruction in a dental school clinical teaching program (including periodontists, general dentists, periodontal graduate students, and dental hygienists) quantified bone loss for up to twenty-five teeth into four descriptive categories using a view box for plain film viewing or a projection system for digitized image viewing. Ratings were compared to the correct category as determined by direct measurement using the Schei ruler. Agreement with the correct choice for the view box and projection system was 70.2 percent and 64.5 percent, respectively. The mean difference was better for a projection system due to small rater error by graduate students. Projection system ratings were slightly less consistent than view box ratings. Dental hygiene faculty ratings were the most consistent but least accurate. Although the projection system resulted in slightly reduced accuracy and consistency among instructors, training sessions utilizing a single method for projecting digitized radiographic images have their advantages and may positively influence dental education and patient care by enhancing accuracy and consistency of radiographic interpretation among instructors.
Mouratidou, T; Miguel, M L; Androutsos, O; Manios, Y; De Bourdeaudhuij, I; Cardon, G; Kulaga, Z; Socha, P; Galcheva, S; Iotova, V; Payr, A; Koletzko, B; Moreno, L A
2014-08-01
The ToyBox-intervention is a kindergarten-based, family-involved intervention targeting multiple lifestyle behaviours in preschool children, their teachers and their families. This intervention was conducted in six European countries, namely Belgium, Bulgaria, Germany, Greece, Poland and Spain. The aim of this paper is to provide a descriptive overview of the harmonization and standardization procedures of the baseline and follow-up evaluation of the study (and substudies). Steps related to the study's operational, standardization and harmonization procedures as well as the impact and outcome evaluation assessment tools used are presented. Experiences from the project highlight the importance of safeguarding the measurement process to minimize data heterogeneity derived from potential measurement error and country-by-country differences. In addition, it was made clear that continuing quality control and support is an important component of such studies. For this reason, well-supported communication channels, such as regular email updates and teleconferences, and regular internal and external meetings to ensure smooth and accurate implementation were in place during the study. The ToyBox-intervention and its harmonized and standardized procedures can serve as a successful case study for future studies evaluating the efficacy of similar interventions. © 2014 World Obesity.
1986-10-01
BUZO, and FEDERICO KUHLMANN, Universidad Nacional Autónoma de México, Facultad de Ingeniería, División de Estudios de Posgrado, P.O. Box 70-256, 04510... unsuccessful in this area for a long time. It was felt, e.g., in the voiceband modem industry, that the coding gains achievable by error-correction coding... without bandwidth expansion or data rate reduction, when compared to uncoded modulation. The concept was quickly adopted by industry, and is now becoming
2012-03-22
Power Amplifier (7). A power amplifier was required to drive the actuators. For this research a Trek, Inc. Model PZD 700 Dual Channel Amplifier was used... while the flight test amplifier was being built. The Trek amplifier was capable of amplifying... [Figure 3.19: dSpace MicroAutoBox II Digital...] ...averaging of 25% was used to reduce the errors caused by noise but still maintain accuracy. For the laboratory Trek amplifier, a 100 millivolt input
Web Syndication in a Multilevel Security Environment
2012-03-01
law/blog/03/08/131502.html</link> ... <securitylabel><label>SECRET</label></securitylabel> </item> <item> <title>Excellent Banana Bread Recipe</title> ... only feed available). 7. Click 'Unsubscribe from Selected Feeds' to delete the feed. 8. The page should refresh, showing your list of feeds is empty... box. 7. Click 'Subscribe to Feed' to add the feed. 8. The page should refresh, showing an error message from the MLS NEWS READER. 9. Click 'View
Achieving High Reliability in Histology: An Improvement Series to Reduce Errors.
Heher, Yael K; Chen, Yigu; Pyatibrat, Sergey; Yoon, Edward; Goldsmith, Jeffrey D; Sands, Kenneth E
2016-11-01
Despite sweeping medical advances in other fields, histology processes have by and large remained constant over the past 175 years. Patient label identification errors are a known liability in the laboratory and can be devastating, resulting in incorrect diagnoses and inappropriate treatment. The objective of this study was to identify vulnerable steps in the histology workflow and reduce the frequency of labeling errors (LEs). In this 36-month study period, a numerical step key (SK) was developed to capture LEs. The two most prevalent root causes were targeted for Lean workflow redesign: manual slide printing and microtome cutting. The numbers and rates of LEs before and after interventions were compared to evaluate the effectiveness of interventions. Following the adoption of a barcode-enabled laboratory information system, the error rate decreased from a baseline of 1.03% (794 errors in 76,958 cases) to 0.28% (107 errors in 37,880 cases). After the implementation of an innovative ice tool box, allowing single-piece workflow for histology microtome cutting, the rate came down to 0.22% (119 errors in 54,342 cases). The study pointed out the importance of tracking and understanding LEs by using a simple numerical SK and quantified the effectiveness of two customized Lean interventions. Overall, a 78.64% reduction in LEs and a 35.28% reduction in time spent on rework have been observed since the study began. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose Chaudhuri Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even-weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.
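The role of the parity sums described above is to flag any constituent word containing an odd number of bit errors before full decoding is attempted. The toy sketch below illustrates that principle with a simple even-parity bit per word; it is a schematic illustration only and does not reconstruct the (48,24) or (72,36) box codes themselves.

```python
import numpy as np

def add_parity(words):
    """Append an even-parity bit to each binary word (rows of a 0/1 array)."""
    words = np.asarray(words) % 2
    parity = words.sum(axis=1) % 2
    return np.hstack([words, parity[:, None]])

def odd_error_flags(received):
    """Flag rows whose parity sum is odd: any odd number of bit errors in a
    row is detected, while even numbers pass unnoticed - the same principle
    the box-code parity sums exploit before full decoding."""
    return np.asarray(received).sum(axis=1) % 2 == 1

code = add_parity(np.array([[1, 0, 1, 1, 0, 0, 1],
                            [0, 1, 1, 0, 1, 0, 0]]))
corrupted = code.copy()
corrupted[0, 2] ^= 1                 # single bit error in the first word
print(odd_error_flags(corrupted))    # [ True False]
```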
BURST AND OUTBURST CHARACTERISTICS OF MAGNETAR 4U 0142+61
DOE Office of Scientific and Technical Information (OSTI.GOV)
Göğüş, Ersin; Chakraborty, Manoneeta; Kaneko, Yuki
2017-01-20
We have compiled the most comprehensive burst sample from magnetar 4U 0142+61, comprising 27 bursts from its three burst-active episodes in 2011, 2012 and the latest one in 2015, observed with Swift/Burst Alert Telescope and Fermi/Gamma-ray Burst Monitor. Bursts from 4U 0142+61 morphologically resemble typical short bursts from other magnetars. However, 4U 0142+61 bursts are less energetic compared to the bulk of magnetar bursts. We uncovered an extended tail emission following a burst on 2015 February 28, with a thermal nature, cooling over a timescale of several minutes. During this tail emission, we also uncovered pulse peak phase aligned X-ray bursts, which could originate from the same underlying mechanism as that of the extended burst tail, or an associated and spatially coincident but different mechanism.
Liquid crystal polymer substrate MMIC receiver modules for the ECE Imaging system on the DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Y.; Ye, Y.; Yu, J-H
A new generation of millimeter-wave heterodyne imaging receiver arrays has been developed and demonstrated on the DIII-D ECEI system. Improved circuit integration, allowing for absolute calibration, improved noise performance, and shielding from out-of-band emission, is made possible by using advanced liquid crystal polymer (LCP) substrates and MMIC (Monolithic Microwave Integrated Circuit) receiver chips. This array exhibits ~15 dB additional gain and >30x reduction in noise temperature compared to the previous generation and provides ECEI capability for absolute 2-D electron temperature profile measurements. Each LCP horn-waveguide module houses a 3x3 mm GaAs MMIC receiver chip, which consists of a low noise amplifier (LNA), balanced mixer, local oscillator multiplier chain driven by ~12 GHz input via an RF cable to the enclosure box, and IF amplifier. A proof-of-principle instrument with 5 poloidal channels was installed on DIII-D in 2017. The full proof-of-principle system installation (20 poloidal x 8 radial channels) was commissioned early in 2018. The LCP ECEI system is used for pedestal region measurements, especially focusing on temperature evolution during ELM bursting. The DIII-D ECE Imaging signal has been significantly improved with extremely effective shielding of out-of-band microwave noise, which plagued previous ECE Imaging studies on DIII-D. In H-mode ELM bursting, the radial propagation of electron heat flow has been detected on DIII-D. The LCP ECE Imaging is expected to be a valuable diagnostic tool for ELM physics investigations.
Jiang, Ying; Zhang, Xuemei; Mu, Hongjie; Hua, Hongchen; Duan, Dongyu; Yan, Xiuju; Wang, Yiyun; Meng, Qingqing; Lu, Xiaoyan; Wang, Aiping; Liu, Wanhui; Li, Youxin; Sun, Kaoxiang
2018-11-01
A microsphere-gel in situ forming implant (MS-Gel ISFI) dual-controlled drug delivery system was applied to a high water-soluble small-molecule compound Rasagiline mesylate (RM) for effective treatment of Parkinson's disease. This injectable complex depot system combined an in situ phase transition gel with high drug-loading and encapsulation efficiency RM-MS prepared by a modified emulsion-phase separation method and optimized by Box-Behnken design. It was evaluated for in vitro drug release, in vivo pharmacokinetics, and in vivo pharmacodynamics. We found that the RM-MS-Gel ISFI system showed no initial burst release and had a long period of in vitro drug release (60 days). An in vivo pharmacokinetic study indicated a significant reduction (p < .01) in the initial high plasma drug concentration of the RM-MS-Gel ISFI system compared to that of the single RM-MS and RM-in situ gel systems after intramuscular injection to rats. A pharmacodynamic study demonstrated a significant reduction (p < .05) in 6-hydroxydopamine-induced contralateral rotation behavior and an effective improvement (p < .05) in dopamine levels in the striatum of the lesioned side after 28 days in animals treated with the RM-MS-Gel ISFI compared with that of animals treated with saline. MS-embedded in situ phase transition gel is superior for use as a biodegradable and injectable sustained drug delivery system with a low initial burst and long period of drug release for highly hydrophilic small molecule drugs.
Vaugoyeau, Marie; Adriaensen, Frank; Artemyev, Alexandr; Bańbura, Jerzy; Barba, Emilio; Biard, Clotilde; Blondel, Jacques; Bouslama, Zihad; Bouvier, Jean-Charles; Camprodon, Jordi; Cecere, Francesco; Charmantier, Anne; Charter, Motti; Cichoń, Mariusz; Cusimano, Camillo; Czeszczewik, Dorota; Demeyrier, Virginie; Doligez, Blandine; Doutrelant, Claire; Dubiec, Anna; Eens, Marcel; Eeva, Tapio; Faivre, Bruno; Ferns, Peter N; Forsman, Jukka T; García-Del-Rey, Eduardo; Goldshtein, Aya; Goodenough, Anne E; Gosler, Andrew G; Grégoire, Arnaud; Gustafsson, Lars; Harnist, Iga; Hartley, Ian R; Heeb, Philipp; Hinsley, Shelley A; Isenmann, Paul; Jacob, Staffan; Juškaitis, Rimvydas; Korpimäki, Erkki; Krams, Indrikis; Laaksonen, Toni; Lambrechts, Marcel M; Leclercq, Bernard; Lehikoinen, Esa; Loukola, Olli; Lundberg, Arne; Mainwaring, Mark C; Mänd, Raivo; Massa, Bruno; Mazgajski, Tomasz D; Merino, Santiago; Mitrus, Cezary; Mönkkönen, Mikko; Morin, Xavier; Nager, Ruedi G; Nilsson, Jan-Åke; Nilsson, Sven G; Norte, Ana C; Orell, Markku; Perret, Philippe; Perrins, Christopher M; Pimentel, Carla S; Pinxten, Rianne; Richner, Heinz; Robles, Hugo; Rytkönen, Seppo; Senar, Juan Carlos; Seppänen, Janne T; Pascoal da Silva, Luis; Slagsvold, Tore; Solonen, Tapio; Sorace, Alberto; Stenning, Martyn J; Tryjanowski, Piotr; von Numers, Mikael; Walankiewicz, Wieslaw; Møller, Anders Pape
2016-08-01
The increase in size of human populations in urban and agricultural areas has resulted in considerable habitat conversion globally. Such anthropogenic areas have specific environmental characteristics, which influence the physiology, life history, and population dynamics of plants and animals. For example, the date of bud burst is advanced in urban compared to nearby natural areas. In some birds, breeding success is determined by synchrony between timing of breeding and peak food abundance. Pertinently, caterpillars are an important food source for the nestlings of many bird species, and their abundance is influenced by environmental factors such as temperature and date of bud burst. Higher temperatures and advanced date of bud burst in urban areas could advance peak caterpillar abundance and thus affect breeding phenology of birds. In order to test whether laying date advance and clutch sizes decrease with the intensity of urbanization, we analyzed the timing of breeding and clutch size in relation to intensity of urbanization as a measure of human impact in 199 nest box plots across Europe, North Africa, and the Middle East (i.e., the Western Palearctic) for four species of hole-nesters: blue tits (Cyanistes caeruleus), great tits (Parus major), collared flycatchers (Ficedula albicollis), and pied flycatchers (Ficedula hypoleuca). Meanwhile, we estimated the intensity of urbanization as the density of buildings surrounding study plots measured on orthophotographs. For the four study species, the intensity of urbanization was not correlated with laying date. Clutch size in blue and great tits does not seem affected by the intensity of urbanization, while in collared and pied flycatchers it decreased with increasing intensity of urbanization. This is the first large-scale study showing a species-specific major correlation between intensity of urbanization and the ecology of breeding. The underlying mechanisms for the relationships between life history and urbanization remain to be determined. We propose that effects of food abundance or quality, temperature, noise, pollution, or disturbance by humans may on their own or in combination affect laying date and/or clutch size.
A structural and kinetic study on myofibrils prevented from shortening by chemical cross-linking.
Herrmann, C; Sleep, J; Chaussepied, P; Travers, F; Barman, T
1993-07-20
In previous work, we studied the early steps of the Mg(2+)-ATPase activity of Ca(2+)-activated myofibrils [Houadjeto, M., Travers, F., & Barman, T. (1992) Biochemistry 31, 1564-1569]. The myofibrils were free to contract, and the results obtained refer to the ATPase cycle of myofibrils contracting with no external load. Here we studied the ATPase of myofibrils contracting isometrically. To prevent shortening, we cross-linked them with 1-ethyl-3-[3-(dimethylamino)propyl]carbodiimide (EDC). SDS-PAGE and Western blot analyses showed that the myosin rods were extensively cross-linked and that 8% of the myosin heads were cross-linked to the thin filament. The transient kinetics of the cross-linked myofibrils were studied in 0.1 M potassium acetate, pH 7.4 and 4 degrees C, by the rapid-flow quench method. The ATP binding steps were studied by the cold ATP chase and the cleavage and release of products steps by the Pi burst method. In Pi burst experiments, the sizes of the bursts were equal within experimental error to the ATPase site concentrations (as determined by the cold ATP chase methods) for both cross-linked (isometric) and un-cross-linked (isotonic) myofibrils. This shows that in both cases the rate-limiting step is after the cleavage of ATP. When cross-linked, the kcat of Ca(2+)-activated myofibrils was reduced from 1.7 to 0.8 s-1. This is consistent with the observation that fibers shortening at moderate velocity have a higher ATPase activity than isometric fibers.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Helvik, B. E.; Stol, N.
1995-04-01
A reference measurement scenario is defined in which an ATM switch (OCTOPUS) is offered traffic from three source types representing the traffic resulting from typical services to be carried by an ATM network. These are high quality video (HQTV), high speed data (HSD) and constant bitrate transfer (CBR). In addition to being typical, these have widely different characteristics. Detailed definitions for these, and other actual source types, are made and entered into the Synthetic Traffic Generator (STG) database. Recommended traffic mixes of these sources are also given. Based on the above, laboratory measurements are carried out to study how the various kinds of traffic influence each other, how fairly the loss is distributed over services and connections, and what loss characteristics are experienced. (Due to a software error detected in the measurement equipment after the work was concluded, the measurements were carried out with an HSD source whose load was less 'aggressive' than intended.) The main findings are: Cell loss is very unfairly distributed among the various connections; a loss burst, which occurs less frequently than the duration of a typical connection, affects mainly one or a few connections. Cell loss is unfairly distributed among the services: loss ratios in the range HSD : HQTV : CBR = 5 : 1 : 0.85 are observed, and unfairness increases with decreasing load burstiness. The loss characteristics vary during a loss burst, from one burst to the next, and between services. Hence, it does not seem feasible to use 'typical loss statistics' to study the impairments on various services. In addition, some supplementary work is reported.
The Dark Side of Nature: the Crime was Almost Perfect
NASA Astrophysics Data System (ADS)
2006-12-01
Nature has again thrown astronomers for a loop. Just when they thought they understood how gamma-ray bursts formed, they have uncovered what appears to be evidence for a new kind of cosmic explosion. These seem to arise when a newly born black hole swallows most of the matter from its doomed parent star. Gamma-ray bursts (GRBs), the most powerful explosions in the Universe, signal the formation of a new black hole and come in two flavours, long and short ones. In recent years, international efforts have shown that long gamma-ray bursts are linked with the explosive deaths of massive stars (hypernovae; see e.g. ESO PR 16/03). [ESO PR Photo 49a/06: GRB 060614 (FORS/VLT)] Last year, observations by different teams - including the GRACE and MISTICI collaborations that use ESO's telescopes - of the afterglows of two short gamma-ray bursts provided the first conclusive evidence that this class of objects most likely originates from the collision of compact objects: neutron stars or black holes (see ESO PR 26/05 and ESO PR 32/05). The newly found gamma-ray bursts, however, do not fit the picture. They instead seem to share the properties of both the long and short classes. "Some unknown process must be at play, about which we have presently no clue," said Massimo Della Valle of the Osservatorio Astrofisico di Arcetri in Firenze, Italy, lead author of one of the reports published in this week's issue of the journal Nature. "Either it is a new kind of merger which is able to produce long bursts, or a new kind of stellar explosion in which matter can't escape the black hole." One of the mysterious events went bang on 14 June 2006, hence its name, GRB 060614. The gamma-ray burst lasted 102 seconds and belongs clearly to the category of long GRBs. As it happened in a relatively close-by galaxy, located only 1.6 billion light-years away in the constellation Indus, astronomers worldwide eagerly pointed their telescopes toward it to capture the supernova, watching and waiting as if for a jack-in-the-box to spring open. The MISTICI collaboration used ESO's Very Large Telescope to follow the burst for 50 days. "Despite our deep monitoring, no rebrightening due to a supernova was seen," said Gianpiero Tagliaferri from the Observatory of Brera, Italy, a member of the team. "If a supernova is present, it should at least be 100 times fainter than any other supernova usually associated with a long burst." The burst exploded in a dwarf galaxy that shows moderate signs of star formation. Thus young, massive stars are present and, at the end of its life, one of them could have uttered this long, agonising cry before vanishing into a black hole. "Why did it do so in a dark way, with no sign of a supernova?" asked Guido Chincarini, from the University of Milano-Bicocca, Italy, also a member of the team. "A possibility is that a massive black hole formed that did not allow any matter to escape. All the material that is usually ejected in a supernova explosion would then fall back and be swallowed." [ESO PR Photos 49b,c/06: GRB 060505 (FORS/VLT)] The same conclusion was previously reached by another team, who monitored both GRB 060614 and another burst, GRB 060505 (5 May 2006), for 5 and 12 weeks, respectively. For this, they used the ESO VLT and the 1.54-m Danish telescope at La Silla. GRB 060505 was a faint burst with a duration of 4 seconds, and as such also belongs to the category of long bursts [1].
For GRB 060505, the astronomers could only see the burst in visible light for one night and then it faded away, while for GRB 060614, they could only follow it for four nights after the burst. Thus, if supernovae were associated with these long-bursts, as one would have expected, they must have been about a hundred times fainter than a normal supernova. "Although both bursts are long, the remarkable conclusion from our monitoring is that there were no supernovae associated with them," said Johan Fynbo from the DARK Cosmology Centre at the Niels Bohr Institute of the Copenhagen University in Denmark, who led the study. "It is a bit like not hearing the thunder from a nearby storm when one could see a very long lasting flash." For the May burst, the team has obtained deep images in very good observing conditions allowing the exact localisation of the burst in its host galaxy. The host galaxy turns out to be a small spiral galaxy, and the burst occurred in a compact star-forming region in one of the spiral arms of the galaxy. This is strong evidence that the star that made the GRB was massive [2]. "For the 5 May event, we have evidence that it was due to a massive star that died without making a supernova," said Fynbo. "We now have to find out what is the fraction of massive stars that die without us noticing, that is, without producing either a gamma-ray burst or a supernova." "Whatever the solution to the problem is, it is clear that these new results challenge the commonly accepted scenario, in which long bursts are associated with a bright supernova," said Daniele Malesani, from the International School for Advanced Studies in Trieste, and now also at the DARK Cosmology Centre. "Our hope is to be able to find more of these unconventional bursts. The chase is on!" High resolution images and their captions are available on the associated page. More information The two gamma-ray bursts were discovered with the NASA/ASI/PPARC Swift satellite, which is dedicated to the discovery of these powerful explosions. The work presented here is published in the 21 December 2006 issue of the journal Nature: "No supernovae associated with two long-duration gamma-ray bursts", by Johan P. U. Fynbo et al., and "An enigmatic long-lasting gamma-ray burst not accompanied by a bright supernova", by Massimo Della Valle et al. Two other reports about the same events are published in the same issue of Nature. The Italian-led team - the MISTICI collaboration - is composed of Massimo Della Valle (INAF, Osservatorio Astrofisico di Arcetri, Italy), Guido Chincarini (INAF, Osservatorio Astronomico di Brera & Università degli Studi di Milano-Bicocca, Italy), Nino Panagia (Space Telescope Science Institute, USA), Gianpiero Tagliaferri, Dino Fugazza, Sergio Campana, Stefano Covino, and Paolo D'Avanzo (INAF, Osservatorio Astronomico di Brera, Italy), Daniele Malesani (SISSA/ISAS, Italy and Dark Cosmology Centre, Copenhagen), Vincenzo Testa, L. Angelo Antonelli, Silvia Piranomonte, and Luigi Stella (INAF, Osservatorio Astronomico di Roma, Italy), Vanessa Mangano (INAF/IASF Palermo, Italy), Kevin Hurley (University of California, Berkeley, USA), I. Felix Mirabel (ESO), and Leonardo J. Pellizza (Instituto de Astronomia y Fisica del Espacio). The Danish-led team is composed of Johan P. U. Fynbo, Darach Watson, Christina C. Thöne, Tamara M. Davis, Jens Hjorth, José Mará Castro Cerón, Brian L. Jensen, Maximilian D. 
Stritzinger, and Dong Xu (Dark Cosmology Centre, University of Copenhagen, Denmark), Jesper Sollerman (Dark Cosmology Centre and Department of Astronomy, Stockholm University, Sweden), Uffe G. Jørgensen, Tobias C. Hinse, and Kristian G. Woller (Niels Bohr Institute, University of Copenhagen), Joshua S. Bloom, Daniel Kocevski, Daniel Perley (Department of Astronomy, University of California at Berkeley, USA), Páll Jakobsson (Centre for Astrophysics Research, University of Hertfordshire, UK), John F. Graham and Andrew S. Fruchter (Space Telescope Science Institute, Baltimore, USA), David Bersier (Astrophysics Research Institute, Liverpool John Moores University, UK), Lisa Kewley (University of Hawaii, Institute of Astronomy, USA), Arnaud Cassan and Marta Zub (Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Germany), Suzanne Foley (School of Physics, University College Dublin, Ireland), Javier Gorosabel (Instituto de Astrofisica de Andalucia, Granada, Spain), Keith D. Horne (SUPA Physics/Astronomy, University of St Andrews, Scotland, UK), Sylvio Klose (Thüringer Landessternwarte Tautenburg, Germany), Jean-Baptiste Marquette (Institut d'Astrophysique de Paris, France), Enrico Ramirez-Ruiz (Institute for Advanced Study, Princeton and Department of Astronomy and Astrophysics, University of California, Santa Cruz, USA), Paul M. Vreeswijk (ESO and Departamento de Astronomia, Universidad de Chile, Santiago, Chile), and Ralph A. M. Wijers (Astronomical Institute 'Anton Pannekoek', University of Amsterdam, The Netherlands).
Heterogeneity in Short Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Norris, Jay P.; Gehrels Neil; Scargle, Jeffrey D.
2011-01-01
We analyze the Swift/BAT sample of short gamma-ray bursts, using an objective Bayesian Block procedure to extract temporal descriptors of the bursts' initial pulse complexes (IPCs). The sample comprises 12 and 41 bursts with and without extended emission (EE) components, respectively. IPCs of non-EE bursts are dominated by single pulse structures, while EE bursts tend to have two or more pulse structures. The medians of characteristic timescales - durations, pulse structure widths, and peak intervals - for EE bursts are factors of approx 2-3 longer than for non-EE bursts. A trend previously reported by Hakkila and colleagues unifying long and short bursts - the anti-correlation of pulse intensity and width - continues in the two short burst groups, with non-EE bursts extending to more intense, narrower pulses. In addition, we find that preceding and succeeding pulse intensities are anti-correlated with pulse interval. We also examine the short burst X-ray afterglows as observed by the Swift/XRT. The median flux of the initial XRT detections for EE bursts (approx 6 x 10(exp -10) erg/sq cm/s) is approx > 20x brighter than for non-EE bursts, and the median X-ray afterglow duration for EE bursts (approx 60,000 s) is approx 30x longer than for non-EE bursts. The tendency for EE bursts toward longer prompt-emission timescales and higher initial X-ray afterglow fluxes implies larger energy injections powering the afterglows. The longer-lasting X-ray afterglows of EE bursts may suggest that a significant fraction explode into denser environments than non-EE bursts, or that the sometimes-dominant EE component efficiently powers the afterglow. Combined, these results favor different progenitors for EE and non-EE short bursts.
Securebox: a multibiopsy sample container for specimen identification and transport.
Palmieri, Beniamino; Sblendorio, Valeriana; Saleh, Farid; Al-Sebeih, Khalid
2008-01-01
To describe an original multicompartment disposable container for tissue surgical specimens or serial biopsy samples (Securebox). The increasing number of pathology samples from a single patient required for an accurate diagnosis led us to design and manufacture a unique container with 4 boxes; in each box 1 or more biopsy samples can be lodged. A magnification lens on a convex segment of the plastic framework allows inspection of macroscopic details of the recovered specimens. We investigated 400 randomly selected cases (compared with 400 controls) who underwent multiple biopsies from January 2006 to January 2007 to evaluate compliance with the new procedure and detect errors resulting from missing some of the multiple specimens or from technical mistakes during the procedure or delivery that might have compromised the final diagnosis. Using our Securebox, the percentage of patients whose diagnosis failed or could not be reached was 0.5% compared to 4% with the traditional method (p = 0.0012). Moreover, the percentage of medical and nursing staff who were satisfied with the Securebox compared to the traditional method was 85% vs. 15%, respectively (p < 0.0001). The average number of days spent to reach a proper diagnosis based on the usage of the Securebox was 3.38 +/- 1.16 SD compared to 6.76 +/- 0.52 SD with the traditional method (p < 0.0001). The compact Securebox makes it safer and easier to introduce the specimens and to ship them to the pathology laboratories, reducing the risk of error.
RXTE and BeppoSAX Observations of the Transient X-ray Pulsar XTE J1859+083
NASA Technical Reports Server (NTRS)
Corbet, R. H. D.; intZand, J. J. M.; Levine, A. M.; Marshall, F. E.
2008-01-01
We present observations of the 9.8 s X-ray pulsar XTE J1859+083 made with the All-Sky Monitor (ASM) and Proportional Counter Array (PCA) on board the Rossi X-ray Timing Explorer (RXTE), and the Wide Field Cameras (WFC) on board BeppoSAX. The ASM data cover a 12 year time interval and show that an extended outburst occurred between approximately MJD 50,250 and 50,460 (1996 June 16 to 1997 January 12). The ASM data excluding this outburst interval suggest a possible 61 day modulation. Eighteen sets of PCA observations were obtained over an approx. one month interval in 1999. The flux variability measured with the PCA appears consistent with the possible period found with the ASM. The PCA measurements of the pulse period showed it to decrease non-monotonically and then to increase significantly. Doppler shifts due to orbital motion rather than accretion torques appear to be better able to explain the pulse period changes. Observations with the WFC during the extended outburst give an error box which is consistent with a previously determined PCA error box but is significantly smaller. The transient nature of XTE J1859+083 and the length of its pulse period are consistent with it being a Be/neutron star binary. The possible 61 day orbital period would be of the expected length for a Be star system with a 9.8 s pulse period.
A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes
Vogl, Gregory W.; Weiss, Brian A.; Donmez, M. Alkan
2017-01-01
A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a ‘sensor box’ to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality. PMID:28691039
Wang, Junmei; Hou, Tingjun
2011-01-01
In this work, we have evaluated how well the General AMBER force field (GAFF) performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, 5 organic compounds in aqueous solutions, 4 proteins in aqueous solutions, and 9 organic compounds in non-aqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned error (AUE) and the root-mean-square error (RMSE) are 0.137 and 0.171 × 10^-5 cm^2 s^-1, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for 8 organic solvents with experimental data (R^2 = 0.784), 4 proteins in aqueous solutions (R^2 = 0.996) and 9 organic compounds in non-aqueous solutions (R^2 = 0.834). The temperature-dependent behaviors of three solvents, namely, TIP3P water, dimethyl sulfoxide (DMSO) and cyclohexane, have been studied. The major MD settings, such as the sizes of simulation boxes and with/without wrapping the coordinates of MD snapshots into the primary simulation boxes, have been explored. We have concluded that our sampling strategy of averaging the mean square displacement (MSD) collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. PMID:21953689
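The diffusion coefficients above follow from the Einstein relation, D = lim MSD(t)/(6t) for three-dimensional diffusion, with the MSD averaged over the multiple short simulations the proposed sampling strategy advocates. The sketch below shows only that final fitting step with made-up MSD data in conventional MD units; it is not the authors' analysis script.

```python
import numpy as np

def diffusion_coefficient(time_ps, msd_A2):
    """Diffusion coefficient from the Einstein relation: D = slope(MSD vs t) / 6
    for three-dimensional diffusion.

    time_ps : time lags in picoseconds
    msd_A2  : mean square displacement in Angstrom^2 (e.g. averaged over
              several short MD runs)

    Returns D in units of 10^-5 cm^2 s^-1 (1 A^2/ps = 10 x 10^-5 cm^2/s).
    """
    slope = np.polyfit(time_ps, msd_A2, 1)[0]    # A^2 per ps
    d_A2_per_ps = slope / 6.0
    return d_A2_per_ps * 10.0                    # convert to 1e-5 cm^2/s

# Hypothetical linear MSD curve with slope 6*D for D = 0.55 A^2/ps
t = np.linspace(1.0, 20.0, 20)           # ps
msd = 6 * 0.55 * t                       # A^2
print(diffusion_coefficient(t, msd))     # ~5.5 (x 1e-5 cm^2 s^-1)
```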
Pulse homodyne field disturbance sensor
McEwan, Thomas E.
1997-01-01
A field disturbance sensor operates with relatively low power, provides an adjustable operating range, is not hypersensitive at close range, allows co-location of multiple sensors, and is inexpensive to manufacture. The sensor includes a transmitter that transmits a sequence of transmitted bursts of electromagnetic energy. The transmitter frequency is modulated at an intermediate frequency. The sequence of bursts has a burst repetition rate, and each burst has a burst width and comprises a number of cycles at a transmitter frequency. The sensor includes a receiver which receives electromagnetic energy at the transmitter frequency, and includes a mixer which mixes a transmitted burst with reflections of the same transmitted burst to produce an intermediate frequency signal. Circuitry, responsive to the intermediate frequency signal indicates disturbances in the sensor field. Because the mixer mixes the transmitted burst with reflections of the transmitted burst, the burst width defines the sensor range. The burst repetition rate is randomly or pseudo-randomly modulated so that bursts in the sequence of bursts have a phase which varies. A second range-defining mode transmits two radio frequency bursts, where the time spacing between the bursts defines the maximum range divided by two.
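The statement above that "the burst width defines the sensor range" reflects the two-way travel time of the burst: only reflections returning while the burst is still being transmitted mix down to the intermediate frequency. The sketch below is a back-of-the-envelope version of that relation with an illustrative burst width; it is not taken from the patent.

```python
# Back-of-the-envelope relation: reflections must return within the burst
# duration (two-way travel), so the burst width bounds the sensing range.
C = 299_792_458.0                 # speed of light, m/s

def range_from_burst_width(burst_width_s):
    """Approximate one-way sensor range set by the transmitted burst width."""
    return C * burst_width_s / 2.0

print(range_from_burst_width(10e-9))   # a 10 ns burst -> roughly 1.5 m range
```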
Range-gated field disturbance sensor with range-sensitivity compensation
McEwan, T.E.
1996-05-28
A field disturbance sensor operates with relatively low power, provides an adjustable operating range, is not hypersensitive at close range, allows co-location of multiple sensors, and is inexpensive to manufacture. The sensor includes a transmitter that transmits a sequence of transmitted bursts of electromagnetic energy. The transmitter frequency is modulated at an intermediate frequency. The sequence of bursts has a burst repetition rate, and each burst has a burst width and comprises a number of cycles at a transmitter frequency. The sensor includes a receiver which receives electromagnetic energy at the transmitter frequency, and includes a mixer which mixes a transmitted burst with reflections of the same transmitted burst to produce an intermediate frequency signal. Circuitry, responsive to the intermediate frequency signal indicates disturbances in the sensor field. Because the mixer mixes the transmitted burst with reflections of the transmitted burst, the burst width defines the sensor range. The burst repetition rate is randomly or pseudorandomly modulated so that bursts in the sequence of bursts have a phase which varies. 8 figs.
Range-gated field disturbance sensor with range-sensitivity compensation
McEwan, Thomas E.
1996-01-01
A field disturbance sensor operates with relatively low power, provides an adjustable operating range, is not hypersensitive at close range, allows co-location of multiple sensors, and is inexpensive to manufacture. The sensor includes a transmitter that transmits a sequence of transmitted bursts of electromagnetic energy. The transmitter frequency is modulated at an intermediate frequency. The sequence of bursts has a burst repetition rate, and each burst has a burst width and comprises a number of cycles at a transmitter frequency. The sensor includes a receiver which receives electromagnetic energy at the transmitter frequency, and includes a mixer which mixes a transmitted burst with reflections of the same transmitted burst to produce an intermediate frequency signal. Circuitry, responsive to the intermediate frequency signal indicates disturbances in the sensor field. Because the mixer mixes the transmitted burst with reflections of the transmitted burst, the burst width defines the sensor range. The burst repetition rate is randomly or pseudorandomly modulated so that bursts in the sequence of bursts have a phase which varies.
Quantum Devices Bonded Beneath a Superconducting Shield: Part 2
NASA Astrophysics Data System (ADS)
McRae, Corey Rae; Abdallah, Adel; Bejanin, Jeremy; Earnest, Carolyn; McConkey, Thomas; Pagel, Zachary; Mariantoni, Matteo
The next-generation quantum computer will rely on physical quantum bits (qubits) organized into arrays to form error-robust logical qubits. In the superconducting quantum circuit implementation, this architecture will require the use of larger and larger chip sizes. In order for on-chip superconducting quantum computers to be scalable, various issues found in large chips must be addressed, including the suppression of box modes (due to the sample holder) and the suppression of slot modes (due to fractured ground planes). By bonding a metallized shield layer over a superconducting circuit using thin-film indium as a bonding agent, we have demonstrated proof of concept of an extensible circuit architecture that holds the key to the suppression of spurious modes. Microwave characterization of shielded transmission lines and measurement of superconducting resonators were compared to identical unshielded devices. The elimination of box modes was investigated, as well as bond characteristics including bond homogeneity and the presence of a superconducting connection.
Neff, Michael; Rauhut, Guntram
2014-02-05
Multidimensional potential energy surfaces obtained from explicitly correlated coupled-cluster calculations and further corrections for high-order correlation contributions, scalar relativistic effects and core-correlation energy contributions were generated in a fully automated fashion for the double-minimum benchmark systems OH3(+) and NH3. The black-box generation of the potentials is based on normal coordinates, which were used in the underlying multimode expansions of the potentials and the μ-tensor within the Watson operator. Normal coordinates are not the optimal choice for describing double-minimum potentials, and the question remains whether they can be used for accurate calculations at all. However, their unique definition is an appealing feature, which removes remaining errors in truncated potential expansions arising from different choices of curvilinear coordinate systems. Fully automated calculations are presented, which demonstrate that the proposed scheme allows for the determination of energy levels and tunneling splittings as a routine application. Copyright © 2013 Elsevier B.V. All rights reserved.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt’s additive damped trend method, Holt-Winters’ additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt’s exponential smoothing. The hotspot models obtained with the Box-Jenkins method were the autoregressive integrated moving average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the root mean squared error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. The Loess decomposition model was therefore used to forecast the number of hotspots. The forecasting results indicate that the hotspot pattern tends to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remains stationary in East Kutai.
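As a rough illustration of the kind of comparison the study performs, the Python sketch below fits a Holt-Winters additive model and an ARIMA(1,1,0) model to a synthetic monthly hotspot series and ranks them by RMSE on a one-year hold-out. The series, dates and model orders are illustrative assumptions, not the paper's data or software.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2)))

# Hypothetical monthly hotspot counts with a yearly seasonal cycle.
idx = pd.date_range("2010-01", periods=84, freq="MS")
rng = np.random.default_rng(1)
y = pd.Series(50 + 30 * np.sin(2 * np.pi * np.arange(84) / 12) + rng.normal(0, 5, 84), index=idx)
train, test = y[:-12], y[-12:]          # hold out the final year for evaluation

hw = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=12).fit()
arima = ARIMA(train, order=(1, 1, 0)).fit()

print("Holt-Winters RMSE:", rmse(test, hw.forecast(12)))
print("ARIMA(1,1,0) RMSE:", rmse(test, arima.forecast(12)))
```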
Optical spectroscopic followup of XMMSL1 J164303.7+653253 in the error box of IGR J16426+6536
NASA Astrophysics Data System (ADS)
Parisi, P.; Masetti, N.; Malizia, A.; Morelli, L.; Mason, E.; Dean, A. J.; Ubertini, P.
2008-10-01
We report on a spectroscopic observation of the optical object USNO-A2.0 1500-06133361 (with J2000 coordinates RA = 16 43 04.07, Dec = +65 32 50.9 and magnitude R ~ 18.9) inside the error circle of the XMM-Newton slew source XMMSL1 J164303.7+653253 (see Ibarra et al., ATel #1397), possibly associated with the unidentified INTEGRAL source IGR J16426+6536 (Bird et al. 2007, ApJS, 170, 175). The observations were performed on 2008 February 04, starting at 06:10 UT, with DOLORES, a focal reducer instrument installed on the 3.58m Telescopio Nazionale Galileo (TNG) at the Astronomical Observatory of Roque de Los Muchachos (Santa Cruz de La Palma, Spain), for a total exposure time of 1800 s.
Heterogeneity in Short Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Norris, Jay P.; Gehrels, Neil; Scargle, Jeffrey D.
2011-07-01
We analyze the Swift/BAT sample of short gamma-ray bursts, using an objective Bayesian Block procedure to extract temporal descriptors of the bursts' initial pulse complexes (IPCs). The sample is comprised of 12 and 41 bursts with and without extended emission (EE) components, respectively. IPCs of non-EE bursts are dominated by single pulse structures, while EE bursts tend to have two or more pulse structures. The medians of characteristic timescales—durations, pulse structure widths, and peak intervals—for EE bursts are factors of ~2-3 longer than for non-EE bursts. A trend previously reported by Hakkila and colleagues unifying long and short bursts—the anti-correlation of pulse intensity and width—continues in the two short burst groups, with non-EE bursts extending to more intense, narrower pulses. In addition, we find that preceding and succeeding pulse intensities are anti-correlated with pulse interval. We also examine the short burst X-ray afterglows as observed by the Swift/X-Ray Telescope (XRT). The median flux of the initial XRT detections for EE bursts (~6×10⁻¹⁰ erg cm⁻² s⁻¹) is ≳20× brighter than for non-EE bursts, and the median X-ray afterglow duration for EE bursts (~60,000 s) is ~30× longer than for non-EE bursts. The tendency for EE bursts toward longer prompt-emission timescales and higher initial X-ray afterglow fluxes implies larger energy injections powering the afterglows. The longer-lasting X-ray afterglows of EE bursts may suggest that a significant fraction explode into denser environments than non-EE bursts, or that the sometimes-dominant EE component efficiently powers the afterglow. Combined, these results favor different progenitors for EE and non-EE short bursts.
Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.
2008-01-01
Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise from these simulations, due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time of the order of years is required to initialize models, otherwise the solution will contain bathymetric change that is not due to environmental forcings, but rather improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods.
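A minimal sketch of a zero-dimensional cohesive sediment model of the kind described above is given below. The Partheniades erosion and Krone deposition formulations and all parameter values are generic textbook choices, not the calibrated values of the study; the sketch only shows how suspended concentration and bed level respond to a sinusoidal tidal current in a well-mixed control volume.

```python
import numpy as np

# Illustrative parameters (not the paper's calibrated values).
rho, Cd = 1000.0, 0.0025        # water density (kg/m^3), drag coefficient
h = 5.0                         # depth of the well-mixed control volume (m)
ws = 1e-4                       # settling velocity (m/s)
M = 5e-5                        # erosion rate constant (kg/m^2/s)
tau_cr, tau_d = 0.2, 0.1        # critical shear stresses for erosion / deposition (Pa)
u_max = 0.6                     # maximum tidal current (m/s); the most sensitive parameter
rho_bed = 400.0                 # dry bulk density of the bed (kg/m^3)

dt, days = 30.0, 30             # time step (s), simulated duration (d)
n = int(days * 86400 / dt)
C, bed = 0.05, 0.0              # suspended concentration (kg/m^3), bed-level change (m)

for i in range(n):
    u = u_max * np.sin(2 * np.pi * i * dt / 44712.0)            # semidiurnal (M2) tide
    tau = rho * Cd * u * u                                       # bed shear stress
    ero = M * (tau / tau_cr - 1.0) if tau > tau_cr else 0.0      # Partheniades erosion
    dep = ws * C * (1.0 - tau / tau_d) if tau < tau_d else 0.0   # Krone deposition
    C += dt * (ero - dep) / h
    bed += dt * (dep - ero) / rho_bed

print(f"net bathymetric change after {days} d: {bed * 1000:.2f} mm")
```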
Correlations of Prompt and Afterglow Emission in Swift Long and Short Gamma Ray Bursts
NASA Technical Reports Server (NTRS)
Gehrels, Neil; Barthelmy, S. D.; Burrows, D. N.; Cannizzo, J. K.; Chincarini, G.; Fenimore, E.; Kouveliotou, C.; O'Brien, P.; Palmer, D. M.; Racusin, J.;
2008-01-01
Correlation studies of prompt and afterglow emissions from gamma-ray bursts (GRBs) between different spectral bands have been difficult to do in the past because few bursts had comprehensive and intercomparable afterglow measurements. In this paper we present a large and uniform data set for correlation analysis based on bursts detected by the Swift mission. For the first time, short and long bursts can be analyzed and compared. It is found for both classes that the optical, X-ray and gamma-ray emissions are linearly correlated, but with a large spread about the correlation line; stronger bursts tend to have brighter afterglows, and bursts with brighter X-ray afterglow tend to have brighter optical afterglow. Short bursts are, on average, weaker in both prompt and afterglow emissions. No short bursts are seen with extremely low optical to X-ray ratio as occurs for 'dark' long bursts. Although statistics are still poor for short bursts, there is no evidence yet for a subgroup of short bursts with high extinction as there is for long bursts. Long bursts are detected in the dark category at the same fraction as for pre-Swift bursts. Interesting cases are discovered of long bursts that are detected in the optical, and yet have low enough optical to X-ray ratio to be classified as dark. For the prompt emission, short and long bursts have different average tracks on flux vs fluence plots. In Swift, GRB detections tend to be fluence limited for short bursts and flux limited for long events.
Abujadi, Caio; Croarkin, Paul E; Bellini, Bianca B; Brentani, Helena; Marcolin, Marco A
2017-12-11
Theta-burst stimulation (TBS) modulates synaptic plasticity more efficiently than standard repetitive transcranial magnetic stimulation delivery and may be a promising modality for neuropsychiatric disorders such as autism spectrum disorder (ASD). At present there are few effective interventions for prefrontal cortex dysfunction in ASD. We report on an open-label, pilot study of intermittent TBS (iTBS) to target executive function deficits and restricted, repetitive behaviors in male children and adolescents with ASD. Ten right-handed, male participants, aged 9-17 years with ASD were enrolled in an open-label trial of iTBS treatment. Fifteen sessions of neuronavigated iTBS at 100% motor threshold targeting the right dorsolateral prefrontal cortex were delivered over 3 weeks. Parent report scores on the Repetitive Behavior Scale Revised and the Yale-Brown Obsessive Compulsive Scale demonstrated improvements with iTBS treatment. Participants demonstrated improvements in perseverative errors on the Wisconsin Card Sorting Test and total time for the Stroop test. The iTBS treatments were well tolerated with no serious adverse effects. These preliminary results suggest that further controlled interventional studies of iTBS for ASD are warranted.
Fast radio bursts as a cosmic probe?
NASA Astrophysics Data System (ADS)
Zhou, Bei; Li, Xiang; Wang, Tao; Fan, Yi-Zhong; Wei, Da-Ming
2014-05-01
We discuss the possibility of using fast radio bursts (FRBs)—if cosmological—as a viable cosmic probe. We find that the contribution of the host galaxies to the detected dispersion measures can be inapparent for the FRBs that are not from galaxy centers or star-forming regions. The inhomogeneity of the intergalactic medium (IGM), however, causes significant deviation of the dispersion measure from that predicted in the simplified homogeneous IGM model for an individual event. Fortunately, with sufficient FRBs along different sightlines but within a very narrow redshift interval (e.g., Δz ˜0.05), the mean obtained from averaging observed dispersion measures does not suffer such a problem and hence may be used as a cosmic probe. We show that in the optimistic case (e.g., about 20 FRBs in each Δz have been measured; the most distant FRBs were at redshift ≥3; the host galaxies and the FRB sources contribute little to the detected dispersion measures) and with all the uncertainties (i.e., the inhomogeneity of the IGM, the contribution and uncertainty of host galaxies, and the evolution and error of fIGM) considered, FRBs could help constrain the equation of state of dark energy.
NASA Technical Reports Server (NTRS)
Starling, R. L. C.; Wijers, R. A. M. J.; Wiersema, K.; Rol, E.; Curran, P. A.; Kouveliotou, C.; vanderHorst, A. J.; Heemskerk, M. H. M.
2006-01-01
We use a new approach to obtain limits on the absorbing columns towards an initial sample of 10 long Gamma-Ray Bursts observed with BeppoSAX and selected on the basis of their good optical and nIR coverage, from simultaneous fits to nIR, optical and X-ray afterglow data, in count space and including the effects of metallicity. In no case is an MW-like extinction preferred, when testing MW, LMC and SMC extinction laws. The 2175 Å bump would in principle be detectable in all these afterglows, but is not present in the data. An SMC-like gas-to-dust ratio or lower value can be ruled out for 4 of the hosts analysed here (assuming SMC metallicity and extinction law), whilst the remainder of the sample have too large an error to discriminate. We provide a more accurate estimate of the line-of-sight extinction and improve upon the uncertainties for the majority of the extinction measurements made in previous studies of this sample. We discuss this method to determine extinction values in comparison with the most commonly employed existing methods.
NASA Technical Reports Server (NTRS)
Mccallister, R. D.; Crawford, J. J.
1981-01-01
It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.
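To make the maximum-likelihood (Viterbi) decoding idea concrete, here is a small Python sketch for a rate-1/2, constraint-length-3 convolutional code. The generator polynomials, message and single injected channel error are arbitrary examples for illustration; they do not correspond to the code parameters actually selected for the 30/20 GHz MCD.

```python
import numpy as np

# Rate-1/2, constraint-length-3 convolutional code (generators 7, 5 in octal).
G = [0b111, 0b101]

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                                   # [b_t, b_{t-1}, b_{t-2}]
        out += [bin(reg & g).count("1") & 1 for g in G]          # two parity symbols per bit
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    n_states = 4
    metric = [0] + [np.inf] * (n_states - 1)                     # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        r = received[2 * t:2 * t + 2]
        new_metric = [np.inf] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            for b in (0, 1):
                reg = (b << 2) | s
                expected = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(r, expected))  # Hamming metric
                if m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]                         # survivor with best metric

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                                    # inject one channel error
print(viterbi_decode(coded, len(msg)) == msg)                    # error corrected -> True
```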
Liu, Xiang; Effenberger, Frank; Chand, Naresh
2015-03-09
We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10⁻⁴ are respectively achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s after transmission over a 20-km standard single-mode fiber without any optical amplification.
Evaluation of CDMA system capacity for mobile satellite system applications
NASA Technical Reports Server (NTRS)
Smith, Patrick O.; Geraniotis, Evaggelos A.
1988-01-01
A specific Direct-Sequence/Pseudo-Noise (DS/PN) Code-Division Multiple-Access (CDMA) mobile satellite system (MSAT) architecture is discussed. The performance of this system is evaluated in terms of the maximum number of active MSAT subscribers that can be supported at a given uncoded bit-error probability. The evaluation decouples the analysis of the multiple-access capability (i.e., the number of instantaneous user signals) from the analysis of the multiple-access multiplier effect allowed by the use of CDMA with burst-modem operation. We combine the results of these two analyses and present numerical results for scenarios of interest to the mobile satellite system community.
2011-01-01
Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614
Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy
2011-08-19
Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.
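A compact illustration of the two-step procedure contrasted in the study (estimating the Box-Cox parameter first, then running the ANOVA) might look like the following Python sketch. The simulated fall rates and the 3 × 4 factor layout are stand-ins for the NDNQI data, and the scipy/statsmodels tooling is an assumption of this sketch, not the software used by the authors.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical positively skewed outcome (e.g., a fall rate) over a 3 x 4 factorial layout.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "unit": np.repeat(["ICU", "MedSurg", "Rehab"], 40),
    "quarter": np.tile(["Q1", "Q2", "Q3", "Q4"], 30),
})
df["falls"] = rng.lognormal(mean=1.0, sigma=0.6, size=len(df)) + 0.1

# Step 1: estimate the Box-Cox transformation parameter without the predictors.
df["falls_bc"], lam = stats.boxcox(df["falls"])
print(f"estimated lambda = {lam:.3f}")

# Step 2: ordinary ANOVA on the transformed outcome.
model = smf.ols("falls_bc ~ C(unit) * C(quarter)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```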
Reproducibility of the Internal Load and Performance-Based Responses to Simulated Amateur Boxing.
Thomson, Edward D; Lamb, Kevin L
2017-12-01
Thomson, ED and Lamb, KL. Reproducibility of the internal load and performance-based responses to simulated amateur boxing. J Strength Cond Res 31(12): 3396-3402, 2017-The aim of this study was to examine the reproducibility of the internal load and performance-based responses to repeated bouts of a three-round amateur boxing simulation protocol (boxing conditioning and fitness test [BOXFIT]). Twenty-eight amateur boxers completed 2 familiarization trials before performing 2 complete trials of the BOXFIT, separated by 4-7 days. To characterize the internal load, mean (HRmean) and peak (HRpeak) heart rate, breath-by-breath oxygen uptake (V̇O2), aerobic energy expenditure, excess carbon dioxide production (CO2excess), and ratings of perceived exertion were recorded throughout each round, and blood lactate determined post-BOXFIT. Additionally, an indication of the performance-based demands of the BOXFIT was provided by a measure of acceleration of the punches thrown in each round. Analyses revealed there were no significant differences (p > 0.05) between repeated trials in any round for all dependent measures. The typical error (coefficient of variation, %) for all but 1 marker of internal load (CO2excess) was 1.2-16.5% and reflected a consistency that was sufficient for the detection of moderate changes in variables owing to an intervention. The reproducibility of the punch accelerations was high (coefficient of variation range = 2.1-2.7%). In general, these findings suggest that the internal load and performance-based efforts recorded during the BOXFIT are reproducible and, thereby, offer practitioners a method by which meaningful changes impacting on performance could be identified.
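For readers unfamiliar with the reproducibility statistics used here, the sketch below computes a typical error and coefficient of variation between two repeated trials. The heart-rate values are invented for illustration, and the SD-of-differences/√2 convention used in the sketch is a common approach that may differ in detail from the authors' analysis.

```python
import numpy as np

def reproducibility(trial1, trial2):
    """Typical error and coefficient of variation between two repeated trials
    (a simple version of the usual test-retest reliability statistics)."""
    t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
    diff = t2 - t1
    typical_error = diff.std(ddof=1) / np.sqrt(2)          # SD of differences / sqrt(2)
    cv_percent = 100 * typical_error / np.mean(np.concatenate([t1, t2]))
    return typical_error, cv_percent

# Hypothetical mean heart rates (b/min) for 8 boxers over two BOXFIT trials.
hr_trial1 = [172, 168, 175, 180, 166, 171, 177, 169]
hr_trial2 = [170, 171, 174, 178, 168, 173, 176, 171]
te, cv = reproducibility(hr_trial1, hr_trial2)
print(f"typical error = {te:.2f} b/min, CV = {cv:.1f}%")
```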
Properties of the Second Outburst of the Bursting Pulsar (GRO J1744-28) as Observed with BATSE
NASA Technical Reports Server (NTRS)
Woods, Peter M.; Kouveliotou, Chryssa; van Paradijs, Jan; Briggs, Michael S.; Wilson, C. A.; Deal, Kim; Harmon, B. A.; Fishman, G. J.; Lewin, W. H. G.; Kommers, J.
1999-01-01
One year after its discovery, the Bursting Pulsar (GRO J1744-28) went into outburst again, displaying the hard X-ray bursts and pulsations that make this source unique. We report on BATSE (Burst and Transient Source Experiment) observations of both the persistent and burst emission for this second outburst and draw comparisons with the first. The second outburst was smaller than the first in both duration and peak luminosity. The persistent flux, burst peak flux, and burst fluence were all reduced in amplitude by a factor of approximately 1.7. Despite these differences, the two outbursts were very similar with respect to the burst occurrence rate, the durations and spectra of bursts, the absence of spectral evolution during bursts, and the evolution of the ratio alpha of average persistent to burst luminosity. Although no spectral evolution was found within individual bursts, we find evidence for a small (20%) variation of the spectral temperature during the course of the second outburst.
How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?
Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina
2015-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615
How does aging affect the types of error made in a visual short-term memory 'object-recall' task?
Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina
2014-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.
[Study on the ARIMA model application to predict echinococcosis cases in China].
En-Li, Tan; Zheng-Feng, Wang; Wen-Ce, Zhou; Shi-Zhu, Li; Yan, Lu; Lin, Ai; Yu-Chun, Cai; Xue-Jiao, Teng; Shun-Xian, Zhang; Zhi-Sheng, Dang; Chun-Li, Yang; Jia-Xu, Chen; Wei, Hu; Xiao-Nong, Zhou; Li-Guang, Tian
2018-02-26
To predict the monthly reported echinococcosis cases in China with the autoregressive integrated moving average (ARIMA) model, so as to provide a reference for the prevention and control of echinococcosis. SPSS 24.0 software was used to construct ARIMA models based on the monthly reported echinococcosis case time series from 2007 to 2015 and from 2007 to 2014, respectively, and the accuracies of the two ARIMA models were compared. The model based on the data of the monthly reported cases of echinococcosis in China from 2007 to 2015 was ARIMA(1,0,0)(1,1,0)12; the relative error between reported and predicted cases was -13.97%, AR(1) = 0.367 (t = 3.816, P < 0.001), SAR(1) = -0.328 (t = -3.361, P = 0.001), and Ljung-Box Q = 14.119 (df = 16, P = 0.590). The model based on the data of the monthly reported cases of echinococcosis in China from 2007 to 2014 was ARIMA(1,0,0)(1,0,1)12; the relative error between reported and predicted cases was 0.56%, AR(1) = 0.413 (t = 4.244, P < 0.001), SAR(1) = 0.809 (t = 9.584, P < 0.001), SMA(1) = 0.356 (t = 2.278, P = 0.025), and Ljung-Box Q = 18.924 (df = 15, P = 0.217). Different time series may yield different ARIMA models for the same infectious disease. It remains to be further verified that, as more data are accumulated and the prediction period shortens, the average relative error becomes smaller. The establishment and prediction of an ARIMA model is a dynamic process that needs to be adjusted and optimized continuously according to the accumulated data; meanwhile, full consideration should be given to the intensity of the related infectious disease reporting work (such as disease censuses and special investigations).
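A sketch of fitting a seasonal model with the orders reported above, ARIMA(1,0,0)(1,1,0)12, is shown below using statsmodels. The simulated monthly series and the hold-out evaluation are illustrative assumptions; the original analysis was run in SPSS on the national surveillance data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly case counts standing in for the national echinococcosis series.
idx = pd.date_range("2007-01", periods=108, freq="MS")
rng = np.random.default_rng(7)
cases = pd.Series(300 + 50 * np.sin(2 * np.pi * np.arange(108) / 12) + rng.normal(0, 20, 108),
                  index=idx)

train, test = cases[:-12], cases[-12:]                   # hold out the final year
model = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 1, 0, 12)).fit(disp=False)
pred = model.forecast(steps=12)

# Relative error between the total predicted and reported cases over the hold-out year.
rel_err = 100 * (pred.values - test.values).sum() / test.values.sum()
print(f"relative error over the hold-out year: {rel_err:.1f}%")
```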
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
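The core idea described above (packet loss acting as a random sampling of the transmitted samples, followed by sparse reconstruction at the receiver) can be sketched as follows. The DCT sparsity basis, the 40% loss rate and the use of orthogonal matching pursuit are illustrative choices for this sketch, not necessarily those of the paper.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
n, k = 256, 8                                   # signal length, sparsity in the DCT domain

# Sparse signal: a few DCT coefficients, transmitted sample-by-sample over a lossy link.
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 5, k)
x = idct(coeffs, norm="ortho")

# Packet loss modelled as random sampling of the transmitted samples (about 40% lost).
received = rng.random(n) > 0.4
Psi = idct(np.eye(n), axis=0, norm="ortho")     # synthesis (inverse DCT) basis
A = Psi[received, :]                            # effective sensing matrix at the receiver

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, x[received])
x_hat = idct(omp.coef_, norm="ortho")           # reconstruct the full signal
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```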
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Dwork, Nicholas; Khan, Saara A.; Millet, Matthew; Magar, Kiran; Javanmard, Mehdi; Bowden, Audrey K.
2017-03-01
Urinalysis dipsticks were designed to revolutionize urine-based medical diagnosis. They are cheap, extremely portable, and have multiple assays patterned on a single platform. They were also meant to be incredibly easy to use. Unfortunately, there are many aspects in both the preparation and the analysis of the dipsticks that are plagued by user error. This high error is one reason that dipsticks have failed to flourish in both the at-home market and in low-resource settings. Sources of error include: inaccurate volume deposition, varying lighting conditions, inconsistent timing measurements, and misinterpreted color comparisons. We introduce a novel manifold and companion software for dipstick urinalysis that eliminates the aforementioned error sources. A micro-volume slipping manifold ensures precise sample delivery, an opaque acrylic box guarantees consistent lighting conditions, a simple sticker-based timing mechanism maintains accurate timing, and custom software that processes video data captured by a mobile phone ensures proper color comparisons. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing. Furthermore, the proposed all-acrylic slipping manifold is reusable and low in cost, making it a potential solution for at-home users and low-resource settings.
Mechanical problem-solving strategies in left-brain damaged patients and apraxia of tool use.
Osiurak, François; Jarry, Christophe; Lesourd, Mathieu; Baumard, Josselin; Le Gall, Didier
2013-08-01
Left brain damage (LBD) can impair the ability to use familiar tools (apraxia of tool use) as well as novel tools to solve mechanical problems. Thus far, the emphasis has been placed on quantitative analyses of patients' performance. Nevertheless, the question still to be answered is, what are the strategies employed by those patients when confronted with tool use situations? To answer it, we asked 16 LBD patients and 43 healthy controls to solve mechanical problems by means of several potential tools. To specify the strategies, we recorded the time spent in performing four kinds of action (no manipulation, tool manipulation, box manipulation, and tool-box manipulation) as well as the number of relevant and irrelevant tools grasped. We compared LBD patients' performance with that of controls who encountered difficulties with the task (controls-) or not (controls+). Our results indicated that LBD patients grasped a higher number of irrelevant tools than controls+ and controls-. Concerning time allocation, controls+ and controls- spent significantly more time in performing tool-box manipulation than LBD patients. These results are inconsistent with the possibility that LBD patients could engage in trial-and-error strategies and, rather, suggest that they tend to be perplexed. These findings seem to indicate that the inability to reason about the objects' physical properties might prevent LBD patients from following any problem-solving strategy. Copyright © 2013 Elsevier Ltd. All rights reserved.
An Ephemeral Burst-Buffer File System for Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Teng; Moody, Adam; Yu, Weikuan
BurstFS is a distributed file system for node-local burst buffers on high-performance computing systems. BurstFS presents a shared file-system space across the burst buffers, so that applications that use shared files can access the highly scalable burst buffers without modification.
Herringbone bursts associated with type II solar radio emission
NASA Technical Reports Server (NTRS)
Cairns, I. H.; Robinson, R. D.
1987-01-01
Detailed observations of the herringbone (HB) fine structure on type II solar radio bursts are presented. Data from the Culgoora radiospectrograph, radiometer and radioheliograph are analyzed. The characteristic spectral profiles, frequency drift rates and exciter velocities, fluxes, source sizes, brightness temperatures, and polarizations of individual HB bursts are determined. Correlations between individual bursts within the characteristic groups of bursts and the properties of the associated type II bursts are examined. These data are compatible with HB bursts being radiation at multiples of the plasma frequency generated by electron streams accelerated by the type II shock. HB bursts are physically distinct phenomena from type II and type III bursts, differing significantly in emission processes and/or source conditions; this conclusion indicates that many of the presently available theoretical ideas for HB bursts are incorrect.
Effects of Thermonuclear X-Ray Bursts on Non-burst Emissions in the Soft State of 4U 1728–34
NASA Astrophysics Data System (ADS)
Bhattacharyya, Sudip; Yadav, J. S.; Sridhar, Navin; Verdhan Chauhan, Jai; Agrawal, P. C.; Antia, H. M.; Pahari, Mayukh; Misra, Ranjeev; Katoch, Tilak; Manchanda, R. K.; Paul, Biswajit
2018-06-01
It has recently been shown that the persistent emission of a neutron star low-mass X-ray binary (LMXB) evolves during a thermonuclear (type-I) X-ray burst. The reason for this evolution, however, is not fully known. This uncertainty can introduce significant systematics in the neutron star radius measurement using burst spectra, particularly if an unknown but significant fraction of the burst emission, which is reprocessed, contributes to the changes in the persistent emission during the burst. Here, by analyzing individual burst data of AstroSat/LAXPC from the neutron star LMXB 4U 1728–34 in the soft state, we show that the burst emission is not significantly reprocessed by a corona covering the neutron star. Rather, our analysis suggests that the burst emission enhances the accretion disk emission, possibly by increasing the accretion rate via the disk. This enhanced disk emission, which is Comptonized by a corona covering the disk, can explain an increased persistent emission observed during the burst. This finding provides an understanding of persistent emission components and their interaction with the thermonuclear burst emission. Furthermore, as burst photons are not significantly reprocessed, non-burst and burst emissions can be reliably separated, which is required to reduce systematic uncertainties in the stellar radius measurement.
Heuristic Algorithms for Solving Two Dimensional Loading Problems.
1981-03-01
Consider the following problem: allocate a set of "N" boxes, each having a specified length, width and height, to a pallet of length "L" and width "W" ... the boxes and then select the best solution. Since these heuristics are essentially a trial and error procedure their formulas become very ...
1993-03-01
Boxed meal contents during Desert Storm: roast beef sandwich (on white bread), ham and cheese sandwich (on white bread), cherry drink, orange drink ... The day-to-day variation in precision during the study, given as the coefficient of variation of ... had the measured analytes exceeding the cutpoints set ... errors in the estimation of LDL cholesterol (LDLc). Clin Chem, 1985, 31:940 (abs 239). Finally, the blood HDLc concentration cannot be employed ...
Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty
NASA Astrophysics Data System (ADS)
Kuczera, George
1983-10-01
A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
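A toy version of the likelihood machinery described above (Box-Cox transformed residuals with an AR(1) error model, maximized to find the posterior mode under flat priors) is sketched below in Python. The two-parameter "catchment model", the data and the starting values are invented for illustration and stand in for a real rainfall-runoff model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy data: monthly rainfall and "observed" runoff from a hypothetical linear relation.
rng = np.random.default_rng(5)
rain = rng.gamma(2.0, 40.0, 120)
runoff_obs = np.maximum(0.45 * rain - 8 + rng.normal(0, 6, 120), 0.1)

def simulate(theta, rain):
    a, b = theta
    return np.maximum(a * rain - b, 0.1)        # stand-in for a real catchment model

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def neg_log_posterior(params):
    a, b, lam, rho, log_sigma = params
    sigma = np.exp(log_sigma)
    # Residuals in Box-Cox space, with an AR(1) error model (Gaussian innovations).
    resid = boxcox(runoff_obs, lam) - boxcox(simulate((a, b), rain), lam)
    innov = resid[1:] - rho * resid[:-1]
    ll = norm.logpdf(innov, scale=sigma).sum()
    ll += (lam - 1.0) * np.log(runoff_obs).sum()   # Jacobian of the Box-Cox transformation
    return -ll                                     # flat priors -> posterior mode = MLE

start = np.array([0.4, 5.0, 0.5, 0.2, np.log(5.0)])
fit = minimize(neg_log_posterior, start, method="Nelder-Mead")
print("posterior mode (a, b, lambda, rho, log sigma):", np.round(fit.x, 3))
```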
Packaging of pharmaceuticals: still too many dangers but several encouraging initiatives.
2007-06-01
(1) In 2006 in France, several drugs sold in poorly designed packaging exposed patients to a risk of serious adverse effects. (2) In 2006, Prescrire used a standardised methodology to examine the packaging of all new pharmaceutical products (656 different boxes) assessed in the New Products section of our French edition, la revue Prescrire. About 75% of these boxes contained tablets or capsules, mostly in blister packs. (3) Poor labelling remains a major problem. The international nonproprietary names (INN) is hard to spot on most boxes of patented brand-name drugs and is often overshadowed by the brand name. The primary packaging of many products does not even include the INN. (4) Two particularly ambiguous types of labelling are becoming more common on blister packs: pre-cut multiple-unit blister packs on which the labelling is truncated when a unit blister is removed; and blister packs on which the labelling spans two blisters, creating a risk of overdose. (5) The use of colours is frequently inappropriate. In particular, irrelevant information is often highlighted unnecessarily, while other, important information is barely visible. (6) Too many devices for oral administration create a risk of misuse. Very few are graduated in units of weight. Most are graduated in millilitres, obliging caregivers to use conversion charts and thus creating a risk of dosing errors. Devices graduated in kg bodyweight can also lead to dosing errors. (7) The labelling of some injectable drugs is barely legible. The various models of plastic ampoules, that are gradually replacing glass ampoules, can represent a danger because they resemble other plastic ampoules containing products administered by different routes. Packaging that does not provide a syringe or needle can cause problems for caregivers and represents another potential source of error. (8) Many of the patient information leaflets examined in 2006 had the same flaws as previously observed, i.e. uneven information quality, discrepancies between different sections, and out-of-date information. More and more French leaflets now include insets offering "Health Advice". There are better and worse examples, but there is no guarantee that they have been properly reviewed by the regulatory agency. (9) Increasingly drug boxes include pictograms, even though several studies have shown they are often difficult to interpret. And most boxes of generics also now include standard dosing schedules that are not always appropriate and may create a risk of dosing errors with potentially serious consequences. (10) 67 multidose bottles examined in 2006 had no childproof safety cap. Some contained psychotropics, which can have life-threatening effects if accidentally consumed in large amounts. (11) Some manufacturers have adopted realistic solutions to these problems. In particular, generics manufacturers again improved product labelling in 2006 (emphasis on the INN), appropriate use of colours for dose differentiation, and, encouragingly, far more Braille labels. (12) In 2006, the French regulatory agency introduced several measures aimed at improving the labelling of ampoules containing some injectable drugs. The impact of these measures was visible on several products marketed in 2006, including plastic vials of solutions for nebulization. 
(13) Several other examples of well-designed packaging were seen: safety devices on prefilled syringes; a childproof safety device; a tamperproof ring; unit-dose blister packs; clearly written patient leaflets; and the use of clear and appropriate symbols and pictograms. (14) In practice, in view of the large number of incidents recorded in 2006, and the plethora of packaging designs, caregivers should take time to analyse and discuss drug packaging. In this way, they will be in a position to distinguish between good and bad drug packaging, and to anticipate the risks associated with poorly designed packaging. (15) There are many ways in which drug regulatory authorities can help to ensure that drugs are sold in safe packaging. The French regulatory agency's work on the labelling of injectable drugs is an encouraging step. European Directive 2004/27/EC on medicines for human use provides for improvements in labelling (e.g. Braille) and patient information leaflets. Transposition of these measures into French law should lead to a number of improvements, provided the relevant regulations and guidelines place patients' interests first.
Effect of wear on the burst strength of L-80 steel casing
NASA Astrophysics Data System (ADS)
Irawan, S.; Bharadwaj, A. M.; Temesgen, B.; Karuppanan, S.; Abdullah, M. Z. B.
2015-12-01
Casing wear has recently become one of the areas of research interest in the oil and gas industry, especially in extended-reach well drilling. The burst strength of a worn casing is one of the most significantly affected mechanical properties, yet it is an area where little research has been done. The most commonly used equations to calculate the resulting burst strength after wear are the Barlow, initial yield burst, full yield burst and rupture burst equations. The objective of this study was to estimate casing burst strength after wear through finite element analysis (FEA). It included calculation and comparison of the different theoretical burst pressures with the simulation results, along with the effect of different wear shapes on L-80 casing material. The von Mises stress was used in the estimation of the burst pressure. The results obtained show that the casing burst strength decreases as the wear percentage increases. Moreover, the burst strength of the casing obtained from the FEA is higher than the theoretical burst strength values. Casing with crescent-shaped wear gives the highest burst strength value when simulated under nonlinear analysis.
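The Barlow equation mentioned above is simple enough to sketch directly. Treating wear as a uniform reduction in wall thickness (a simplification of the crescent-shaped wear studied in the paper) gives the following illustrative calculation; the casing geometry values are assumptions, not the paper's test case.

```python
def barlow_burst_pressure(yield_strength_psi, wall_in, od_in):
    """Barlow's formula, P = 2*S*t/D (thin-wall approximation)."""
    return 2.0 * yield_strength_psi * wall_in / od_in

# Illustrative L-80 casing geometry (assumed values).
yield_l80 = 80_000        # minimum yield strength of L-80 grade, psi
od, wall = 9.625, 0.545   # outer diameter and nominal wall thickness, inches

for wear_pct in (0, 10, 20, 30, 40):
    t_remaining = wall * (1 - wear_pct / 100)   # crescent wear idealised as uniform thinning
    p = barlow_burst_pressure(yield_l80, t_remaining, od)
    print(f"{wear_pct:2d}% wear -> Barlow burst ≈ {p:,.0f} psi")
```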
Bursts of seizures in long-term recordings of human focal epilepsy
Karoly, Philippa J.; Nurse, Ewan S.; Freestone, Dean R.; Ung, Hoameng; Cook, Mark J.; Boston, Ray
2017-01-01
Summary Objective We report on temporally clustered seizures detected from continuous long-term ambulatory human electroencephalographic data. The objective was to investigate short-term seizure clustering, which we have termed bursting, and consider implications for patient care, seizure prediction, and evaluating therapies. Methods Chronic ambulatory intracranial EEG data collected for the purpose of seizure prediction were annotated to identify seizure events. A detection algorithm was used to identify bursts of events. Burst events were compared to non-burst events to evaluate event dispersion, duration and dynamics. Results Bursts of seizures were present in six of fifteen patients, and detections were consistent over long term monitoring (> 2 years). Seizures within bursts are highly overdispersed compared to non-burst seizures. There was a complicated relationship between bursts and clinical seizures, although bursts were associated with multi-modal distributions of seizure duration, and poorer predictive outcomes. For three subjects, bursts demonstrated distinctive pre-ictal dynamics compared to clinical seizures. Significance We have previously hypothesized that there are distinct physiological pathways underlying short and long duration seizures. Here we show that burst seizures fall almost exclusively within the short population of seizure durations; however, a short duration was not sufficient to induce or imply bursting. We can therefore conclude that in addition to distinct mechanisms underlying seizure duration, there are separate factors regulating bursts of seizures. We show that bursts were a robust phenomenon in our patient cohort, which were consistent with overdispersed seizure rates, suggesting long-memory dynamics. PMID:28084639
Stimulus induced bursts in severe postanoxic encephalopathy.
Tjepkema-Cloostermans, Marleen C; Wijers, Elisabeth T; van Putten, Michel J A M
2016-11-01
To report on a distinct effect of auditory and sensory stimuli on the EEG in comatose patients with severe postanoxic encephalopathy. In two comatose patients admitted to the Intensive Care Unit (ICU) with severe postanoxic encephalopathy and burst-suppression EEG, we studied the effect of external stimuli (sound and touch) on the occurrence of bursts. In patient A bursts could be induced by either auditory or sensory stimuli. In patient B bursts could only be induced by touching different facial regions (forehead, nose and chin). When stimuli were presented with relatively long intervals, bursts persistently followed the stimuli, while stimuli with short intervals (<1s) did not induce bursts. In both patients bursts were not accompanied by myoclonia. Both patients deceased. Bursts in patients with a severe postanoxic encephalopathy can be induced by external stimuli, resulting in stimulus-dependent burst-suppression. Stimulus induced bursts should not be interpreted as prognostic favourable EEG reactivity. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
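The single-batch maximum-likelihood idea can be illustrated with a small sketch: given one batch of innovations (observation-minus-forecast residuals) and an assumed background error correlation, two variance parameters are estimated by maximizing a Gaussian likelihood. The exponential correlation model, station layout and "true" values below are invented for illustration and are not the operational configuration discussed in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n = 200                                     # observations in one analysis batch

# Assumed (known) spatial correlation of the background error at the observation points.
x = np.sort(rng.uniform(0, 1000, n))        # hypothetical station positions (km)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 150.0)

# Synthetic innovations with covariance s_b^2 * C + s_o^2 * I (background + obs error).
s_b_true, s_o_true = 1.5, 0.8
S_true = s_b_true**2 * C + s_o_true**2 * np.eye(n)
innov = rng.multivariate_normal(np.zeros(n), S_true)

def neg_log_likelihood(log_params):
    s_b, s_o = np.exp(log_params)
    S = s_b**2 * C + s_o**2 * np.eye(n)
    sign, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + innov @ np.linalg.solve(S, innov))   # Gaussian NLL (up to a constant)

fit = minimize(neg_log_likelihood, np.log([1.0, 1.0]), method="Nelder-Mead")
print("estimated (sigma_b, sigma_o):", np.round(np.exp(fit.x), 2))
```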
NASA Technical Reports Server (NTRS)
Gavriil, Fotis P.; Kaspi, Victoria M.; Woods, Peter M.
2006-01-01
We report on the 2004 June 29 X-ray burst detected from the direction of the AXP 1E 1048.1-5937 using RXTE. We find a simultaneous increase of approx. 3.5 times the quiescent value in the 2-10 keV pulsed flux of 1E 1048.1-5937 during the tail of the burst, which identifies the AXP as the burst's origin. The burst was overall very similar to the two others reported from the direction of this source in 2001. The unambiguous identification of 1E 1048.1-5937 as the burster here confirms that it was the origin of the 2001 bursts as well. The epoch of the burst peak was very close to the arrival time of 1E 1048.1-5937's pulse peak. The burst exhibited significant spectral evolution, with the trend going from hard to soft. Although the average spectrum of the burst was comparable in hardness (Γ approx. 1.6) to those of the 2001 bursts, the peak of this burst was much harder (Γ approx. 0.3). During the 11 days following the burst, the AXP was observed further with RXTE, XMM-Newton, and Chandra. Pre- and post-burst observations revealed no change in the total flux or spectrum of the quiescent emission. Comparing all three bursts detected thus far from this source, we find that this event was the most fluent (>3.3 × 10⁻⁸ erg cm⁻² in the 2-20 keV band), had the highest peak flux ((59 ± 9) × 10⁻¹⁰ erg s⁻¹ cm⁻² in the 2-20 keV band), and had the longest duration (>699 s). The long duration of the burst differentiates it from SGR bursts, which have typical durations of approx. 0.1 s. Bursts that occur preferentially at pulse maximum, have fast rises, and have long tails containing the majority of the total burst energy have been seen uniquely from AXPs. The marked differences between AXP and SGR bursts may provide new clues to help understand the physical differences between these objects.
Stimulus-dependent modulation of spike burst length in cat striate cortical cells.
DeBusk, B C; DeBruyn, E J; Snider, R K; Kabara, J F; Bonds, A B
1997-07-01
Burst activity, defined by groups of two or more spikes with intervals of < or = 8 ms, was analyzed in responses to drifting sinewave gratings elicited from striate cortical neurons in anesthetized cats. Bursting varied broadly across a population of 507 simple and complex cells. Half of this population had > or = 42% of their spikes contained in bursts. The fraction of spikes in bursts did not vary as a function of average firing rate and was stationary over time. Peaks in the interspike interval histograms were found at both 3-5 ms and 10-30 ms. In many cells the locations of these peaks were independent of firing rate, indicating a quantized control of firing behavior at two different time scales. The activity at the shorter time scale most likely results from intrinsic properties of the cell membrane, and that at the longer scale from recurrent network excitation. Burst frequency (bursts per s) and burst length (spikes per burst) both depended on firing rate. Burst frequency was essentially linear with firing rate, whereas burst length was a nonlinear function of firing rate and was also governed by stimulus orientation. At a given firing rate, burst length was greater for optimal orientations than for nonoptimal orientations. No organized orientation dependence was seen in bursts from lateral geniculate nucleus cells. Activation of cortical contrast gain control at low response amplitudes resulted in no burst length modulation, but burst shortening at optimal orientations was found in responses characterized by supersaturation. At a given firing rate, cortical burst length was shortened by microinjection of gamma-aminobutyric acid (GABA), and bursts became longer in the presence of N-methyl-bicuculline, a GABA(A) receptor blocker. These results are consistent with a model in which responses are reduced at nonoptimal orientations, at least in part, by burst shortening that is mediated by GABA. A similar mechanism contributes to response supersaturation at high contrasts via recruitment of inhibitory responses that are tuned to adjacent orientations. Burst length modulation can serve as a form of coding by supporting dynamic, stimulus-dependent reorganization of the effectiveness of individual network connections.
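A minimal sketch (not the authors' analysis code) of the burst definition used above: spikes separated by interspike intervals of 8 ms or less are grouped into bursts of two or more spikes, and the fraction of spikes contained in bursts is reported.

import numpy as np

def burst_stats(spike_times_ms, max_isi_ms=8.0):
    """Return (fraction of spikes in bursts, list of burst lengths in spikes)."""
    spike_times_ms = np.asarray(spike_times_ms, dtype=float)
    isis = np.diff(spike_times_ms)
    joined = isis <= max_isi_ms          # True where consecutive spikes belong to one burst
    burst_lengths, run = [], 1
    for j in joined:
        if j:
            run += 1
        else:
            if run >= 2:
                burst_lengths.append(run)
            run = 1
    if run >= 2:
        burst_lengths.append(run)
    n_burst_spikes = sum(burst_lengths)
    return n_burst_spikes / spike_times_ms.size, burst_lengths

frac, lengths = burst_stats([0, 3, 6, 40, 80, 83, 120])
print(frac, lengths)   # 5/7 of spikes in bursts; burst lengths [3, 2]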
Demonstration of a viable quantitative theory for interplanetary type II radio bursts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, J. M., E-mail: jschmidt@physics.usyd.edu.au; Cairns, Iver H.
Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long lasting, intermittent, type II radio burst for the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling requires the combination of data-driven three-dimensional magnetohydrodynamic simulations for the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely by factors ≈ 10^6 and ≈ 10^3, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20%, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock, and in the sheath. Very strong support is provided by these multiple agreements for the theory, the efficacy of the BATS-R-US code, and the vision of using type IIs and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.
Demonstration of a viable quantitative theory for interplanetary type II radio bursts
NASA Astrophysics Data System (ADS)
Schmidt, J. M.; Cairns, Iver H.
2016-03-01
Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long lasting, intermittent, type II radio burst for the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling requires the combination of data-driven three-dimensional magnetohydrodynamic simulations for the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely by factors ≈ 10^6 and ≈ 10^3, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20%, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock, and in the sheath. Very strong support is provided by these multiple agreements for the theory, the efficacy of the BATS-R-US code, and the vision of using type IIs and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.
Low-mass X-ray binary MAXI J1421-613 observed by MAXI GSC and Swift XRT
NASA Astrophysics Data System (ADS)
Serino, Motoko; Shidatsu, Megumi; Ueda, Yoshihiro; Matsuoka, Masaru; Negoro, Hitoshi; Yamaoka, Kazutaka; Kennea, Jamie A.; Fukushima, Kosuke; Nagayama, Takahiro
2015-04-01
Monitor of All sky X-ray Image (MAXI) discovered a new outburst of an X-ray transient source named MAXI J1421-613. Because of the detection of three X-ray bursts from the source, it was identified as a neutron star low-mass X-ray binary. The results of data analyses of the MAXI GSC (Gas Slit Camera) and the Swift XRT (X-Ray Telescope) follow-up observations suggest that the spectral hardness remained unchanged during the first two weeks of the outburst. All the XRT spectra in the 0.5-10 keV band can be well explained by thermal Comptonization of multi-color disk blackbody emission. The photon index of the Comptonized component is ≈ 2, which is typical of low-mass X-ray binaries in the low/hard state. Since X-ray bursts have a maximum peak luminosity, it is possible to estimate the (maximum) distance from its observed peak flux. The peak flux of the second X-ray burst, which was observed by the GSC, is about 5 photons cm-2 s-1. By assuming a blackbody spectrum of 2.5 keV, the maximum distance to the source is estimated as 7 kpc. The position of this source is contained by the large error regions of two bright X-ray sources detected with Orbiting Solar Observatory-7 (OSO-7) in the 1970s. Besides this, no past activities at the XRT position are reported in the literature. If MAXI J1421-613 is the same source as (one of) these, the outburst observed with MAXI may have occurred after a quiescence of 30-40 years.
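The distance argument above can be reproduced with a short back-of-the-envelope calculation: convert the quoted peak photon flux into an energy flux assuming a 2.5 keV blackbody spectrum in the 2-20 keV band, then equate the peak luminosity to the Eddington limit. The adopted Eddington luminosity (≈3.8 x 10^38 erg/s, helium-rich atmosphere) and the band assumption are illustrative choices, not values stated in the abstract.

import numpy as np
from scipy.integrate import quad

KEV_TO_ERG = 1.602e-9
KPC_TO_CM = 3.086e21
L_EDD = 3.8e38                          # erg/s, assumed Eddington luminosity

def bb_photons(E, kT):                  # blackbody photon spectrum, ~ E^2 / (exp(E/kT) - 1)
    return E**2 / np.expm1(E / kT)

def mean_photon_energy_keV(kT, band=(2.0, 20.0)):
    num, _ = quad(lambda E: E * bb_photons(E, kT), *band)
    den, _ = quad(lambda E: bb_photons(E, kT), *band)
    return num / den

photon_flux = 5.0                       # photons/cm^2/s (peak flux quoted in the abstract)
energy_flux = photon_flux * mean_photon_energy_keV(2.5) * KEV_TO_ERG
d_max = np.sqrt(L_EDD / (4 * np.pi * energy_flux)) / KPC_TO_CM
print(f"maximum distance ~ {d_max:.1f} kpc")   # ~7 kpc, consistent with the abstract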
GRB 060605: multi-wavelength analysis of the first GRB observed using integral field spectroscopy
NASA Astrophysics Data System (ADS)
Ferrero, P.; Klose, S.; Kann, D. A.; Savaglio, S.; Schulze, S.; Palazzi, E.; Maiorano, E.; Böhm, P.; Grupe, D.; Oates, S. R.; Sánchez, S. F.; Amati, L.; Greiner, J.; Hjorth, J.; Malesani, D.; Barthelmy, S. D.; Gorosabel, J.; Masetti, N.; Roth, M. M.
2009-04-01
The long and relatively faint gamma-ray burst GRB 060605 detected by Swift/BAT lasted about 20 s. Its afterglow could be observed with Swift/XRT for nearly 1 day, while Swift/UVOT could detect the afterglow during the first 6 h after the event. Here, we report on integral field spectroscopy of its afterglow performed with PMAS/PPak mounted at the Calar Alto 3.5 m telescope. In addition, we report on a detailed analysis of XRT and UVOT data and on the results of deep late-time VLT observations that reveal the GRB host galaxy. We find that the burst occurred at a redshift of z = 3.773, possibly associated with a faint, RC = 26.4 ± 0.3 host. Based on the optical and X-ray data, we deduce information on the SED of the afterglow, the position of the cooling frequency in the SED, the nature of the circumburst environment, its collimation factor, and its energetics. We find that the GRB fireball was expanding into a constant-density medium and that the explosion was collimated with a narrow half-opening angle of about 2.4 degrees. The initial Lorentz factor of the fireball was about 250; however, its beaming-corrected energy release in the gamma-ray band was comparably low. The optical, X-ray afterglow, on the other hand, was rather luminous. Finally, we find that the data are consistent within the error bars with an achromatic evolution of the afterglow during the suspected jet break time at about 0.27 days after the burst. Based on observations collected at the German-Spanish Calar Alto Observatory in Spain (Programme F06-3.5-055) and at the European Southern Observatory, La Silla and Paranal, Chile (ESO Programme 177.D-0591).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felker, B.; Allen, S.; Bell, H.
1993-10-06
The MTX explored the plasma heating effects of 140 GHz microwaves from both Gyrotrons and from the IMP FEL wiggler. The Gyrotron was long pulse length (0.5 seconds maximum) and the FEL produced short-pulse length, high-peak power, single and burst modes of 140 GHz microwaves. Full-power operations of the IMP FEL wiggler were commenced in April of 1992 and continued into October of 1992. The Experimental Test Accelerator II (ETA-II) provided a 50-nanosecond, 6-MeV, 2--3 kAmp electron beam that was introduced co-linear into the IMP FEL with a 140 GHz Gyrotron master oscillator (MO). The FEL was able to amplify the MO signal from approximately 7 kW to peaks consistently in the range of 1--2 GW. This microwave pulse was transmitted into the MTX and allowed the exploration of the linear and non-linear effects of short pulse, intense power in the MTX plasma. Single pulses were used to explore and gain operating experience in the parameter space of the IMP FEL, and finally evaluate transmission and absorption in the MTX. Single-pulse operations were repeatable. After the MTX was shut down, burst-mode operations were successful at 2 kHz. This paper will describe the IMP FEL, the Microwave Transmission System to MTX, the diagnostics used for calorimetric measurements, and the operations of the entire microwave system. A discussion of correlated and uncorrelated errors that affect FEL performance will be made. Linear and non-linear absorption data of the microwaves in the MTX plasma will be presented.
Advanced optical components for next-generation photonic networks
NASA Astrophysics Data System (ADS)
Yoo, S. J. B.
2003-08-01
Future networks will require very high throughput, carrying dominantly data-centric traffic. The role of Photonic Networks employing all-optical systems will become increasingly important in providing scalable bandwidth, agile reconfigurability, and low-power consumptions in the future. In particular, the self-similar nature of data traffic indicates that packet switching and burst switching will be beneficial in the Next Generation Photonic Networks. While the natural conclusion is to pursue Photonic Packet Switching and Photonic Burst Switching systems, there are significant challenges in realizing such a system due to practical limitations in optical component technologies. Lack of a viable all-optical memory technology will continue to drive us towards exploring rapid reconfigurability in the wavelength domain. We will introduce and discuss the advanced optical component technologies behind the Photonic Packet Routing system designed and demonstrated at UC Davis. The system is capable of packet switching and burst switching, as well as circuit switching with 600 psec switching speed and scalability to 42 petabit/sec aggregated switching capacity. By utilizing a combination of rapidly tunable wavelength conversion and a uniform-loss cyclic frequency (ULCF) arrayed waveguide grating router (AWGR), the system is capable of rapidly switching the packets in wavelength, time, and space domains. The label swapping module inside the Photonic Packet Routing system containing a Mach-Zehnder wavelength converter and a narrow-band fiber Bragg-grating achieves all-optical label swapping with optical 2R (potentially 3R) regeneration while maintaining optical transparency for the data payload. By utilizing the advanced optical component technologies, the Photonic Packet Routing system successfully demonstrated error-free, cascaded, multi-hop photonic packet switching and routing with optical-label swapping. This paper will review the advanced optical component technologies and their role in the Next Generation Photonic Networks.
NASA Astrophysics Data System (ADS)
Cao, An-ye; Dou, Lin-ming; Wang, Chang-bin; Yao, Xiao-xiao; Dong, Jing-yuan; Gu, Yu
2016-11-01
Identification of precursory characteristics is a key issue for rock burst prevention. The aim of this research is to provide a reference for assessing rock burst risk and determining potential rock burst risk areas in coal mining. In this work, the microseismic multidimensional information for the identification of rock bursts and spatial-temporal pre-warning was investigated in a specific coalface which suffered high rock burst risk in a mining area near a large residual coal pillar. Firstly, microseismicity evolution prior to a disastrous rock burst was qualitatively analysed, and the abnormal clustering of seismic sources, abnormal variations in daily total energy release, and event counts can be regarded as precursors to rock burst. Secondly, passive tomographic imaging has been used to locate high seismic activity zones and assess rock burst hazard when the coalface passes through residual pillar areas. The results show that high-velocity or velocity anomaly regions correlated well with strong seismic activities in future mining periods and that passive tomography has the potential to describe, both quantitatively and periodically, hazardous regions and assess rock burst risk. Finally, the bursting strain energy index was further used for short-term spatial-temporal pre-warning of rock bursts. The temporal sequence curve and spatial contour nephograms indicate the status of the danger and the specific hazardous zones, and show that levels of rock burst risk can be quantitatively and rapidly analysed in time and space. The multidimensional precursory characteristic identification of rock bursts, including qualitative analysis and intermediate- and short-term quantitative predictions, can guide the choice of measures implemented to control rock bursts in the field, and provides a new approach to monitor and forecast rock bursts in space and time.
NASA Technical Reports Server (NTRS)
Lin, Lin; Kouveliotou, Chryssa; Gogus, Ersin; van der Horst, Alexander J.; Watts, Anna L.; Baring, Matthew G.; Kaneko, Yuki; Wijers, Ralph A. M. J.; Woods, Peter M.; Barthelmy, Scott;
2011-01-01
Swift/BAT detected the first burst from 1E 1841-045 in May 2010, with intermittent burst activity recorded through at least July 2011. Here we present Swift and Fermi/GBM observations of this burst activity and search for correlated changes to the persistent X-ray emission of the source. The T90 durations of the bursts range between 18 and 140 ms, comparable to other magnetar burst durations, while the energy released in each burst ranges between (0.8-25) x 10^38 erg, which is on the low side of SGR bursts. We find that the bursting activity did not have a significant effect on the persistent flux level of the source. We argue that the mechanism leading to this sporadic burst activity in 1E 1841-045 might not involve large scale restructuring (either crustal or magnetospheric) as seen in other magnetar sources.
Robust control of burst suppression for medical coma
NASA Astrophysics Data System (ADS)
Westover, M. Brandon; Kim, Seong-Eun; Ching, ShiNung; Purdon, Patrick L.; Brown, Emery N.
2015-08-01
Objective. Medical coma is an anesthetic-induced state of brain inactivation, manifest in the electroencephalogram by burst suppression. Feedback control can be used to regulate burst suppression, however, previous designs have not been robust. Robust control design is critical under real-world operating conditions, subject to substantial pharmacokinetic and pharmacodynamic parameter uncertainty and unpredictable external disturbances. We sought to develop a robust closed-loop anesthesia delivery (CLAD) system to control medical coma. Approach. We developed a robust CLAD system to control the burst suppression probability (BSP). We developed a novel BSP tracking algorithm based on realistic models of propofol pharmacokinetics and pharmacodynamics. We also developed a practical method for estimating patient-specific pharmacodynamics parameters. Finally, we synthesized a robust proportional integral controller. Using a factorial design spanning patient age, mass, height, and gender, we tested whether the system performed within clinically acceptable limits. Throughout all experiments we subjected the system to disturbances, simulating treatment of refractory status epilepticus in a real-world intensive care unit environment. Main results. In 5400 simulations, CLAD behavior remained within specifications. Transient behavior after a step in target BSP from 0.2 to 0.8 exhibited a rise time (the median (min, max)) of 1.4 [1.1, 1.9] min; settling time, 7.8 [4.2, 9.0] min; and percent overshoot of 9.6 [2.3, 10.8]%. Under steady state conditions the CLAD system exhibited a median error of 0.1 [-0.5, 0.9]%; inaccuracy of 1.8 [0.9, 3.4]%; oscillation index of 1.8 [0.9, 3.4]%; and maximum instantaneous propofol dose of 4.3 [2.1, 10.5] mg kg-1. The maximum hourly propofol dose was 4.3 [2.1, 10.3] mg kg-1 h-1. Performance fell within clinically acceptable limits for all measures. Significance. A CLAD system designed using robust control theory achieves clinically acceptable performance in the presence of realistic unmodeled disturbances and in spite of realistic model uncertainty, while maintaining infusion rates within acceptable safety limits.
Robust control of burst suppression for medical coma
Westover, M Brandon; Kim, Seong-Eun; Ching, ShiNung; Purdon, Patrick L; Brown, Emery N
2015-01-01
Objective Medical coma is an anesthetic-induced state of brain inactivation, manifest in the electroencephalogram by burst suppression. Feedback control can be used to regulate burst suppression, however, previous designs have not been robust. Robust control design is critical under real-world operating conditions, subject to substantial pharmacokinetic and pharmacodynamic parameter uncertainty and unpredictable external disturbances. We sought to develop a robust closed-loop anesthesia delivery (CLAD) system to control medical coma. Approach We developed a robust CLAD system to control the burst suppression probability (BSP). We developed a novel BSP tracking algorithm based on realistic models of propofol pharmacokinetics and pharmacodynamics. We also developed a practical method for estimating patient-specific pharmacodynamics parameters. Finally, we synthesized a robust proportional integral controller. Using a factorial design spanning patient age, mass, height, and gender, we tested whether the system performed within clinically acceptable limits. Throughout all experiments we subjected the system to disturbances, simulating treatment of refractory status epilepticus in a real-world intensive care unit environment. Main results In 5400 simulations, CLAD behavior remained within specifications. Transient behavior after a step in target BSP from 0.2 to 0.8 exhibited a rise time (the median (min, max)) of 1.4 [1.1, 1.9] min; settling time, 7.8 [4.2, 9.0] min; and percent overshoot of 9.6 [2.3, 10.8]%. Under steady state conditions the CLAD system exhibited a median error of 0.1 [−0.5, 0.9]%; inaccuracy of 1.8 [0.9, 3.4]%; oscillation index of 1.8 [0.9, 3.4]%; and maximum instantaneous propofol dose of 4.3 [2.1, 10.5] mg kg−1. The maximum hourly propofol dose was 4.3 [2.1, 10.3] mg kg−1 h−1. Performance fell within clinically acceptable limits for all measures. Significance A CLAD system designed using robust control theory achieves clinically acceptable performance in the presence of realistic unmodeled disturbances and in spite of realistic model uncertainty, while maintaining infusion rates within acceptable safety limits. PMID:26020243
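A minimal, illustrative sketch of the control idea common to both versions of this record: a proportional-integral controller drives an estimated burst suppression probability (BSP) toward a target. The one-compartment response model and the gain values below are placeholders, not the authors' pharmacokinetic/pharmacodynamic model or their tuned robust controller.

import numpy as np

def simulate_clad(target=0.8, minutes=30, dt=0.1, kp=2.0, ki=1.0, tau=3.0, gain=0.1):
    """Drive a first-order effect-site model x' = (-x + gain*u)/tau toward the
    target BSP; u is the non-negative infusion command from a PI controller."""
    n = int(minutes / dt)
    bsp = np.zeros(n)
    x, integral = 0.0, 0.0
    for k in range(1, n):
        error = target - x
        integral += error * dt
        u = max(0.0, kp * error + ki * integral)   # infusion rate, clipped at zero
        x += dt * (-x + gain * u) / tau
        bsp[k] = np.clip(x, 0.0, 1.0)
    return bsp

bsp = simulate_clad()
print(f"final BSP = {bsp[-1]:.2f} (target 0.8)")   # settles close to the target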
Post-Launch Analysis of Swift's Gamma-Ray Burst Detection Sensitivity
NASA Technical Reports Server (NTRS)
Band, David L.
2005-01-01
The dependence of Swift's detection sensitivity on a burst's temporal and spectral properties shapes the detected burst population. Using simplified models of the detector hardware and the burst trigger system I find that Swift is more sensitive to long, soft bursts than CGRO's BATSE, a reference mission because of its large burst database. Thus Swift has increased sensitivity in the parameter space region into which time dilation and spectral redshifting shift high redshift bursts.
Transforming the advanced lab: Part I - Learning goals
NASA Astrophysics Data System (ADS)
Zwickl, Benjamin; Finkelstein, Noah; Lewandowski, H. J.
2012-02-01
Within the physics education research community relatively little attention has been given to laboratory courses, especially at the upper-division undergraduate level. As part of transforming our senior-level Optics and Modern Physics Lab at the University of Colorado Boulder we are developing learning goals, revising curricula, and creating assessments. In this paper, we report on the establishment of our learning goals and a surrounding framework that have emerged from discussions with a wide variety of faculty, from a review of the literature on labs, and from identifying the goals of existing lab courses. Our goals go beyond those of specific physics content and apparatus, allowing instructors to personalize them to their contexts. We report on four broad themes and associated learning goals: Modeling (math-physics-data connection, statistical error analysis, systematic error, modeling of engineered "black boxes"), Design (of experiments, apparatus, programs, troubleshooting), Communication, and Technical Lab Skills (computer-aided data analysis, LabVIEW, test and measurement equipment).
X-Ray Reflection and an Exceptionally Long Thermonuclear Helium Burst from IGR J17062-6143
NASA Technical Reports Server (NTRS)
Keek, L.; Iwakiri, W.; Serino, M.; Ballantyne, D. R.; in’t Zand, J. J. M.; Strohmayer, T. E.
2017-01-01
Thermonuclear X-ray bursts from accreting neutron stars power brief but strong irradiation of their surroundings, providing a unique way to study accretion physics. We analyze MAXI/Gas Slit Camera and Swift/XRT spectra of a day-long flash observed from IGR J17062-6143 in 2015. It is a rare case of recurring bursts at a low accretion luminosity of 0.15% Eddington. Spectra from MAXI, Chandra, and NuSTAR observations taken between the 2015 burst and the previous one in 2012 are used to determine the accretion column. We find it to be consistent with the burst ignition column of 5 x 10^10 g cm^-2, which indicates that it is likely powered by burning in a deep helium layer. The burst flux is observed for over a day, and decays as a straight power law: F ∝ t^-1.15. The burst and persistent spectra are well described by thermal emission from the neutron star, Comptonization of this emission in a hot optically thin medium surrounding the star, and reflection off the photoionized accretion disk. At the burst peak, the Comptonized component disappears, when the burst may dissipate the Comptonizing gas, and it returns in the burst tail. The reflection signal suggests that the inner disk is truncated at approximately 10^2 gravitational radii before the burst, but may move closer to the star during the burst. At the end of the burst, the flux drops below the burst cooling trend for 2 days, before returning to the pre-burst level.
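A small sketch of the kind of fit behind the quoted decay law F ∝ t^-1.15: the power-law index is recovered by linear regression in log-log space. The synthetic light curve below is purely illustrative, not the paper's data.

import numpy as np

rng = np.random.default_rng(1)
t = np.logspace(-1, 0.2, 25)                                 # time since burst onset (days)
flux = 3e-9 * t**-1.15 * rng.lognormal(0, 0.05, t.size)      # erg/cm^2/s, synthetic tail

slope, intercept = np.polyfit(np.log10(t), np.log10(flux), 1)
print(f"fitted decay index = {slope:.2f}")                   # close to -1.15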
Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog
NASA Technical Reports Server (NTRS)
Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.
1995-01-01
A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.
Do gamma-ray burst sources repeat?
NASA Technical Reports Server (NTRS)
Meegan, C. A.; Hartmann, D. H.; Brainerd, J. J.; Briggs, M.; Paciesas, W. S.; Pendleton, G.; Kouveliotou, C.; Fishman, G.; Blumenthal, G.; Brock, M.
1994-01-01
The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al. 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistic and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the bursts cannot be excluded.
Burst Firing is a Neural Code in an Insect Auditory System
Eyherabide, Hugo G.; Rokem, Ariel; Herz, Andreas V. M.; Samengo, Inés
2008-01-01
Various classes of neurons alternate between high-frequency discharges and silent intervals. This phenomenon is called burst firing. To analyze burst activity in an insect system, grasshopper auditory receptor neurons were recorded in vivo for several distinct stimulus types. The experimental data show that both burst probability and burst characteristics are strongly influenced by temporal modulations of the acoustic stimulus. The tendency to burst, hence, is not only determined by cell-intrinsic processes, but also by their interaction with the stimulus time course. We study this interaction quantitatively and observe that bursts containing a certain number of spikes occur shortly after stimulus deflections of specific intensity and duration. Our findings suggest a sparse neural code where information about the stimulus is represented by the number of spikes per burst, irrespective of the detailed interspike-interval structure within a burst. This compact representation cannot be interpreted as a firing-rate code. An information-theoretical analysis reveals that the number of spikes per burst reliably conveys information about the amplitude and duration of sound transients, whereas their time of occurrence is reflected by the burst onset time. The investigated neurons encode almost half of the total transmitted information in burst activity. PMID:18946533
Self-Organization on Social Media: Endo-Exo Bursts and Baseline Fluctuations
Oka, Mizuki; Hashimoto, Yasuhiro; Ikegami, Takashi
2014-01-01
A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media. PMID:25329610
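A schematic sketch of the fluctuation-versus-burst analysis described above, with definitions that are assumptions rather than the paper's exact ones: measure the baseline fluctuation preceding a keyword's largest peak and the size of the burst response, then split bursts by a critical fluctuation threshold into putative endogenous and exogenous classes.

import numpy as np

def burst_vs_fluctuation(series, window=24):
    """Return (baseline fluctuation, burst size) for the largest peak in the series."""
    series = np.asarray(series, dtype=float)
    peak = int(np.argmax(series))
    baseline = series[max(0, peak - window):peak]
    fluctuation = baseline.std() if baseline.size else 0.0
    burst_size = series[peak] - (baseline.mean() if baseline.size else 0.0)
    return fluctuation, burst_size

def classify(fluctuation, critical_threshold):
    # below the critical threshold: endogenous; above it: exogenous (as argued above)
    return "endogenous" if fluctuation <= critical_threshold else "exogenous"

f, s = burst_vs_fluctuation([3, 4, 5, 4, 6, 5, 40, 12, 6], window=6)
print(f, s, classify(f, critical_threshold=2.0))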
NASA Astrophysics Data System (ADS)
Spitler, L. G.; Scholz, P.; Hessels, J. W. T.; Bogdanov, S.; Brazier, A.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J.; Ferdman, R. D.; Freire, P. C. C.; Kaspi, V. M.; Lazarus, P.; Lynch, R.; Madsen, E. C.; McLaughlin, M. A.; Patel, C.; Ransom, S. M.; Seymour, A.; Stairs, I. H.; Stappers, B. W.; van Leeuwen, J.; Zhu, W. W.
2016-03-01
Fast radio bursts are millisecond-duration astronomical radio pulses of unknown physical origin that appear to come from extragalactic distances. Previous follow-up observations have failed to find additional bursts at the same dispersion measure (that is, the integrated column density of free electrons between source and telescope) and sky position as the original detections. The apparent non-repeating nature of these bursts has led to the suggestion that they originate in cataclysmic events. Here we report observations of ten additional bursts from the direction of the fast radio burst FRB 121102. These bursts have dispersion measures and sky positions consistent with the original burst. This unambiguously identifies FRB 121102 as repeating and demonstrates that its source survives the energetic events that cause the bursts. Additionally, the bursts from FRB 121102 show a wide range of spectral shapes that appear to be predominantly intrinsic to the source and which vary on timescales of minutes or less. Although there may be multiple physical origins for the population of fast radio bursts, these repeat bursts with high dispersion measure and variable spectra specifically seen from the direction of FRB 121102 support an origin in a young, highly magnetized, extragalactic neutron star.
NASA Technical Reports Server (NTRS)
Kundu, M. R. (Editor); Gergely, T. E.
1980-01-01
Papers are presented in the areas of the radio characteristics of the quiet sun and active regions, the centimeter, meter and decameter wavelength characteristics of solar bursts, space observations of low-frequency bursts, theoretical interpretations of solar active regions and bursts, joint radio, visual and X-ray observations of active regions and bursts, and the similarities of stellar radio characteristics to solar radio phenomena. Specific topics include the centimeter and millimeter wave characteristics of the quiet sun, radio fluctuations arising upon the transit of shock waves through the transition region, microwave, EUV and X-ray observations of active region loops and filaments, interferometric observations of 35-GHz radio bursts, emission mechanisms for radio bursts, the spatial structure of microwave bursts, observations of type III bursts, the statistics of type I bursts, and the numerical simulation of type III bursts. Attention is also given to the theory of type IV decimeter bursts, Voyager observations of type II and III bursts at kilometric wavelengths, radio and whitelight observations of coronal transients, and the possibility of obtaining radio observations of current sheets on the sun.
Spitler, L G; Scholz, P; Hessels, J W T; Bogdanov, S; Brazier, A; Camilo, F; Chatterjee, S; Cordes, J M; Crawford, F; Deneva, J; Ferdman, R D; Freire, P C C; Kaspi, V M; Lazarus, P; Lynch, R; Madsen, E C; McLaughlin, M A; Patel, C; Ransom, S M; Seymour, A; Stairs, I H; Stappers, B W; van Leeuwen, J; Zhu, W W
2016-03-10
Fast radio bursts are millisecond-duration astronomical radio pulses of unknown physical origin that appear to come from extragalactic distances. Previous follow-up observations have failed to find additional bursts at the same dispersion measure (that is, the integrated column density of free electrons between source and telescope) and sky position as the original detections. The apparent non-repeating nature of these bursts has led to the suggestion that they originate in cataclysmic events. Here we report observations of ten additional bursts from the direction of the fast radio burst FRB 121102. These bursts have dispersion measures and sky positions consistent with the original burst. This unambiguously identifies FRB 121102 as repeating and demonstrates that its source survives the energetic events that cause the bursts. Additionally, the bursts from FRB 121102 show a wide range of spectral shapes that appear to be predominantly intrinsic to the source and which vary on timescales of minutes or less. Although there may be multiple physical origins for the population of fast radio bursts, these repeat bursts with high dispersion measure and variable spectra specifically seen from the direction of FRB 121102 support an origin in a young, highly magnetized, extragalactic neutron star.
Characterizing Oscillatory Bursts in Single-Trial EEG Data
NASA Technical Reports Server (NTRS)
Knuth, K. H.; Shah, A. S.; Lakatos, P.; Schroeder, C. E.
2004-01-01
Oscillatory bursts in numerous bands ranging from low (theta) to high frequencies (e.g., gamma) undoubtedly play an important role in cortical dynamics. Largely because of the inadequacy of existing analytic techniques, however, oscillatory bursts and their role in cortical processing remain poorly understood. To study oscillatory bursts effectively one must be able to isolate them and characterize them in the single trial. We describe a series of straightforward analysis techniques that produce useful indices of burst characteristics. First, stimulus-evoked responses are estimated using Differentially Variable Component Analysis (dVCA) and are subtracted from the single trial. The single-trial characteristics of the evoked responses are stored to identify possible correlations with burst activity. Time-frequency (T-F), or wavelet, analyses are then applied to the single-trial residuals. While T-F plots have been used in recent studies to identify and isolate bursts, we go further by fitting each burst in the T-F plot with a two-dimensional Gaussian. This provides a set of burst characteristics, such as center time, burst duration, center frequency, frequency dispersion, and amplitude, all of which contribute to the accurate characterization of the individual burst. The burst phase can also be estimated. Burst characteristics can be quantified with several standard techniques (e.g., histogramming and clustering), as well as Bayesian techniques (e.g., blocking), to allow a more parametric description of the characteristics of oscillatory bursts and the relationships of specific parameters to cortical excitability and stimulus integration.
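A minimal sketch, assuming NumPy and SciPy are available, of the characterization step described above: fit a two-dimensional Gaussian to a burst in a time-frequency plane to recover center time, center frequency, duration, frequency dispersion, and amplitude. The synthetic spectrogram is illustrative, not recorded data.

import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, t0, f0, sig_t, sig_f):
    t, f = coords
    return amp * np.exp(-0.5 * ((t - t0) / sig_t) ** 2
                        - 0.5 * ((f - f0) / sig_f) ** 2)

# Synthetic T-F plane with one burst centered at 0.4 s and 40 Hz
t = np.linspace(0, 1, 100)
f = np.linspace(1, 80, 80)
T, F = np.meshgrid(t, f)
tf_power = gauss2d((T, F), 5.0, 0.4, 40.0, 0.05, 4.0)
tf_power += 0.1 * np.random.default_rng(2).standard_normal(tf_power.shape)

p0 = [tf_power.max(), 0.5, 30.0, 0.1, 5.0]      # rough initial guess
popt, _ = curve_fit(gauss2d, (T.ravel(), F.ravel()), tf_power.ravel(), p0=p0)
amp, t0, f0, sig_t, sig_f = popt
print(f"center time {t0:.2f} s, center freq {f0:.1f} Hz, "
      f"duration ~{sig_t:.3f} s, dispersion ~{sig_f:.1f} Hz, amplitude {amp:.1f}")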
Performance analysis of signaling protocols on OBS switches
NASA Astrophysics Data System (ADS)
Kirci, Pinar; Zaim, A. Halim
2005-10-01
In this paper, Just-In-Time (JIT), Just-Enough-Time (JET) and Horizon signalling schemes for Optical Burst Switched Networks (OBS) are presented. These signaling schemes run over a core dWDM network, and a network architecture based on Optical Burst Switches (OBS) is proposed to support IP, ATM and Burst traffic. In IP and ATM traffic several packets are assembled in a single packet called a burst, and burst contention is handled by burst dropping. The burst length distribution in IP traffic is arbitrary between 0 and 1, and is fixed in ATM traffic at 0.5. Burst traffic, on the other hand, is arbitrary between 1 and 5. The Setup and Setup ack length distributions are arbitrary. We apply the Poisson model with rate λ and the Self-Similar model with Pareto distribution rate α to identify inter-arrival times in these protocols. We consider a communication between a source client node and a destination client node over an ingress and one or more intermediate switches. We use buffering only in the ingress node. The communication is based on single burst connections in which the connection is set up just before sending a burst and then closed as soon as the burst is sent. Our analysis accounts for several important parameters, including the burst setup, burst setup ack, keepalive messages and the optical switching protocol. We compare the performance of the three signalling schemes in terms of burst dropping probability under a range of network scenarios.
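An illustrative sketch, not the paper's simulator: generate burst inter-arrival times from a Poisson (exponential) model or a heavy-tailed Pareto model with the same mean, and estimate the burst dropping probability at an output port with a fixed number of wavelengths and no buffering. All parameter values are assumptions.

import numpy as np

def drop_probability(interarrivals, burst_lengths, n_wavelengths=8):
    """Bursts that find all wavelengths busy are dropped (no buffering)."""
    arrivals = np.cumsum(interarrivals)
    free_at = np.zeros(n_wavelengths)          # time at which each wavelength becomes free
    dropped = 0
    for t, dur in zip(arrivals, burst_lengths):
        idx = np.argmin(free_at)
        if free_at[idx] <= t:
            free_at[idx] = t + dur             # accept the burst on an idle wavelength
        else:
            dropped += 1                       # contention: drop the burst
    return dropped / arrivals.size

rng = np.random.default_rng(3)
n, lam, alpha = 100_000, 10.0, 1.5
lengths = rng.uniform(1.0, 5.0, n) / 10        # burst durations (arbitrary units)
poisson_gaps = rng.exponential(1.0 / lam, n)
pareto_gaps = (rng.pareto(alpha, n) + 1.0) * (alpha - 1.0) / (alpha * lam)   # same mean gap

print("Poisson drop probability:", drop_probability(poisson_gaps, lengths))
print("Pareto  drop probability:", drop_probability(pareto_gaps, lengths))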
Bursting as a source of non-linear determinism in the firing patterns of nigral dopamine neurons
Jeong, Jaeseung; Shi, Wei-Xing; Hoffman, Ralph; Oh, Jihoon; Gore, John C.; Bunney, Benjamin S.; Peterson, Bradley S.
2012-01-01
Nigral dopamine (DA) neurons in vivo exhibit complex firing patterns consisting of tonic single-spikes and phasic bursts that encode information for certain types of reward-related learning and behavior. Non-linear dynamical analysis has previously demonstrated the presence of a non-linear deterministic structure in complex firing patterns of DA neurons, yet the origin of this non-linear determinism remains unknown. In this study, we hypothesized that bursting activity is the primary source of non-linear determinism in the firing patterns of DA neurons. To test this hypothesis, we investigated the dimension complexity of inter-spike interval data recorded in vivo from bursting and non-bursting DA neurons in the chloral hydrate-anesthetized rat substantia nigra. We found that bursting DA neurons exhibited non-linear determinism in their firing patterns, whereas non-bursting DA neurons showed truly stochastic firing patterns. Determinism was also detected in the isolated burst and inter-burst interval data extracted from firing patterns of bursting neurons. Moreover, less bursting DA neurons in halothane-anesthetized rats exhibited higher dimensional spiking dynamics than do more bursting DA neurons in chloral hydrate-anesthetized rats. These results strongly indicate that bursting activity is the main source of low-dimensional, non-linear determinism in the firing patterns of DA neurons. This finding furthermore suggests that bursts are the likely carriers of meaningful information in the firing activities of DA neurons. PMID:22831464
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keek, L.; Heger, A., E-mail: laurens.keek@nasa.gov
Thermonuclear flashes of hydrogen and helium accreted onto neutron stars produce the frequently observed Type I X-ray bursts. It is the current paradigm that almost all material burns in a burst, after which it takes hours to accumulate fresh fuel for the next burst. In rare cases, however, bursts are observed with recurrence times as short as minutes. We present the first one-dimensional multi-zone simulations that reproduce this phenomenon. Bursts that ignite in a relatively hot neutron star envelope leave a substantial fraction of the fuel unburned at shallow depths. In the wake of the burst, convective mixing events driven by opacity bring this fuel down to the ignition depth on the observed timescale of minutes. There, unburned hydrogen mixes with the metal-rich ashes, igniting to produce a subsequent burst. We find burst pairs and triplets, similar to the observed instances. Our simulations reproduce the observed fraction of bursts with short waiting times of ∼30%, and demonstrate that short recurrence time bursts are typically less bright and of shorter duration.
A Non-Triggered Burst Supplement to the BATSE Gamma-Ray Burst Catalogs
NASA Technical Reports Server (NTRS)
Kommers, J.; Lewin, W. H.; Kouveliotou, C.; vanParadijs, J.; Pendleton, G. N.; Meegan, C. A.; Fishman, G. J.
1998-01-01
The Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory detects gamma-ray bursts (GRBs) with a real-time burst detection (or "trigger") system running onboard the spacecraft. Under some circumstances, however, a GRB may not activate the onboard burst trigger. For example, the burst may be too faint to exceed the onboard detection threshold, or it may occur while the onboard burst trigger is disabled for technical reasons. This paper is a catalog of such "non-triggered" GRBs that were detected in a search of the archival continuous data from BATSE. It lists 873 non-triggered bursts that were recorded between 1991 December 9.0 and 1997 December 17.0. For each burst, the catalog gives an estimated source direction, duration, peak flux, and fluence. Similar data are presented for 50 additional bursts of unknown origin that were detected in the 25-50 keV range; these events may represent the low-energy "tail" of the GRB spectral distribution. This catalog increases the number of GRBs detected with BATSE by 48% during the time period covered by the search.
UWB multi-burst transmit driver for averaging receivers
Dallum, Gregory E
2012-11-20
A multi-burst transmitter for ultra-wideband (UWB) communication systems generates a sequence of precisely spaced RF bursts from a single trigger event. There are two oscillators in the transmitter circuit, a gated burst rate oscillator and a gated RF burst or RF power output oscillator. The burst rate oscillator produces a relatively low frequency, i.e., MHz, square wave output for a selected transmit cycle, and drives the RF burst oscillator, which produces RF bursts of much higher frequency, i.e., GHz, during the transmit cycle. The frequency of the burst rate oscillator sets the spacing of the RF burst packets. The first oscillator output passes through a bias driver to the second oscillator. The bias driver conditions, e.g., level shifts, the signal from the first oscillator for input into the second oscillator, and also controls the length of each RF burst. A trigger pulse actuates a timing circuit, formed of a flip-flop and associated reset time delay circuit, that controls the operation of the first oscillator, i.e., how long it oscillates (which defines the transmit cycle).
NASA Technical Reports Server (NTRS)
Keek, L.; Heger, A.
2017-01-01
Thermonuclear flashes of hydrogen and helium accreted onto neutron stars produce the frequently observed Type I X-ray bursts. It is the current paradigm that almost all material burns in a burst, after which it takes hours to accumulate fresh fuel for the next burst. In rare cases, however, bursts are observed with recurrence times as short as minutes. We present the first one-dimensional multi-zone simulations that reproduce this phenomenon. Bursts that ignite in a relatively hot neutron star envelope leave a substantial fraction of the fuel unburned at shallow depths. In the wake of the burst, convective mixing events driven by opacity bring this fuel down to the ignition depth on the observed timescale of minutes. There, unburned hydrogen mixes with the metal-rich ashes, igniting to produce a subsequent burst. We find burst pairs and triplets, similar to the observed instances. Our simulations reproduce the observed fraction of bursts with short waiting times of approximately 30%, and demonstrate that short recurrence time bursts are typically less bright and of shorter duration.
Properties of the Second Outburst of the Bursting Pulsar (GRO J1744-28) as Observed with BATSE
NASA Technical Reports Server (NTRS)
Woods, P.; Kouveliotou, C.; vanParadijs, J.; Briggs, M. S.; Wilson, C. A.; Deal, K. J.; Harmon, B. A.; Fishman, G. J.; Lewin, W. H.; Kommers, J.
1998-01-01
One year after its discovery, the Bursting Pulsar (GRO J1744-28) went into outburst again, displaying the hard X-ray bursts and pulsations that make this source unique. We report on Burst and Transient Source Experiment (BATSE) observations of both the persistent and burst emission for this second outburst and draw comparisons to the first. The second outburst was smaller than the first in both duration and peak luminosity. The persistent flux, burst peak flux and burst fluence were all reduced in amplitude by a factor approximately 1.7. Despite these differences, the average burst occurrence rate and average burst durations were roughly the same through each outburst. Similar to the first outburst, no spectral evolution was found within bursts and the parameter alpha was very small at the start of the outburst (alpha = 2.1 +/- 1.7 on 1996 December 2). Although no spectral evolution was found within individual bursts, we find evidence for a small (20%) variation of the spectral temperature during the course of the second outburst.
NASA Astrophysics Data System (ADS)
Acuner, Zeynep; Ryde, Felix
2018-04-01
Many different physical processes have been suggested to explain the prompt gamma-ray emission in gamma-ray bursts (GRBs). Although there are examples of both bursts with photospheric and synchrotron emission origins, these distinct spectral appearances have not been generalized to large samples of GRBs. Here, we search for signatures of the different emission mechanisms in the full Fermi Gamma-ray Space Telescope/GBM (Gamma-ray Burst Monitor) catalogue. We use Gaussian Mixture Models to cluster bursts according to their parameters from the Band function (α, β, and Epk) as well as their fluence and T90. We find five distinct clusters. We further argue that these clusters can be divided into bursts of photospheric origin (2/3 of all bursts, divided into three clusters) and bursts of synchrotron origin (1/3 of all bursts, divided into two clusters). For instance, the cluster that contains predominantly short bursts is consistent with a photospheric emission origin. We discuss several reasons that can determine which cluster a burst belongs to: jet dissipation pattern and/or the jet content, or viewing angle.
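A minimal sketch, assuming scikit-learn is available, of the clustering approach described above: fit Gaussian Mixture Models to GRB descriptors (alpha, beta, Epk, fluence, T90) and choose the number of components, here by BIC. The synthetic matrix below stands in for the actual Fermi/GBM catalogue values.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Columns stand in for (alpha, beta, log10 Epk, log10 fluence, log10 T90);
# two synthetic groups replace the real catalogue entries.
X = np.vstack([rng.normal(0.0, 1.0, (1000, 5)),
               rng.normal(3.0, 1.0, (1000, 5))])

models = {k: GaussianMixture(n_components=k, covariance_type="full",
                             random_state=0).fit(X) for k in range(1, 9)}
bic = {k: m.bic(X) for k, m in models.items()}
best_k = min(bic, key=bic.get)           # lowest BIC wins
labels = models[best_k].predict(X)
print("selected number of clusters:", best_k)
print("cluster sizes:", np.bincount(labels))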
NASA Astrophysics Data System (ADS)
Yu, Maolin; Du, R.
2005-08-01
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence, much research has been carried out on it for economic gain. Searching through the literature, however, one finds that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, loading error, lubrication, etc. At present, few methods can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict the product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires a much smaller computational load than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box.
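A minimal sketch of Latin Hypercube Sampling driving a Monte Carlo sensitivity study, in the spirit of the approach above. The parameter ranges and the surrogate "quality" function are illustrative placeholders, not the paper's stamping FEM model.

import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """bounds: list of (low, high) per input; returns an (n_samples, n_dims) array."""
    dims = len(bounds)
    sample = np.empty((n_samples, dims))
    for j, (lo, hi) in enumerate(bounds):
        # one point per equal-probability stratum, randomly placed and shuffled
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        rng.shuffle(strata)
        sample[:, j] = lo + strata * (hi - lo)
    return sample

rng = np.random.default_rng(5)
bounds = [(0.8, 1.2),    # blank thickness (mm)    -- assumed range
          (0.05, 0.15),  # friction coefficient     -- assumed range
          (200, 300)]    # material strength (MPa)  -- assumed range
X = latin_hypercube(500, bounds, rng)

quality = 2.0 * X[:, 0] - 5.0 * X[:, 1] + 0.01 * X[:, 2]   # surrogate response, not FEM
# crude sensitivity: correlation of each input with the response
for name, col in zip(["thickness", "friction", "strength"], X.T):
    print(name, np.corrcoef(col, quality)[0, 1].round(2))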
Ferraro, Jeffrey P; Daumé, Hal; Duvall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J
2013-01-01
Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. The evaluated POS taggers drop in accuracy by 8.5-15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3-91.0% on clinical texts. ClinAdapt reports 93.2-93.9%. ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks.
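For reference, the Easy Adapt comparison method mentioned above rests on a simple feature-augmentation trick: every feature is duplicated into a shared copy and a domain-specific copy, so a single classifier can learn which regularities transfer across domains. A minimal sketch follows; the feature names are illustrative, not from the evaluated corpora.

def augment(features, domain):
    """features: dict of feature -> value; domain: 'source' or 'target'."""
    out = {}
    for name, value in features.items():
        out[f"shared:{name}"] = value          # always-on copy, shared across domains
        out[f"{domain}:{name}"] = value        # domain-specific copy
    return out

print(augment({"word=patient": 1, "suffix=ing": 1}, domain="target"))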
Wang, Junmei; Hou, Tingjun
2011-12-01
In this work, we have evaluated how well the general assisted model building with energy refinement (AMBER) force field performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, five organic compounds in aqueous solutions, four proteins in aqueous solutions, and nine organic compounds in nonaqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned errors and the root mean square errors are 0.137 and 0.171 × 10^-5 cm^2 s^-1, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for eight organic solvents with experimental data (R^2 = 0.784), four proteins in aqueous solutions (R^2 = 0.996), and nine organic compounds in nonaqueous solutions (R^2 = 0.834). The temperature-dependent behaviors of three solvents, namely, TIP3P water, dimethyl sulfoxide, and cyclohexane, have been studied. The major molecular dynamics (MD) settings, such as the sizes of simulation boxes and whether the coordinates of MD snapshots are wrapped into the primary simulation boxes, have been explored. We have concluded that our sampling strategy of averaging the mean square displacement collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. Copyright © 2011 Wiley Periodicals, Inc.
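A minimal sketch, not the authors' workflow, of the estimate underlying such predictions: in three dimensions the Einstein relation gives MSD(t) ≈ 6Dt, so D follows from a linear fit to the mean square displacement averaged over particles. The random-walk trajectory below stands in for unwrapped MD snapshots.

import numpy as np

rng = np.random.default_rng(6)
n_particles, n_steps, dt = 200, 1000, 1.0       # dt in ps (assumed units)
true_D = 0.25                                   # Angstrom^2 / ps, synthetic reference value
steps = rng.normal(scale=np.sqrt(2 * true_D * dt), size=(n_steps, n_particles, 3))
traj = np.cumsum(steps, axis=0)                 # unwrapped coordinates

t = np.arange(n_steps) * dt
msd = ((traj - traj[0]) ** 2).sum(axis=2).mean(axis=1)   # average over particles

slope = np.polyfit(t, msd, 1)[0]
print(f"estimated D = {slope / 6:.3f} (true {true_D})")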
Beta burst dynamics in Parkinson's disease OFF and ON dopaminergic medication.
Tinkhauser, Gerd; Pogosyan, Alek; Tan, Huiling; Herz, Damian M; Kühn, Andrea A; Brown, Peter
2017-11-01
Exaggerated basal ganglia beta activity (13-35 Hz) is commonly found in patients with Parkinson's disease and can be suppressed by dopaminergic medication, with the degree of suppression being correlated with the improvement in motor symptoms. Importantly, beta activity is not continuously elevated, but fluctuates to give beta bursts. The percentage number of longer beta bursts in a given interval is positively correlated with clinical impairment in Parkinson's disease patients. Here we determine whether the characteristics of beta bursts are dependent on dopaminergic state. Local field potentials were recorded from the subthalamic nucleus of eight Parkinson's disease patients during temporary lead externalization during surgery for deep brain stimulation. The recordings took place with the patient quietly seated following overnight withdrawal of levodopa and after administration of levodopa. Beta bursts were defined by applying a common amplitude threshold and burst characteristics were compared between the two drug conditions. The amplitude of beta bursts, indicative of the degree of local neural synchronization, progressively increased with burst duration. Treatment with levodopa limited this evolution leading to a relative increase of shorter, lower amplitude bursts. Synchronization, however, was not limited to local neural populations during bursts, but also, when such bursts were cotemporaneous across the hemispheres, was evidenced by bilateral phase synchronization. The probability of beta bursts and the proportion of cotemporaneous bursts were reduced by levodopa. The percentage number of longer beta bursts in a given interval was positively related to motor impairment, while the opposite was true for the percentage number of short duration beta bursts. Importantly, the decrease in burst duration was also correlated with the motor improvement. In conclusion, we demonstrate that long duration beta bursts are associated with an increase in local and interhemispheric synchronization. This may compromise information coding capacity and thereby motor processing. Dopaminergic activity limits this uncontrolled beta synchronization by terminating long duration beta bursts, with positive consequences on network state and motor symptoms. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.
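A minimal sketch of the burst-definition step described above; the threshold choice, filter settings, and data are assumptions, not the paper's pipeline. The beta-band amplitude envelope is thresholded to define bursts, and the distribution of burst durations is summarized, which is the quantity compared OFF and ON medication.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_burst_durations(lfp, fs, percentile=75):
    # band-pass 13-35 Hz, then take the amplitude envelope via the Hilbert transform
    b, a = butter(3, [13 / (fs / 2), 35 / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, lfp)))
    above = envelope > np.percentile(envelope, percentile)   # common amplitude threshold
    durations, run = [], 0
    for flag in above:
        if flag:
            run += 1
        elif run:
            durations.append(run / fs)
            run = 0
    if run:
        durations.append(run / fs)
    return np.array(durations)

fs = 1000
rng = np.random.default_rng(7)
lfp_off = rng.standard_normal(60 * fs)   # placeholder for a subthalamic recording
durations = beta_burst_durations(lfp_off, fs)
print(f"{durations.size} bursts; "
      f"{100 * np.mean(durations > 0.4):.1f}% longer than 400 ms")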
NASA Technical Reports Server (NTRS)
2006-01-01
Gamma-Ray bursts, the extreme explosions that mark the birth of black holes, come in two flavors, long bursts lasting a few seconds or more, and short bursts lasting for less than a second. The mechanisms giving rise to these two types of bursts were, for a long time, unknown to astronomers. But a series of breakthroughs starting with BeppoSAX, HETE, and Swift gave astronomers some clues and confidence about the nature of long and short bursts. Long bursts mark the collapse of a young, extremely massive star into a black hole; short bursts mark the formation of a black hole by a merger of neutron stars (or perhaps a neutron star with a black hole to form a larger black hole). But a new observation has clouded this clear dichotomy. The picture above is an X-ray image of a gamma-ray burst, GRB 060614, taken by Swift's X-ray Telescope. This burst lasted more than 100 seconds, clearly showing that it's a long burst. But follow-up observations of the burst did not show the tell-tale signatures of a supernova explosion which should be produced by the collapse of a large star. Furthermore this burst occurred in a galaxy which has very few extremely massive stars. Does this hybrid burst represent an entirely new mechanism behind these titanic explosions? The hunt is on.
The Second Swift BAT Gamma-Ray Burst Catalog
NASA Technical Reports Server (NTRS)
Barthelmy, S. D.; Baumgartner, W. H.; Cummings, J. R.; Fenimore, E. E.; Gehrels, N.; Krimm, H. A.; Markwardt, C. B.; Palmer, D. M.; Parsons, A. M.; Sato, G.;
2010-01-01
We present the second Swift Burst Alert Telescope (BAT) catalog of gamma-ray bursts (GRBs), which contains 476 bursts detected by the BAT between 2004 December 19 and 2009 December 21. This catalog (hereafter the BAT2 catalog) presents burst trigger time, location, 90% error radius, duration, fluence, peak flux, time-averaged spectral parameters and time-resolved spectral parameters measured by the BAT. In the correlation study of various observed parameters extracted from the BAT prompt emission data, we distinguish among long-duration GRBs (L-GRBs), short-duration GRBs (S-GRBs), and short-duration GRBs with extended emission (S-GRBs with E.E.) to investigate differences in the prompt emission properties. The fractions of L-GRBs, S-GRBs and S-GRBs with E.E. in the catalog are 89%, 8% and 2%, respectively. We compare the BAT prompt emission properties with the BATSE, BeppoSAX and HETE-2 GRB samples. We also correlate the observed prompt emission properties with the redshifts for the GRBs with known redshift. The BAT T90 and T50 durations peak at 70 s and 30 s, respectively. We confirm that the spectra of the BAT S-GRBs are generally harder than those of the L-GRBs. The time-averaged spectra of the BAT S-GRBs with E.E. are similar to those of the L-GRBs, whereas the spectra of the initial short spikes of the S-GRBs with E.E. are similar to those of the S-GRBs. We show that the BAT GRB samples are significantly softer than the BATSE bright GRBs, and that the time-averaged observed Epeak of the BAT GRBs peaks at 80 keV, a significantly lower energy than that of the BATSE sample, which peaks at 320 keV. The time-averaged spectral properties of the BAT GRB sample are similar to those of the HETE-2 GRB samples. By time-resolved spectral analysis, we find that 10% of the BAT observed photon indices are outside the allowed region of the synchrotron shock model. The observed durations of the BAT high-redshift GRBs are not systematically longer than those of the moderate-redshift GRBs. Furthermore, the observed spectra of the BAT high-redshift GRBs are similar to or harder than those of the moderate-redshift GRBs. The T90 and T50 distributions, measured in the 140-220 keV band in the GRB rest frame for the BAT known-redshift GRBs, peak at 19 s and 8 s, respectively. We also provide an update on the status of the on-orbit BAT calibrations.
Analysis of variability in the burst oscillations of the accreting millisecond pulsar XTE J1814-338
NASA Technical Reports Server (NTRS)
Watts, Anna L.; Strohmayer, Tod E.; Markwardt, Craig B.
2005-01-01
The accreting millisecond pulsar XTE J1814-338 exhibits oscillations at the known spin frequency during Type I X-ray bursts. The properties of the burst oscillations reflect the nature of the thermal asymmetry on the stellar surface. We present an analysis of the variability of the burst oscillations of this source, focusing on three characteristics: fractional amplitude, harmonic content, and frequency. Fractional amplitude and harmonic content constrain the size, shape, and position of the emitting region, whilst variations in frequency indicate motion of the emitting region on the neutron star surface. We examine both long-term variability over the course of the outburst and short-term variability during the bursts. For most of the bursts, fractional amplitude is consistent with that of the accretion pulsations, implying a low degree of fuel spread. There is, however, a population of bursts whose fractional amplitudes are substantially lower, implying a higher degree of fuel spread, possibly forced by the explosive burning front of a precursor burst. For the first harmonic, substantial differences between the burst and accretion pulsations suggest that hotspot geometry is not the only mechanism giving rise to harmonic content in the latter. Fractional amplitude variability during the bursts is low; we can only rule out the hypothesis that the fractional amplitude remains constant at the 1-sigma level for bursts that do not exhibit photospheric radius expansion (PRE). There are no significant variations in frequency in any of the bursts except for the one burst that exhibits PRE. This burst exhibits a highly significant but small (approximately 0.1 Hz) drop in frequency in the burst rise. The timescale of the frequency shift is slower than simple burning-layer expansion models predict, suggesting that other mechanisms may be at work.
X-Ray Reflection and an Exceptionally Long Thermonuclear Helium Burst from IGR J17062-6143
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keek, L.; Strohmayer, T. E.; Iwakiri, W.
Thermonuclear X-ray bursts from accreting neutron stars power brief but strong irradiation of their surroundings, providing a unique way to study accretion physics. We analyze MAXI/Gas Slit Camera and Swift/XRT spectra of a day-long flash observed from IGR J17062-6143 in 2015. It is a rare case of recurring bursts at a low accretion luminosity of 0.15% Eddington. Spectra from MAXI, Chandra, and NuSTAR observations taken between the 2015 burst and the previous one in 2012 are used to determine the accretion column. We find it to be consistent with the burst ignition column of 5×10^10 g cm^-2, which indicates that the burst is likely powered by burning in a deep helium layer. The burst flux is observed for over a day, and decays as a straight power law: F ∝ t^-1.15. The burst and persistent spectra are well described by thermal emission from the neutron star, Comptonization of this emission in a hot optically thin medium surrounding the star, and reflection off the photoionized accretion disk. At the burst peak, the Comptonized component disappears, when the burst may dissipate the Comptonizing gas, and it returns in the burst tail. The reflection signal suggests that the inner disk is truncated at ∼10^2 gravitational radii before the burst, but may move closer to the star during the burst. At the end of the burst, the flux drops below the burst cooling trend for 2 days, before returning to the pre-burst level.
The application of network synthesis to repeating classical gamma-ray bursts
NASA Technical Reports Server (NTRS)
Hurley, K.; Kouveliotou, C.; Fishman, J.; Meegan, C.; Laros, J.; Klebesadel, R.
1995-01-01
It has been suggested that the Burst and Transient Source Experiment (BATSE) gamma-ray burst catalog contains several groups of bursts clustered in space or in space and time, which provide evidence that a substantial fraction of the classical gamma-ray burst sources repeat. Because many of the bursts in these groups are weak, they are not directly detected by the Ulysses GRB experiment. We apply the network synthesis method to these events to test the repeating burst hypothesis. Although we find no evidence for repeating sources, the method must be applied under more general conditions before reaching any definite conclusions about the existence of classical gamma-ray burst repeating sources.
Do gamma-ray burst sources repeat?
NASA Technical Reports Server (NTRS)
Meegan, Charles A.; Hartmann, Dieter H.; Brainerd, J. J.; Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey; Kouveliotou, Chryssa; Fishman, Gerald; Blumenthal, George; Brock, Martin
1995-01-01
The demonstration of repeated gamma-ray bursts from an individual source would severely constrain burst source models. Recent reports (Quashnock and Lamb, 1993; Wang and Lingenfelter, 1993) of evidence for repetition in the first BATSE burst catalog have generated renewed interest in this issue. Here, we analyze the angular distribution of 585 bursts of the second BATSE catalog (Meegan et al., 1994). We search for evidence of burst recurrence using the nearest and farthest neighbor statistic and the two-point angular correlation function. We find the data to be consistent with the hypothesis that burst sources do not repeat; however, a repeater fraction of up to about 20% of the observed bursts cannot be excluded.
Hybrid data storage system in an HPC exascale environment
Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.
2015-08-18
A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
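The striping scheme described in this abstract is easy to picture in a few lines. The sketch below is only an illustration of round-robin striping across two buffer nodes, not the patented implementation; the stripe size and the two-node layout are assumptions.

```python
def stripe(data: bytes, stripe_size: int = 1 << 20):
    """Round-robin striping of an I/O request across two burst buffer nodes.
    The 1 MiB stripe size and the two-node layout are illustrative choices."""
    stripes = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    to_first_node = stripes[0::2]    # stripes 0, 2, 4, ... go to the first burst buffer node
    to_second_node = stripes[1::2]   # stripes 1, 3, 5, ... go to the second burst buffer node
    return to_first_node, to_second_node
```

Each node can then process its portion independently before the data is drained to the backing store.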
Shah, Priya; Wyatt, Jeremy C; Makubate, Boikanyo; Cross, Frank W
2011-01-01
Objective Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. Design A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. Measurements The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. Results Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). Conclusions Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. PMID:21836158
Homologous and Homologous like Microwave Solar Radio Bursts
NASA Astrophysics Data System (ADS)
Trevisan, R. H.; Sawant, H. S.; Kalman, B.; Gesztelyi, L.
1990-11-01
Solar radio observations at 1.6 GHz were carried out in July 1985 using the 13.7 m diameter Itapetinga antenna with a time resolution of 3 ms. Homologous bursts, with total durations of about a couple of seconds and repeating every few seconds, were observed in association with homologous Hα flares. These Hα flares had periodicities of about 40 min. The observed long periodicities are attributed to the oscillation of prominences, and the short periods to the removal of plasma from the field interaction zone. Also observed are "homologous-like" bursts. These are double-peaked bursts with the same time profile repeating in time. In addition, the ratio of the total duration of the bursts to the time difference between the peaks of the bursts remains constant. Morphological studies of these bursts are presented. Key words: SUN-BURSTS - SUN-FLARE
NASA Astrophysics Data System (ADS)
Han, Xiujing; Zhang, Yi; Bi, Qinsheng; Kurths, Jürgen
2018-04-01
This paper reports two novel bursting patterns, turnover-of-pitchfork-hysteresis-induced bursting and compound pitchfork-hysteresis bursting, demonstrated for the Duffing system with multiple-frequency parametric excitations. Typically, a hysteresis behavior between the origin and the non-zero equilibria of the fast subsystem can be observed due to delayed pitchfork bifurcation. Based on numerical analysis, we show that the stable equilibrium branches related to the non-zero equilibria resulting from the pitchfork bifurcation may develop twists and turns. The novel bursting pattern, turnover-of-pitchfork-hysteresis-induced bursting, is revealed accordingly. In particular, we show that additional pitchfork bifurcation points may appear in the fast subsystem under certain parameter conditions. This creates multiple delay-induced hysteresis behaviors and helps us to reveal the other novel bursting pattern, the compound pitchfork-hysteresis bursting. In addition, the effects of parameters on the bursting patterns are studied to explore the relation between these two novel bursting patterns.
Litovsky, Ruth Y.; Godar, Shelly P.
2010-01-01
The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves
NASA Astrophysics Data System (ADS)
Misra, R.; Bora, A.; Dewangan, G.
2018-04-01
Temporal analysis of radiation from astrophysical sources like active galactic nuclei, X-ray binaries, and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase, and time lag between two shorter light curves when they cannot be divided into segments. The estimates presented here thus allow for the analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the active galactic nucleus Akn 564. The example shows that the estimates presented here allow for the analysis of light curves with a relatively small (∼1000) number of points.
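The quantities the error expressions apply to are the standard sample cross-correlation and the lag of its peak. The NumPy sketch below shows one conventional way to compute them for two evenly sampled light curves; the function name and normalization choices are illustrative and are not taken from the paper.

```python
import numpy as np

def cross_correlation_lag(x, y, dt, max_lag):
    """Sample cross-correlation of two evenly sampled light curves and the
    time lag at its peak (max_lag is in samples and must be < len(x))."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = np.array([
        np.mean(x[max(0, -k):n - max(0, k)] * y[max(0, k):n - max(0, -k)])
        for k in lags
    ])
    peak = lags[np.argmax(ccf)]
    return lags * dt, ccf, peak * dt   # lag axis, correlation, time lag at the peak
```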
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of heteroscedasticity in the model residuals; and (3) three likelihood functions, NSE, the Generalized Error Distribution with BC (BC-GED), and the Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum-variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach reproduces baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
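For reference, the equivalence claimed in step (1) holds because both quantities are monotone in the error sum of squares. The block below states the standard textbook forms; the notation is mine, not the paper's.

```latex
% NSE and the Gaussian i.i.d. log-likelihood (standard forms, my notation):
% both are monotone in the error sum of squares, hence rank parameter sets identically.
\[
\mathrm{NSE} \;=\; 1-\frac{\sum_{t=1}^{N}\bigl(Q_{\mathrm{obs},t}-Q_{\mathrm{sim},t}\bigr)^{2}}
                         {\sum_{t=1}^{N}\bigl(Q_{\mathrm{obs},t}-\bar{Q}_{\mathrm{obs}}\bigr)^{2}},
\qquad
\ell \;=\; -\frac{N}{2}\ln\bigl(2\pi\sigma^{2}\bigr)
          -\frac{1}{2\sigma^{2}}\sum_{t=1}^{N}\bigl(Q_{\mathrm{obs},t}-Q_{\mathrm{sim},t}\bigr)^{2}.
\]
% Box-Cox transformation applied to reduce heteroscedasticity of the residuals:
\[
z^{(\lambda)} \;=\;
\begin{cases}
\dfrac{z^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[6pt]
\ln z, & \lambda = 0.
\end{cases}
\]
```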
Solar Radio Bursts and Space Weather
NASA Technical Reports Server (NTRS)
Gopalswamy, Natchimuthuk,
2012-01-01
Radio bursts from the Sun are produced by electrons accelerated to relativistic energies by physical processes on the Sun such as solar flares and coronal mass ejections (CMEs). The radio bursts are thus good indicators of solar eruptions. Three types of nonthermal radio bursts are generally associated with CMEs. Type III bursts are due to accelerated electrons propagating along open magnetic field lines. The electrons are thought to be accelerated at the reconnection region beneath the erupting CME, although there is another view that they may be accelerated at the CME-driven shock. Type II bursts are due to electrons accelerated at the shock front. Type II bursts are also excellent indicators of solar energetic particle (SEP) events because the same shock is thought to accelerate both electrons and ions. There is a hierarchical relationship between the wavelength range of type II bursts and the CME kinetic energy. Finally, type IV bursts are due to electrons trapped in moving or stationary structures. The low-frequency stationary type IV bursts are observed occasionally in association with very fast CMEs. These bursts originate from flare loops behind the erupting CME and hence indicate tall loops. This paper presents a summary of radio bursts, their relation to CMEs, and how they can be useful for space weather predictions.
Variable spreading layer in 4U 1608-52 during thermonuclear X-ray bursts in the soft state
NASA Astrophysics Data System (ADS)
Kajava, J. J. E.; Koljonen, K. I. I.; Nättilä, J.; Suleimanov, V.; Poutanen, J.
2017-11-01
Thermonuclear (type-I) X-ray bursts, observed from neutron star (NS) low-mass X-ray binaries (LMXB), provide constraints on NS masses and radii and consequently the equation of state of NS cores. In such analyses, various assumptions are made without knowing if they are justified. We have analysed X-ray burst spectra from the LMXB 4U 1608-52, with the aim of studying how the different persistent emission components react to the bursts. During some bursts in the soft spectral state we find that there are two variable components: one corresponding to the burst blackbody component and another optically thick Comptonized component. We interpret the latter as the spreading layer between the NS surface and the accretion disc, which is not present during the hard-state bursts. We propose that the spectral changes during the soft-state bursts are driven by the spreading layer that could cover almost the entire NS in the brightest phases due to the enhanced radiation pressure support provided by the burst, and that the layer subsequently returns to its original state during the burst decay. When deriving the NS mass and radius using the soft-state bursts two assumptions are therefore not met: the NS is not entirely visible and the burst emission is reprocessed in the spreading layer, causing distortions of the emitted spectrum. For these reasons, the NS mass and radius constraints using the soft-state bursts are different compared to the ones derived using the hard-state bursts.
Bursting as a source of non-linear determinism in the firing patterns of nigral dopamine neurons.
Jeong, Jaeseung; Shi, Wei-Xing; Hoffman, Ralph; Oh, Jihoon; Gore, John C; Bunney, Benjamin S; Peterson, Bradley S
2012-11-01
Nigral dopamine (DA) neurons in vivo exhibit complex firing patterns consisting of tonic single spikes and phasic bursts that encode information for certain types of reward-related learning and behavior. Non-linear dynamical analysis has previously demonstrated the presence of a non-linear deterministic structure in the complex firing patterns of DA neurons, yet the origin of this non-linear determinism remains unknown. In this study, we hypothesized that bursting activity is the primary source of non-linear determinism in the firing patterns of DA neurons. To test this hypothesis, we investigated the dimension complexity of inter-spike interval data recorded in vivo from bursting and non-bursting DA neurons in the chloral hydrate-anesthetized rat substantia nigra. We found that bursting DA neurons exhibited non-linear determinism in their firing patterns, whereas non-bursting DA neurons showed truly stochastic firing patterns. Determinism was also detected in the isolated burst and inter-burst interval data extracted from the firing patterns of bursting neurons. Moreover, the less-bursting DA neurons in halothane-anesthetized rats exhibited higher-dimensional spiking dynamics than did the more-bursting DA neurons in chloral hydrate-anesthetized rats. These results strongly indicate that bursting activity is the main source of low-dimensional, non-linear determinism in the firing patterns of DA neurons. This finding furthermore suggests that bursts are the likely carriers of meaningful information in the firing activities of DA neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Solar U- and J- Bursts at the Frequencies 10-30MHz
NASA Astrophysics Data System (ADS)
Dorovskyy, V. V.; Melnik, V. N.; Konovalenko, A. A.; Abranin, E. P.; Rucker, H. O.; Lecacheux, A.
2006-08-01
In the present report we discuss the results of observations of solar U- and J-bursts over the frequency range 10-30 MHz, obtained within the framework of an international observational campaign in June - August 2004 at the radio telescope UTR-2 (Kharkov, Ukraine). We succeeded in observing these types of bursts for the first time at such low frequencies thanks to the combination of the large effective area of the radio telescope and the high sensitivity of the new back-end. During June - August 2004 about 30 U- and J-bursts were registered, and only 5 of them were confidently identified as U-bursts, which may indicate the relative rarity of the latter at these frequencies. Both isolated bursts and sequences of bursts were observed. On average the turning frequencies lay in the range 10-22 MHz, which corresponds to arch heliocentric heights of 1.6-2.2 solar radii. In some sequences the burst turning frequency was stable, which may indicate arch stability, while in others the turning frequency tended to vary from burst to burst. Durations of U- and J-bursts did not differ from those of usual Type III bursts (3-7 s), while the drift rates of the ascending arm (on average -1 MHz/s) were somewhat lower than those of ordinary Type III bursts in this range. The harmonic structure of U- and J-bursts, and also Jb-J pairs (analogous to IIIb-III pairs), were registered. L-shaped bursts (Leblanc and Hoyos, 1985) were also recorded. A specific feature of L-shaped bursts is a prolonged zero-drift region on their dynamic spectra. The sizes and configurations of the arches were estimated on the basis of the obtained data. Possible explanations of the observed properties of U- and J-bursts are discussed.
Type III Radio Burst Duration and SEP Events
NASA Technical Reports Server (NTRS)
Gopalswamy, N.; Makela, P.; Xie, H.
2010-01-01
Long-duration (>15 min), low-frequency (<14 MHz) type III radio bursts have been reported to be indicative of solar energetic particle (SEP) events. We measured the durations of type III bursts associated with large SEP events of solar cycle 23. The type III durations at 1 MHz are distributed symmetrically, yielding a mean value of approximately 33 min (median = 32 min) for the large SEP events. When the SEP events with ground level enhancement (GLE) are considered, the distribution is essentially unchanged (mean = 32 min, median = 30 min). To test the importance of type III bursts in indicating SEP events, we considered a set of six type III bursts from the same active region (AR 10588) whose durations fit the "long duration" criterion. We analyzed the coronal mass ejections (CMEs), flares, and type II radio bursts associated with the type III bursts. The CMEs were of similar speeds and the flares were of similar size and duration. One of the type III bursts was not associated with a type II burst in the metric or longer wavelength domains; this burst also lacked an SEP event at energies >25 MeV. The 1-MHz duration of this type III burst (28 min) is near the median value of type III durations found for gradual SEP events and GLE events, yet there was no sign of an SEP event. On the other hand, two other type III bursts from the same active region had similar durations but were accompanied by WAVES type II bursts; these bursts were also accompanied by SEP events detected by SOHO/ERNE. This study suggests that the type III burst duration may not be a good indicator of an SEP event, consistent with the statistical study of Cliver and Ling (2009, ApJ).
NASA Astrophysics Data System (ADS)
He, Ying; Puckett, Elbridge Gerry; Billen, Magali I.
2017-02-01
Mineral composition has a strong effect on the properties of rocks and is an essentially non-diffusive property in the context of large-scale mantle convection. Due to this non-diffusive nature and the origin of compositionally distinct regions in the Earth, the boundaries between distinct regions can be nearly discontinuous. While there are different methods for tracking rock composition in numerical simulations of mantle convection, one must consider trade-offs among computational cost, accuracy, and ease of implementation when choosing an appropriate method. Existing methods can be computationally expensive, cause over-/undershoots, smear sharp boundaries, or are not easily adapted to tracking multiple compositional fields. Here we present a Discontinuous Galerkin method with a bound-preserving limiter (abbreviated as DG-BP) using a second-order, strong-stability-preserving Runge-Kutta time discretization for the advection of non-diffusive fields. First, we show that the method is bound-preserving for a point-wise divergence-free flow (e.g., a prescribed circular flow in a box). However, using standard adaptive mesh refinement (AMR) there is an overshoot error (2%) because the cell average is not preserved during mesh coarsening. The effectiveness of the algorithm for convection-dominated flows is demonstrated using the falling box problem. We find that the DG-BP method maintains sharper compositional boundaries (3-5 elements) than an artificial entropy-viscosity method (6-15 elements), although the over-/undershoot errors are similar. When used with AMR, the DG-BP method results in fewer degrees of freedom due to smaller regions of mesh refinement in the neighborhood of the discontinuity. However, using Taylor-Hood elements and a uniform mesh there is an over-/undershoot error on the order of 0.0001%, and this error increases to 0.01-0.10% when using AMR. Therefore, for research problems in which a continuous field method is desired, the DG-BP method can provide improved tracking of sharp compositional boundaries. For applications in which strict bound-preserving behavior is desired, an element that provides a divergence-free condition on the weak formulation (e.g., Raviart-Thomas) and an improved mesh coarsening scheme for the AMR are required.
Geocoding rural addresses in a community contaminated by PFOA: a comparison of methods.
Vieira, Verónica M; Howard, Gregory J; Gallagher, Lisa G; Fletcher, Tony
2010-04-21
Location is often an important component of exposure assessment, and positional errors in geocoding may result in exposure misclassification. In rural areas, successful geocoding to a street address is limited by rural route boxes. Communities have assigned physical street addresses to rural route boxes as part of E911 readdressing projects for improved emergency response. Our study compared automated and E911 methods for recovering and geocoding valid street addresses and assessed the impact of positional errors on exposure classification. The current study is a secondary analysis of existing data that included 135 addresses self-reported by participants of a rural community study who were exposed via public drinking water to perfluorooctanoate (PFOA) released from a DuPont facility in Parkersburg, West Virginia. We converted pre-E911 to post-E911 addresses using two methods: automated ZP4 address-correction software with the U.S. Postal Service LACS database and E911 data provided by Wood County, West Virginia. Addresses were geocoded using TeleAtlas, an online commercial service, and ArcView with StreetMap Premium North America NAVTEQ 2008 enhanced street dataset. We calculated positional errors using GPS measurements collected at each address and assessed exposure based on geocoded location in relation to public water pipes. The county E911 data converted 89% of the eligible addresses compared to 35% by ZP4 LACS. ArcView/NAVTEQ geocoded more addresses (n = 130) and with smaller median distance between geocodes and GPS coordinates (39 meters) than TeleAtlas (n = 85, 188 meters). Without E911 address conversion, 25% of the geocodes would have been more than 1000 meters from the true location. Positional errors in TeleAtlas geocoding resulted in exposure misclassification of seven addresses whereas ArcView/NAVTEQ methods did not misclassify any addresses. Although the study was limited by small numbers, our results suggest that the use of county E911 data in rural areas increases the rate of successful geocoding. Furthermore, positional accuracy of rural addresses in the study area appears to vary by geocoding method. In a large epidemiological study investigating the health effects of PFOA-contaminated public drinking water, this could potentially result in exposure misclassification if addresses are incorrectly geocoded to a street segment not serviced by public water.
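The positional errors referred to above are great-circle distances between a geocoded point and the GPS fix collected at the same address. A minimal Python sketch of that computation follows; the helper names and the 1000 m flagging threshold are illustrative, not the study's code.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between a geocoded point and a GPS fix."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def large_error_addresses(pairs, tol_m=1000.0):
    """Indices of (geocode, gps) coordinate pairs whose positional error exceeds tol_m."""
    return [i for i, (geo, gps) in enumerate(pairs) if haversine_m(*geo, *gps) > tol_m]
```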
Wagner, Andreas; Rosen, William
2014-08-06
Innovations in biological evolution and in technology have many common features. Some of them involve similar processes, such as trial and error and horizontal information transfer. Others describe analogous outcomes such as multiple independent origins of similar innovations. Yet others display similar temporal patterns such as episodic bursts of change separated by periods of stasis. We review nine such commonalities, and propose that the mathematical concept of a space of innovations, discoveries or designs can help explain them. This concept can also help demolish a persistent conceptual wall between technological and biological innovation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
X-ray bursts: Observation versus theory
NASA Technical Reports Server (NTRS)
Lewin, W. H. G.
1981-01-01
Results of various observations of common type I X-ray bursts are discussed with respect to the theory of thermonuclear flashes in the surface layers of accreting neutron stars. Topics covered include burst profiles; irregular burst intervals; rise and decay times and the role of hydrogen; the accuracy of source distances; accuracy in radii determination; radius increase early in the burst; the super Eddington limit; temperatures at burst maximum; and the role of the magnetic field.
Gamma-Ray Bursts in the Swift Era
NASA Technical Reports Server (NTRS)
Gehrels, Neil; Ramirez-Ruiz, E.; Fox, D. B.
2010-01-01
With its rapid-response capability and multiwavelength complement of instruments, the Swift satellite has transformed our physical understanding of gamma-ray bursts. Providing high-quality observations of hundreds of bursts, and facilitating a wide range of follow-up observations within seconds of each event, Swift has revealed an unforeseen richness in observed burst properties, shed light on the nature of short-duration bursts, and helped realize the promise of gamma-ray bursts as probes of the processes and environments of star formation out to the earliest cosmic epochs. These advances have opened new perspectives on the nature and properties of burst central engines, interactions with the burst environment from microparsec to gigaparsec scales, and the possibilities for non-photonic signatures. Our understanding of these extreme cosmic sources has thus advanced substantially; yet more than forty years after their discovery, gamma-ray bursts continue to present major challenges on both observational and theoretical fronts.
The Five Year Fermi/GBM Magnetar Burst Catalog
NASA Astrophysics Data System (ADS)
Collazzi, A. C.; Kouveliotou, C.; van der Horst, A. J.; Younes, G. A.; Kaneko, Y.; Göğüş, E.; Lin, L.; Granot, J.; Finger, M. H.; Chaplin, V. L.; Huppenkothen, D.; Watts, A. L.; von Kienlin, A.; Baring, M. G.; Gruber, D.; Bhat, P. N.; Gibby, M. H.; Gehrels, N.; McEnery, J.; van der Klis, M.; Wijers, R. A. M. J.
2015-05-01
Since launch in 2008, the Fermi Gamma-ray Burst Monitor (GBM) has detected many hundreds of bursts from magnetar sources. While the vast majority of these bursts have been attributed to several known magnetars, there is also a small sample of magnetar-like bursts of unknown origin. Here, we present the Fermi/GBM magnetar catalog, providing the results of the temporal and spectral analyses of 440 magnetar bursts with high temporal and spectral resolution. This catalog covers the first five years of GBM magnetar observations, from 2008 July to 2013 June. We provide durations, spectral parameters for various models, fluences, and peak fluxes for all the bursts, as well as a detailed temporal analysis for SGR J1550-5418 bursts. Finally, we suggest that some of the bursts of unknown origin are associated with the newly discovered magnetar 3XMM J185246.6+0033.7.
Wang, Shan-shan; Wei, Chun-ling; Liu, Zhi-qiang; Ren, Wei
2011-02-25
Burst firing of dopaminergic neurons in the ventral tegmental area (VTA) induces a large transient increase in synaptic dopamine (DA) release and is thus considered a reward-related signal. But the mechanisms of burst generation in dopaminergic neurons still remain unclear. This experiment investigated the burst firing of VTA dopaminergic neurons in rat midbrain slices perfused with carbachol and L-glutamate, individually or simultaneously, to understand the neurotransmitter mechanism underlying burst generation. The results showed that bath application of carbachol (10 μmol/L) and pulse application of L-glutamate (3 mmol/L) each induced burst firing in dopaminergic neurons. Co-application of carbachol and L-glutamate induced burst firing in VTA dopaminergic cells that could not be induced to burst by either chemical alone. These results indicate that carbachol and L-glutamate co-regulate burst firing of dopaminergic neurons.
Langdon, Angela J; Breakspear, Michael; Coombes, Stephen
2012-12-01
The minimal integrate-and-fire-or-burst neuron model succinctly describes both tonic firing and postinhibitory rebound bursting of thalamocortical cells in the sensory relay. Networks of integrate-and-fire-or-burst (IFB) neurons with slow inhibitory synaptic interactions have been shown to support stable rhythmic states, including globally synchronous and cluster oscillations, in which network-mediated inhibition cyclically generates bursting in coherent subgroups of neurons. In this paper, we introduce a reduced IFB neuronal population model to study synchronization of inhibition-mediated oscillatory bursting states to periodic excitatory input. Using numerical methods, we demonstrate the existence and stability of 1:1 phase-locked bursting oscillations in the sinusoidally forced IFB neuronal population model. Phase locking is shown to arise when periodic excitation is sufficient to pace the onset of bursting in an IFB cluster without counteracting the inhibitory interactions necessary for burst generation. Phase-locked bursting states are thus found to destabilize when periodic excitation increases in strength or frequency. Further study of the IFB neuronal population model with pulse-like periodic excitatory input illustrates that this synchronization mechanism generalizes to a broad range of n:m phase-locked bursting states across both globally synchronous and clustered oscillatory regimes.
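For readers unfamiliar with the IFB model, the sketch below integrates a single integrate-and-fire-or-burst neuron with a forward Euler step, in the spirit of the minimal model referenced above. The parameter values and the hyperpolarize-then-depolarize protocol are illustrative assumptions, not the paper's population model.

```python
import numpy as np

def simulate_ifb(I, dt=0.1, C=2.0, gl=0.035, vl=-65.0, gt=0.07, vt=120.0,
                 vh=-60.0, vtheta=-35.0, vreset=-50.0, tau_minus=20.0, tau_plus=100.0):
    """Euler integration of one integrate-and-fire-or-burst neuron.
    I is the applied current sampled every dt; all parameter values are illustrative."""
    v, h = vl, 0.0
    spike_times = []
    for i, Ii in enumerate(I):
        # low-threshold calcium current: active only above vh, gated by the slow variable h
        it = gt * h * (v - vt) if v >= vh else 0.0
        v += dt * (Ii - gl * (v - vl) - it) / C
        # h inactivates above vh and de-inactivates (recovers) below vh
        h += dt * (-h / tau_minus if v >= vh else (1.0 - h) / tau_plus)
        if v >= vtheta:                    # integrate-and-fire spike and reset
            spike_times.append(i * dt)
            v = vreset
    return np.array(spike_times)

# hyperpolarize to de-inactivate the calcium current, then depolarize: a rebound burst
dt = 0.1
t = np.arange(0.0, 1500.0, dt)
I = np.where(t < 600.0, -1.0, 0.5)
print(simulate_ifb(I, dt=dt))
```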
NASA Astrophysics Data System (ADS)
Melnik, V. N.; Konovalenko, A. A.; Rucker, H. O.; Brazhenko, A. I.; Briand, C.; Dorovskyy, V. V.; Zarka, P.; Denis, L.; Bulatzen, V. G.; Frantzusenko, A. V.; Stanislavskyy, A. A.
2012-04-01
From 25 June till 12 August 2011, sporadic solar radio emission was observed simultaneously by three separate radio telescopes: UTR-2 (Kharkov, Ukraine), URAN-2 (Poltava, Ukraine), and NDA (Nancay, France). During these observations several type II bursts with double and triple harmonics were registered, as well as type II bursts with complex herringbone structure. The events of particular interest were the type II bursts registered on 9 and 11 August 2011. These bursts had opposite signs of circular polarization in different parts of their dynamic spectra. In our opinion, these emissions came from different parts of the shock propagating through the solar corona. We also observed groups of type III bursts merged into one burst, type III bursts with triple harmonics, and type III bursts with "split" polarization. In addition, some unusual solar bursts were registered: storms of strange narrow-band (up to 500 kHz) bursts with a high polarization degree (about 80%), decameter spikes of extremely short durations (200-300 ms), and "tadpole-like" bursts with durations of 1-2 s and polarization degrees up to 60%.
The Fermi-GBM Three-year X-Ray Burst Catalog
NASA Astrophysics Data System (ADS)
Jenke, P. A.; Linares, M.; Connaughton, V.; Beklen, E.; Camero-Arranz, A.; Finger, M. H.; Wilson-Hodge, C. A.
2016-08-01
The Fermi Gamma-ray Burst Monitor (GBM) is an all-sky gamma-ray monitor well known in the gamma-ray burst (GRB) community. Although GBM excels in detecting the hard, bright extragalactic GRBs, its sensitivity above 8 keV and its all-sky view make it an excellent instrument for the detection of rare, short-lived Galactic transients. In 2010 March, we initiated a systematic search for transients using GBM data. We conclude this phase of the search by presenting a three-year catalog of 1084 X-ray bursts. Using spectral analysis, location, and spatial distributions we classified the 1084 events into 752 thermonuclear X-ray bursts, 267 transient events from accretion flares and X-ray pulses, and 65 untriggered gamma-ray bursts. All thermonuclear bursts have peak blackbody temperatures broadly consistent with photospheric radius expansion (PRE) bursts. We find an average rate of 1.4 PRE bursts per day, integrated over all Galactic bursters within about 10 kpc. These include 33 and 10 bursts from the ultra-compact X-ray binaries 4U 0614+09 and 2S 0918-549, respectively. We discuss these recurrence times and estimate the total mass ejected by PRE bursts in our Galaxy.
Burst suppression probability algorithms: state-space methods for tracking EEG burst suppression
NASA Astrophysics Data System (ADS)
Chemali, Jessica; Ching, ShiNung; Purdon, Patrick L.; Solt, Ken; Brown, Emery N.
2013-10-01
Objective. Burst suppression is an electroencephalogram pattern in which bursts of electrical activity alternate with an isoelectric state. This pattern is commonly seen in states of severely reduced brain activity such as profound general anesthesia, anoxic brain injuries, hypothermia and certain developmental disorders. Devising accurate, reliable ways to quantify burst suppression is an important clinical and research problem. Although thresholding and segmentation algorithms readily identify burst suppression periods, analysis algorithms require long intervals of data to characterize burst suppression at a given time and provide no framework for statistical inference. Approach. We introduce the concept of the burst suppression probability (BSP) to define the brain's instantaneous propensity of being in the suppressed state. To conduct dynamic analyses of burst suppression we propose a state-space model in which the observation process is a binomial model and the state equation is a Gaussian random walk. We estimate the model using an approximate expectation maximization algorithm and illustrate its application in the analysis of rodent burst suppression recordings under general anesthesia and a patient during induction of controlled hypothermia. Main result. The BSP algorithms track burst suppression on a second-to-second time scale, and make possible formal statistical comparisons of burst suppression at different times. Significance. The state-space approach suggests a principled and informative way to analyze burst suppression that can be used to monitor, and eventually to control, the brain states of patients in the operating room and in the intensive care unit.
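A minimal sketch of the state-space structure described above: a Gaussian random walk driving the suppression probability, with binomial observations in each time bin. The filter shown is a generic bootstrap particle filter used as a stand-in for the authors' approximate EM estimator; the logit link, noise scale, and bin size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# generative model (parameterization assumed):
#   state:   x_t = x_{t-1} + eps_t,  eps_t ~ N(0, sigma^2)      (Gaussian random walk)
#   observe: n_t ~ Binomial(m, p_t), p_t = 1 / (1 + exp(-x_t))  (suppressed samples per bin)
T, m, sigma = 300, 20, 0.15
x = np.cumsum(rng.normal(0.0, sigma, T))
p_true = 1.0 / (1.0 + np.exp(-x))
n_obs = rng.binomial(m, p_true)

# bootstrap particle filter for the burst suppression probability (BSP)
N = 2000
particles = np.zeros(N)
bsp = np.empty(T)
for t in range(T):
    particles = particles + rng.normal(0.0, sigma, N)      # propagate the random walk
    p = np.clip(1.0 / (1.0 + np.exp(-particles)), 1e-12, 1 - 1e-12)
    logw = n_obs[t] * np.log(p) + (m - n_obs[t]) * np.log1p(-p)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    bsp[t] = np.sum(w * p)                                  # filtered BSP estimate
    particles = particles[rng.choice(N, N, p=w)]            # multinomial resampling
```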
NASA Astrophysics Data System (ADS)
Rimskaya-Korsavkova, L. K.
2017-07-01
To find possible reasons for the midlevel elevation of the Weber fraction in intensity discrimination of a tone burst, a comparison was performed among complementary distributions of the spike activity of an ensemble of auditory-nerve fibers: the distribution of the time instants at which spikes occur, the distribution of interspike intervals, and the autocorrelation function. The properties of these distributions were examined in a poststimulus histogram, an interspike-interval histogram, and an autocorrelation histogram, all obtained from the response of an ensemble of model auditory-nerve fibers to a complex consisting of a noise burst and a target tone burst. Two configurations were used: in the first, the peak amplitude of the tone burst was varied and the noise amplitude was fixed; in the other, the tone-burst amplitude was fixed and the noise amplitude was varied. The noise could precede or follow the tone burst. The noise and tone-burst durations, as well as the interval between them, were fixed; the carrier frequency was 4 kHz and corresponded to the characteristic frequencies of the model fibers. The profiles of all of these histograms had two maxima. The values and positions of the maxima in the poststimulus histogram corresponded to the amplitudes and relative timing of the noise and the tone burst. The maximum that occurred in response to the tone burst could serve as a basis for the formation of the loudness of the latter (explicit loudness). However, the positions of the maxima in the other two histograms did not depend on the positions of the tone burst and the noise within the complex. The first maximum fell at short intervals and combined intervals corresponding to the noise and tone-burst durations. The second maximum fell at intervals corresponding to the delay of the tone burst with respect to the noise, and its value was proportional to whichever of the noise amplitude or the tone-burst amplitude was smaller in the complex. An increase in the tone-burst or noise amplitude caused nonlinear variations in the two maxima and in the ratio between them. The size of the first maximum in the interspike-interval distribution could serve as a basis for the formation of the loudness of the masked tone burst (implicit loudness), and the size of the second maximum for the formation of the intensity of the periodicity pitch of the complex. The auditory effect of the midlevel enhancement of tone-burst loudness could be the result of variations in the implicit tone-burst loudness caused by variations in the tone-burst or noise intensity. The reason for the elevation of the Weber fraction could be a competitive interaction between such subjective qualities as the explicit and implicit tone-burst loudness and the intensity of the periodicity pitch of the complex.
Orbit and sampling requirements: TRMM experience
NASA Technical Reports Server (NTRS)
North, Gerald
1993-01-01
The Tropical Rainfall Measuring Mission (TRMM) concept originated in 1984. Its overall goal is to produce datasets that can be used in the improvement of general circulation models. A primary objective is a multi-year data stream of monthly averages of rain rate over 500 km boxes over the tropical oceans. Vertical distributions of the hydrometeors, related to latent heat profiles, and the diurnal cycle of rain rates are secondary products believed to be accessible. The mission is sponsored jointly by the U.S. and Japan. TRMM is an approved mission with launch set for 1997. There are many retrieval and ground truth issues still being studied for TRMM, but here we concentrate on sampling, since it is the single largest term in the error budget. The TRMM orbit plane is inclined by 35 degrees to the equator, which leads to a precession of the visits to a given grid box through the local hours of the day, requiring three to six weeks to complete the diurnal cycle, depending on latitude. For sampling studies we can consider the swath width to be about 700 km.
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
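A toy version of the category-based sampling idea: draw a prescribed fraction of the sample points from each category and reweight by the category probabilities. The sketch below collapses the eight categories to four and uses a made-up process rate, so it illustrates only the estimator, not SILHS itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def category_index(cloud, precip):
    """0: clear & dry, 1: cloud only, 2: precip only, 3: cloud & precip."""
    return (cloud > 0).astype(int) + 2 * (precip > 0).astype(int)

def importance_average(rate_fn, cloud, precip, alloc, n_samples=64):
    """Estimate the grid-box mean of rate_fn by drawing a prescribed fraction of
    sample points from each category and weighting by the category probability."""
    cat = category_index(cloud, precip)
    q = np.array([(cat == k).mean() for k in range(4)])   # category probabilities
    est = 0.0
    for k in range(4):
        idx = np.flatnonzero(cat == k)
        n_k = int(round(alloc[k] * n_samples))
        if q[k] == 0 or n_k == 0:
            continue
        pick = rng.choice(idx, size=n_k, replace=True)    # sample within the category
        est += q[k] * rate_fn(cloud[pick], precip[pick]).mean()
    return est

# toy subgrid population and a nonlinear "microphysics" rate (illustrative only)
cloud = np.maximum(rng.normal(0.1, 0.3, 10000), 0.0)
precip = np.maximum(rng.normal(-0.2, 0.3, 10000), 0.0)
rate = lambda c, r: r * np.sqrt(c + 1e-6)
alloc = np.array([0.1, 0.2, 0.2, 0.5])                    # oversample rainy, cloudy points
print(importance_average(rate, cloud, precip, alloc), rate(cloud, precip).mean())
```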
Using Arduino microcontroller boards to measure response latencies.
Schubert, Thomas W; D'Ausilio, Alessandro; Canto, Rosario
2013-12-01
Latencies of buttonpresses are a staple of cognitive science paradigms. Often keyboards are employed to collect buttonpresses, but their imprecision and variability decrease test power and increase the risk of false positives. Response boxes and data acquisition cards are precise, but expensive and inflexible, alternatives. We propose using open-source Arduino microcontroller boards as an inexpensive and flexible alternative. These boards connect to standard experimental software using a USB connection and a virtual serial port, or by emulating a keyboard. In our solution, an Arduino measures response latencies after being signaled the start of a trial, and communicates the latency and response back to the PC over a USB connection. We demonstrated the reliability, robustness, and precision of this communication in six studies. Test measures confirmed that the error added to the measurement had an SD of less than 1 ms. Alternatively, emulation of a keyboard results in similarly precise measurement. The Arduino performs as well as a serial response box, and better than a keyboard. In addition, our setup allows for the flexible integration of other sensors, and even actuators, to extend the cognitive science toolbox.
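On the host side, reading latencies from such a board can be as simple as one serial exchange per trial. The pyserial sketch below assumes a one-byte trial-start command and a comma-separated reply; the port name, baud rate, and message format are illustrative assumptions, not the authors' protocol.

```python
import serial  # pyserial

def run_trial(port):
    """Signal trial start to the board and read back (latency_ms, button).
    The 'T' command and the 'latency,button' reply format are assumed for illustration."""
    port.write(b"T")                          # tell the Arduino the trial has started
    reply = port.readline().decode().strip()  # e.g. "432,1" sent back by the board
    latency_ms, button = reply.split(",")
    return int(latency_ms), int(button)

if __name__ == "__main__":
    # serial port name and baud rate depend on the local setup
    with serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=5.0) as ser:
        print(run_trial(ser))
```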
Interpretable inference on the mixed effect model with the Box-Cox transformation.
Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M
2017-07-10
We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provide a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of possible model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at a specified occasion in the context of mixed-effects models for repeated measures analysis in randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
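The "model median" above has a closed form because the Box-Cox transformation is monotone, so the mean on the transformed scale maps back to a median on the original scale. The expressions below are standard Box-Cox algebra in my own notation; the delta-method variance step is an assumption, not necessarily the paper's exact procedure.

```latex
% Box-Cox back-transformation for the model median (notation assumed, lambda != 0):
\[
y^{(\lambda)}=\frac{y^{\lambda}-1}{\lambda},
\qquad
\widehat{\operatorname{med}}\,(Y \mid g) \;=\; \bigl(\lambda\,\hat{\mu}_{g}+1\bigr)^{1/\lambda},
\qquad
\widehat{\Delta} \;=\; \widehat{\operatorname{med}}\,(Y \mid 1)-\widehat{\operatorname{med}}\,(Y \mid 0),
\]
% where \hat{\mu}_{g} is the estimated marginal mean of the transformed response for
% treatment group g at the specified occasion. A delta-method step applied to the
% robust covariance of the estimates then yields a variance for \widehat{\Delta}.
```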
The rotational phase dependence of magnetar bursts
NASA Astrophysics Data System (ADS)
Elenbaas, C.; Watts, A. L.; Huppenkothen, D.
2018-05-01
The trigger for the short bursts observed in γ-rays from many magnetar sources remains unknown. One particular open question in this context is whether burst emission is localized to a single active region or spread over a larger area across the neutron star. While several observational studies have attempted to investigate this question by looking at the phase dependence of burst properties, results have been mixed. At the same time, it is not obvious a priori that bursts from a localized active region would actually give rise to a detectable phase dependence, taking into account issues such as geometry, relativistic effects, and intrinsic burst properties such as brightness and duration. In this paper, we build a simple theoretical model to investigate the circumstances under which the latter effects could affect the detectability of a dependence of burst emission on rotational phase. We find that even for strongly phase-dependent emission, inferred burst properties may not show a rotational phase dependence, depending on the geometry of the system and the observer. Furthermore, the observed properties of bursts with durations as short as 10-20 per cent of the spin period can vary strongly depending on the rotational phase at which the burst was emitted. We also show that the detectability of a rotational phase dependence depends strongly on the minimum number of bursts observed, and find that existing burst samples may simply be too small to rule out a phase dependence.
X-Ray Bursts from the Transient Magnetar Candidate XTE J1810-197
NASA Technical Reports Server (NTRS)
Kouveliotou, Chryssa; Woods, Peter M.; Gavriil, Fotis P.; Kaspi, Victoria M.; Roberts, Mallory S. E.; Ibrahim, Alaa; Markwardt, Craig B.; Swank, Jean H.; Finger, Mark H.
2005-01-01
We have discovered four X-ray bursts, recorded with the Rossi X-ray Timing Explorer Proportional Counter Array between 2003 September and 2004 April, that we show to originate from the transient magnetar candidate XTE J1810-197. The burst morphologies consist of a short spike or multiple spikes lasting approximately 1 s each, followed by extended tails of emission during which the pulsed flux from XTE J1810-197 is significantly higher. The burst spikes are likely correlated with the pulse maxima, having a chance probability of a random phase distribution of 0.4%. The burst spectra are best fit by a blackbody with temperatures of 4-8 keV, considerably harder than the persistent X-ray emission. During the X-ray tails following these bursts, the temperature rapidly cools as the flux declines, maintaining a constant emitting radius after the initial burst peak. The temporal and spectral characteristics of these bursts closely resemble the bursts seen from 1E 1048.1-5937 and a subset of the bursts detected from 1E 2259+586, thus establishing XTE J1810-197 as a magnetar candidate. The bursts detected from these three objects are sufficiently similar to one another, yet significantly different from those seen from soft gamma repeaters, that they likely represent a new class of bursts from magnetar candidates exclusive (thus far) to the anomalous X-ray pulsar-like sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Rock bursts. 57.3461 Section 57.3461 Mineral...-Underground Only § 57.3461 Rock bursts. (a) Operators of mines which have experienced a rock burst shall— (1) Within twenty four hours report to the nearest MSHA office each rock burst which: (i) Causes persons to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Rock bursts. 57.3461 Section 57.3461 Mineral...-Underground Only § 57.3461 Rock bursts. (a) Operators of mines which have experienced a rock burst shall— (1) Within twenty four hours report to the nearest MSHA office each rock burst which: (i) Causes persons to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Rock bursts. 57.3461 Section 57.3461 Mineral...-Underground Only § 57.3461 Rock bursts. (a) Operators of mines which have experienced a rock burst shall— (1) Within twenty four hours report to the nearest MSHA office each rock burst which: (i) Causes persons to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Rock bursts. 57.3461 Section 57.3461 Mineral...-Underground Only § 57.3461 Rock bursts. (a) Operators of mines which have experienced a rock burst shall— (1) Within twenty four hours report to the nearest MSHA office each rock burst which: (i) Causes persons to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Rock bursts. 57.3461 Section 57.3461 Mineral...-Underground Only § 57.3461 Rock bursts. (a) Operators of mines which have experienced a rock burst shall— (1) Within twenty four hours report to the nearest MSHA office each rock burst which: (i) Causes persons to...
A Nontriggered Burst Supplement to the BATSE Gamma-Ray Burst Catalogs
NASA Technical Reports Server (NTRS)
Kommers, Jefferson M.; Lewin, Walter H. G.; Kouveliotou, Chryssa; vanParadijs, Jan; Pendleton, Geoffrey N.; Meegan, Charles A.; Fishman, Gerald J.
2001-01-01
The Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory detects gamma-ray bursts (GRBs) with a real-time burst detection (or "trigger") system running onboard the spacecraft. Under some circumstances, however, a GRB may not activate the on-board burst trigger. For example, the burst may be too faint to exceed the on-board detection threshold, or it may occur while the on-board burst trigger is disabled for technical reasons. This paper describes a catalog of 873 "nontriggered" GRBs that were detected in a search of the archival continuous data from BATSE recorded between 1991 December 9.0 and 1997 December 17.0. For each burst, the catalog gives an estimated source direction, duration, peak flux, and fluence. Similar data are presented for 50 additional bursts of unknown origin that were detected in the 25-50 keV range; these events may represent the low-energy "tail" of the GRB spectral distribution. This catalog increases the number of GRBs detected with BATSE by 48% during the time period covered by the search.
Different Types of X-Ray Bursts from GRS 1915+105 and Their Origin
NASA Astrophysics Data System (ADS)
Yadav, J. S.; Rao, A. R.; Agrawal, P. C.; Paul, B.; Seetha, S.; Kasturirangan, K.
1999-06-01
We report X-ray observations of the Galactic X-ray transient source GRS 1915+105 with the pointed proportional counters of the Indian X-ray Astronomy Experiment (IXAE) onboard the Indian satellite IRS-P3, which show remarkable richness in temporal variability. The observations were carried out on 1997 June 12-29 and August 7-10, in the energy range of 2-18 keV and revealed the presence of very intense X-ray bursts. All the observed bursts have a slow exponential rise, a sharp linear decay, and broadly can be put in two classes: irregular and quasi-regular bursts in one class, and regular bursts in the other. The regular bursts are found to have two distinct timescales and to persist over extended durations. There is a strong correlation between the preceding quiescent time and the burst duration for the quasi-regular and irregular bursts. No such correlation is found for the regular bursts. The ratio of average flux during the burst time to the average flux during the quiescent phase is high and variable for the quasi-regular and irregular bursts, while it is low and constant for the regular bursts. We present a comprehensive picture of the various types of bursts observed in GRS 1915+105 in the light of the recent theories of advective accretion disks. We suggest that the peculiar bursts that we have seen are characteristic of the change of state of the source. The source can switch back and forth between the low-hard state and the high-soft state near critical accretion rates in a very short timescale, giving rise to the irregular and quasi-regular bursts. The fast timescale for the transition of the state is explained by invoking the appearance and disappearance of the advective disk in its viscous timescale. The periodicity of the regular bursts is explained by matching the viscous timescale with the cooling timescale of the postshock region. A test of the model is presented using the publicly available 13-60 keV RXTE/PCA data for irregular and regular bursts concurrent with our observations. It is found that the 13-60 keV flux relative to the 2-13 keV flux shows clear evidence for state change between the quiescent phase and the burst phase. The value of this ratio during burst is consistent with the values observed during the high-soft state seen on 1997 August 19, while its value during quiescent phase is consistent with the values observed during the low-hard state seen on 1997 May 8.
Dietrich, Susanne; Hertrich, Ingo; Müller-Dahlhaus, Florian; Ackermann, Hermann; Belardinelli, Paolo; Desideri, Debora; Seibold, Verena C; Ziemann, Ulf
2018-01-01
The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of pre-SMA was tested for accelerated speech comprehension by inducing a transient "virtual lesion" using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to (pre-baseline), (2) 10 min after (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s). Speech comprehension was quantified by the percentage of correctly reproduced speech material. For high speech rates, subjects showed decreased performance after cTBS of pre-SMA. Regarding the error pattern, the number of incorrect words without any semantic or phonological similarity to the target context increased, while related words decreased. Thus, the transient impairment of pre-SMA seems to affect its inhibitory function that normally eliminates erroneous speech material prior to speaking or, in case of perception, prior to encoding into a semantically/pragmatically meaningful message.
Tosun, Tuğçe; Berkay, Dilara; Sack, Alexander T; Çakmak, Yusuf Ö; Balcı, Fuat
2017-08-01
Decisions are made based on the integration of available evidence. The noise in evidence accumulation leads to a particular speed-accuracy tradeoff in decision-making, which can be modulated and optimized by adaptive decision threshold setting. Given the effect of pre-SMA activity on striatal excitability, we hypothesized that the inhibition of pre-SMA would lead to higher decision thresholds and an increased accuracy bias. We used offline continuous theta burst stimulation to assess the effect of transient inhibition of the right pre-SMA on the decision processes in a free-response two-alternative forced-choice task within the drift diffusion model framework. Participants became more cautious and set higher decision thresholds following right pre-SMA inhibition compared with inhibition of the control site (vertex). Increased decision thresholds were accompanied by an accuracy bias with no effects on post-error choice behavior. Participants also exhibited higher drift rates as a result of pre-SMA inhibition compared with vertex inhibition. These results, in line with the striatal theory of speed-accuracy tradeoff, provide evidence for the functional role of pre-SMA activity in decision threshold modulation. Our results also suggest that the pre-SMA might be part of the brain network associated with sensory evidence integration.
Ott, Derek V M; Ullsperger, Markus; Jocham, Gerhard; Neumann, Jane; Klein, Tilmann A
2011-07-15
The prefrontal cortex is known to play a key role in higher-order cognitive functions. Recently, we showed that this brain region is active in reinforcement learning, during which subjects constantly have to integrate trial outcomes in order to optimize performance. To further elucidate the role of the dorsolateral prefrontal cortex (DLPFC) in reinforcement learning, we applied continuous theta-burst stimulation (cTBS) either to the left or right DLPFC, or to the vertex as a control region, respectively, prior to the performance of a probabilistic learning task in an fMRI environment. While there was no influence of cTBS on learning performance per se, we observed a stimulation-dependent modulation of reward vs. punishment sensitivity: Left-hemispherical DLPFC stimulation led to a more reward-guided performance, while right-hemispherical cTBS induced a more avoidance-guided behavior. FMRI results showed enhanced prediction error coding in the ventral striatum in subjects stimulated over the left as compared to the right DLPFC. Both behavioral and imaging results are in line with recent findings that left, but not right-hemispherical stimulation can trigger a release of dopamine in the ventral striatum, which has been suggested to increase the relative impact of rewards rather than punishment on behavior. Copyright © 2011 Elsevier Inc. All rights reserved.
Sporadic frame dropping impact on quality perception
NASA Astrophysics Data System (ADS)
Pastrana-Vidal, Ricardo R.; Gicquel, Jean Charles; Colomes, Catherine; Cherifi, Hocine
2004-06-01
Over the past few years there has been an increasing interest in real-time video services over packet networks. When considering quality, it is essential to quantify user perception of the received sequence. Severe motion discontinuities are one of the most common degradations in video streaming. The end-user perceives a jerky motion when the discontinuities are uniformly distributed over time, and an instantaneous fluidity break is perceived when the motion loss is isolated or irregularly distributed. Bit-rate adaptation techniques, transmission errors in packet networks, or the restitution strategy could be the origin of this perceived jerkiness. In this paper we present a psychovisual experiment performed to quantify the effect of sporadically dropped pictures on the overall perceived quality. First, the perceptual detection thresholds of generated temporal discontinuities were measured. Then, the quality function was estimated in relation to a single frame dropping for different durations. Finally, a set of tests was performed to quantify the effect of several impairments distributed over time. We have found that the detection thresholds are content, duration and motion dependent. The assessment results show how quality is impaired by a single burst of dropped frames in a 10 s sequence. The effect of several bursts of discarded frames, irregularly distributed over time, is also discussed.
FRB121102 Bursts Show Detailed Spectrotemporal Structure
NASA Astrophysics Data System (ADS)
Hessels, Jason; Seymour, Andrew; Spitler, Laura; Michilli, Daniele; Lynch, Ryan S.; Gajjar, Vishal; Gourdji, Kelly
2018-01-01
FRB121102 is a sporadic emitter of millisecond-duration radio bursts, and is associated with a compact, persistent radio source in the primary star-forming region of a dwarf galaxy at ~ 1 Gpc. Key to understanding FRB121102's physical nature is using the observed burst properties to elucidate the underlying emission mechanism and its local environment. Here we present a sample of high signal-to-noise bursts that reveal hitherto unseen spectrotemporal features. We find that the bursts are often composed of sub-bursts with finite bandwidths, and characteristic frequencies that drift downwards during the burst. While this behavior could be an intrinsic feature of the burst emission mechanism, we also discuss an interpretation in terms of plasma lensing in the source environment, similar to the pulse echoes sometimes seen from the Crab pulsar.
A Search for EGRET/Radio Pulsars in the ETA Carina Region
NASA Technical Reports Server (NTRS)
2002-01-01
Our analysis of EGRET data for the radio pulsar PSR B1046-58, which lies in the Eta Carina region of the Galaxy, was highly successful, resulting in the discovery of strong evidence for gamma-ray pulsations from this source. This work was published in the Astrophysical Journal. Additional support for the association was published in a companion paper, in which an analysis of the X-ray counterpart to PSR B1046-58 was performed, and we showed that it was the only possible counterpart to the gamma-ray source within the EGRET error box.
Neural network based short-term load forecasting using weather compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, T.W.S.; Leung, C.T.
This paper presents a novel technique for electric load forecasting based on neural weather compensation. The proposed method is a nonlinear generalization of the Box-Jenkins approach for nonstationary time-series prediction. A weather compensation neural network is implemented for one-day-ahead electric load forecasting. The weather compensation neural network can accurately predict the change of actual electric load consumption from the previous day. The results, based on Hong Kong Island historical load demand, indicate that this methodology is capable of providing a more accurate load forecast with a 0.9% reduction in forecast error.
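A minimal sketch of the weather-compensation idea: a small neural network learns the day-to-day change in load as a function of the day-to-day change in weather, and the forecast adds that predicted change to the previous day's load. The scikit-learn model, the synthetic temperature/load data, and the feature choice are illustrative assumptions, not the authors' network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Two years of synthetic daily temperature and load (cooling-driven demand)
days = 730
temp = 25 + 8 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)
load = 5000 + 120 * np.maximum(temp - 22, 0) + rng.normal(0, 50, days)

# Weather-compensation target: day-to-day load change vs day-to-day weather change
d_temp = np.diff(temp).reshape(-1, 1)
d_load = np.diff(load)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(d_temp[:-30], d_load[:-30])                  # train on all but the last month

# One-day-ahead forecast: yesterday's load plus the predicted weather-driven change
forecast = load[-31:-1] + net.predict(d_temp[-30:])
mape = np.mean(np.abs(forecast - load[-30:]) / load[-30:]) * 100
print(f"MAPE over the last 30 days: {mape:.2f}%")
```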
Analysis of type II and type III solar radio bursts
NASA Astrophysics Data System (ADS)
Wijesekera, J. V.; Jayaratne, K. P. S. C.; Adassuriya, J.
2018-04-01
A solar radio burst appears as a structure in frequency space that varies with time. Solar radio bursts were the first phenomena identified in the field of radio astronomy. The solar radio frequency range extends from 70 MHz to 2.2 GHz, and most radio bursts can be identified at low frequencies, below about 200 MHz. In this research the properties of low-frequency radio bursts were analyzed. Two types of solar radio bursts were studied, type II and type III: an exponential decay of frequency with time is seen in type II bursts, whereas an approximately linear drift is seen in type III bursts. The drift-rate plots give the drift-rate values of each chosen solar radio burst: high drift-rate values are seen in the type III events, whereas low to medium drift-rate values are seen in the type II events. In the second part of the research the Newkirk electron density model was used to estimate the drift velocities of the solar radio bursts. Although the spatial origin of each solar radio burst is not known precisely, the chosen bursts were assumed to originate within 0.9-1.3 solar radii of the photosphere. A power law of the form N(x) = A × 10^(b/x), relating the electron density to the height in the solar atmosphere, was used. The plasma velocity of each solar radio burst was then calculated from the electron density model and the drift rates. The chosen type II solar radio bursts give low velocities of 233.2499 km/s, 815.9522 km/s and 369.5425 km/s, while the chosen type III solar radio bursts give 1443.058 km/s and 1205.05 km/s.
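The velocity estimate described above can be illustrated with a short calculation: invert the Newkirk density model to obtain height as a function of plasma frequency, then convert the observed frequency drift rate df/dt into a radial exciter speed dr/dt = (df/dt)/(df/dr). The Python sketch below is only an illustration of that chain, assuming fundamental plasma emission; the 80 MHz observing frequency and -10 MHz/s drift rate are example values, not figures from the paper.

```python
import numpy as np

R_SUN_KM = 6.96e5          # solar radius in km

def newkirk_density(r):
    """Newkirk coronal electron density [cm^-3]; r in solar radii from Sun centre."""
    return 4.2e4 * 10.0**(4.32 / r)

def plasma_freq_mhz(n_e):
    """Fundamental plasma frequency [MHz] for electron density n_e [cm^-3]."""
    return 8.98e-3 * np.sqrt(n_e)

def exciter_speed(f_mhz, drift_mhz_per_s):
    """Radial exciter speed [km/s] from an observed frequency and drift rate."""
    r = np.linspace(1.0, 3.0, 20000)             # heliocentric distance [R_sun]
    f = plasma_freq_mhz(newkirk_density(r))      # f(r), monotonically decreasing
    dfdr = np.gradient(f, r)                     # df/dr by finite differences
    i = np.argmin(np.abs(f - f_mhz))             # height where f equals the observed value
    dr_dt = drift_mhz_per_s / dfdr[i]            # dr/dt = (df/dt) / (df/dr)
    return abs(dr_dt) * R_SUN_KM

# e.g. a type III burst drifting at -10 MHz/s observed at 80 MHz
print(f"{exciter_speed(80.0, -10.0):.0f} km/s")
```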
Transitions to Synchrony in Coupled Bursting Neurons
NASA Astrophysics Data System (ADS)
Dhamala, Mukeshwar; Jirsa, Viktor K.; Ding, Mingzhou
2004-01-01
Certain cells in the brain, for example, thalamic neurons during sleep, show spike-burst activity. We study such spike-burst neural activity and the transitions to a synchronized state using a model of coupled bursting neurons. In an electrically coupled network, we show that the increase of coupling strength increases incoherence first and then induces two different transitions to synchronized states, one associated with bursts and the other with spikes. These sequential transitions to synchronized states are determined by the zero crossings of the maximum transverse Lyapunov exponents. These results suggest that synchronization of spike-burst activity is a multi-time-scale phenomenon and burst synchrony is a precursor to spike synchrony.
NASA Technical Reports Server (NTRS)
Gavriil, Fotis P.; Kaspi, Victoria M.; Woods, Peter M.; Lyutikov, Maxim
2005-01-01
We report on the latest X-ray burst detected from the direction of the Anomalous X-ray Pulsar (AXP) 1E 1048.1-5937 using the Rossi X-ray Timing Explorer (RXTE). Following the burst the AXP was observed further with RXTE, XMM-Newton and Chandra. We find a simultaneous increase of approx. 3.7 times the quiescent value (approx. 5 sigma) in the pulsed component of the pulsar's flux during the tail of the burst, which identifies the AXP as the burst's origin. The burst was overall very similar to the two others reported from this source in 2001. The unambiguous identification of 1E 1048.1-5937 as the burster here suggests it was the burster in 2001 as well. Pre- and post-burst observations revealed no change in the total flux or spectrum of the quiescent emission. Comparing all three bursts detected thus far from this source, we find that this event had the largest fluence (170+/-42 x 10(exp -10) erg cm-2), the highest peak flux (71+/-16 x 10(exp -10) erg/s/sq cm), and the longest duration (approx. 411 s). The epoch of the burst peak was consistent with the arrival time of 1E 1048.1-5937's pulse peak. The burst exhibited significant spectral evolution with the trend going from hard to soft. Although the average spectrum of the burst was comparable in hardness (Gamma approx. 1) to those of the 2001 bursts, the peak of this burst was much harder (Gamma approx. 0.5).
Parker, Michael G; Broughton, Alex J; Larsen, Ben R; Dinius, Josh W; Cimbura, Mac J; Davis, Matthew
2011-12-01
The purpose of this study was to compare electrically induced contraction levels produced by three patterns of alternating current in fatigued and nonfatigued skeletal muscles. Eighteen male volunteers without health conditions, with a mean (SD) age of 24.9 (3.4) yrs were randomly exposed to a fatiguing volitional isometric quadriceps contraction and one of three patterns of 2.5-KHz alternating current; two were modulated at 50 bursts per second (10% burst duty cycle with five cycles per burst and 90% burst duty cycle with 45 cycles per burst), and one pattern was modulated at 100 bursts per second (10% burst duty cycle with 2.5 cycles per burst). The electrically induced contraction levels produced by the three patterns of electrical stimulation were compared before and after the fatiguing contraction. The 10% burst duty cycles produced 42.9% (95% confidence interval, 29.1%-56.7%) and 32.1% (95% confidence interval, 18.2%-45.9%) more muscle force (P < 0.001) than did the 90% burst duty cycle pattern. There was no significant interaction effect (P = 0.392) of electrical stimulation patterns and fatigue on the electrically induced contraction levels. The lower burst duty cycle (10%) patterns of electrical stimulation produced stronger muscle contractions. Furthermore, the stimulation patterns had no influence on the difference in muscle force before and after the fatiguing quadriceps contraction. Consequently, for clinical applications in which high forces are desired, the patterns using the 10% burst duty cycle may be helpful.
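For reference, the cycles-per-burst figures quoted above follow directly from the carrier frequency, the burst rate, and the burst duty cycle. A minimal sketch of that arithmetic (a hypothetical helper function, using only values stated in the abstract):

```python
def cycles_per_burst(carrier_hz, bursts_per_s, duty):
    """Number of carrier cycles delivered within each burst."""
    on_time = duty / bursts_per_s          # seconds of carrier delivered per burst
    return carrier_hz * on_time

print(cycles_per_burst(2500, 50, 0.10))    # -> 5.0  cycles/burst
print(cycles_per_burst(2500, 50, 0.90))    # -> 45.0 cycles/burst
print(cycles_per_burst(2500, 100, 0.10))   # -> 2.5  cycles/burst
```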
Morovati, Amirhosein; Ghaffari, Alireza; Erfani Jabarian, Lale; Mehramizi, Ali
2017-01-01
Guaifenesin is a highly water-soluble active (50 mg/mL) classified as a BCS class I drug. Owing to its poor flowability and compressibility, formulating tablets, especially high-dose ones, can be a challenge, and direct compression may not be feasible. The bilayer tablet technology applied to Mucinex® faces challenges in delivering a robust formulation. To overcome the challenges involved in bilayer-tablet manufacturing and powder compressibility, an optimized single-layer tablet prepared from a binary mixture (two-in-one), mimicking the dual drug-release character of Mucinex®, was proposed. A 3-factor, 3-level Box-Behnken design was applied to optimize seven dependent variables (release "%" at 1, 2, 4, 6, 8, 10 and 12 h) with respect to different levels of the independent variables (X1: cetyl alcohol, X2: Starch 1500®, X3: HPMC K100M amounts). Two granule portions were prepared using melt and wet granulation and blended together prior to compression. An optimum formulation was obtained (X1: 37.10, X2: 2, X3: 42.49 mg). The desirability function value was 0.616. The f2 and f1 values between the release profiles of Mucinex® and the optimum formulation were 74 and 3, respectively. An n-value of about 0.5 for both the optimum and Mucinex® formulations indicated a diffusion-controlled (Fickian) release mechanism. However, raising HPMC K100M to 70 mg together with cetyl alcohol to 60 mg led to first-order kinetics (n = 0.6962). K values of 1.56 indicated identical burst drug release. Cetyl alcohol and Starch 1500® modulated guaifenesin release from HPMC K100M matrices and, owing to their binding properties, also improved its poor flowability and compressibility.
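A 3-factor, 3-level Box-Behnken design of the kind used here consists of the twelve edge midpoints of the coded factor cube plus replicate center points. The sketch below generates such a design matrix in coded units; the function name and the choice of three center points are illustrative assumptions, not the authors' actual design software.

```python
from itertools import combinations
import numpy as np

def box_behnken(n_factors=3, n_center=3):
    """Coded (-1, 0, +1) Box-Behnken design matrix."""
    runs = []
    for i, j in combinations(range(n_factors), 2):   # every pair of factors
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b                # remaining factor held at centre
                runs.append(row)
    runs += [[0] * n_factors] * n_center             # replicate centre points
    return np.array(runs)

design = box_behnken()       # 12 edge runs + 3 centre points = 15 runs
print(design.shape)          # (15, 3); columns e.g. cetyl alcohol, Starch 1500, HPMC K100M
```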
das Neves, José; Sarmento, Bruno
2015-05-01
Polymeric nanoparticles (NPs) have the potential to provide effective and safe delivery of antiretroviral drugs in the context of prophylactic anti-HIV vaginal microbicides. Dapivirine-loaded poly(d,l-lactic-co-glycolic acid) (PLGA) NPs were produced by an emulsion-solvent evaporation method, optimized for colloidal properties using a 3-factor, 3-level Box-Behnken experimental design, and characterized for drug loading, production yield, morphology, thermal behavior, drug release, in vitro cellular uptake, cytotoxicity and pro-inflammatory potential. Also, drug permeability/membrane retention in well-established HEC-1-A and CaSki cell monolayer models as mediated by NPs was assessed in the absence or presence of mucin. Box-Behnken design allowed optimizing monodisperse 170nm drug-loaded NPs. Drug release experiments showed an initial burst effect up to 4h, followed by sustained 24h release at pH 4.2 and 7.4. NPs were readily taken up by different genital and macrophage cell lines as assessed by fluorescence microscopy. Drug-loaded NPs presented lower or at least similar cytotoxicity as compared to the free drug, with up to around one-log increase in half-maximal cytotoxic concentration values. In all cases, no relevant changes in cell pro-inflammatory cytokine/chemokine production were observed. Dapivirine transport across cell monolayers was significantly decreased when mucin was present at the donor side with either NPs or the free drug, thus evidencing the influence of this natural glycoprotein in membrane permeability. Moreover, drug retention in cell monolayers was significantly higher for NPs in comparison with the free drug. Overall, obtained dapivirine-loaded PLGA NPs possess interesting technological and biological features that may contribute to their use as novel safe and effective vaginal microbicides. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
A type IV burst associated with a coronal streamer disruption event
NASA Technical Reports Server (NTRS)
Kundu, M. R.
1987-01-01
A type IV burst was observed on February 17, 1985 with the Clark Lake Radio Observatory multifrequency radioheliograph operating in the frequency range 20-125 MHz. This burst was associated with a coronal streamer disruption event. From two-dimensional images produced at 50 MHz, evidence of a type II burst and a slow-moving type IV burst is shown. The observations of the moving type IV burst suggest that a plasmoid containing energetic electrons can result from the disruption of a coronal streamer.
The Mystery of Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Fishman, Gerald J.
2004-01-01
Gamma-ray bursts remain one of the greatest mysteries in astrophysics. Observations of gamma-ray bursts made by the BATSE experiment on the Compton Gamma-Ray Observatory will be described. Most workers in the field now believe that they originate from cosmological distances. This view has been reinforced by observations this year of several optical afterglow counterparts to gamma-ray bursts. A summary of these recent discoveries will be presented, along with their implications for models of the burst emission mechanism and the energy source of the bursts.
BATSE Observations of Gamma-Ray Burst Tails
NASA Technical Reports Server (NTRS)
Connaughton, Valerie; Six, N. Frank (Technical Monitor)
2001-01-01
With the discovery of low-energy radiation appearing to come from the site of gamma-ray bursts in the hours to weeks after the initial burst of gamma rays, it would appear that astronomers have seen a cosmological imprint made by the burster on its surroundings. I discuss in this paper the phenomenon of post-burst emission in BATSE (Burst and Transient Source Experiment) gamma-ray bursts at energies traditionally associated with prompt emission. By summing the background-subtracted signals from hundreds of bursts, I find that tails out to hundreds of seconds after the trigger may be a common feature of long events (duration greater than 2s), and perhaps of the shorter bursts at a lower and shorter-lived level. The tail component appears independent of both the duration (within the long GRB sample) and brightness of the prompt burst emission, and may be softer. Some individual bursts have visible tails at gamma-ray energies and the spectrum in at least a few cases is different from that of the prompt emission. Afterglow at lower energies was detected for one of these bursts, GRB-991216, raising the possibility of afterglow observations over large energy ranges using the next generation of GRB detectors in conjunction with sensitive space or ground-based telescopes.
Observations of short gamma-ray bursts.
Fox, Derek B; Roming, Peter W A
2007-05-15
We review recent observations of short-hard gamma-ray bursts and their afterglows. The launch and successful ongoing operations of the Swift satellite, along with several localizations from the High-Energy Transient Explorer mission, have provoked a revolution in short-burst studies: first, by quickly providing high-quality positions to observers; and second, via rapid and sustained observations from the Swift satellite itself. We make a complete accounting of Swift-era short-burst localizations and proposed host galaxies, and discuss the implications of these observations for the distances, energetics and environments of short bursts, and the nature of their progenitors. We then review the physical modelling of short-burst afterglows: while the simplest afterglow models are inadequate to explain the observations, there have been several notable successes. Finally, we address the case of an unusual burst that threatens to upset the simple picture in which long bursts are due to the deaths of massive stars, and short bursts to compact-object merger events.
Excess close burst pairs in FRB 121102
NASA Astrophysics Data System (ADS)
Katz, J. I.
2018-05-01
The repeating FRB 121102 emitted a pair of apparently discrete bursts separated by 37 ms and another pair, 131 d later, separated by 34 ms, during observations that detected bursts at a mean rate of ˜2 × 10-4 s-1. While FRB 121102 is known to produce multipeaked bursts, here I assume that these `burst pairs' are truly separate bursts and not multicomponent single bursts, and consider the implications of that assumption. Their statistics are then non-Poissonian. Assuming that the emission comes from a narrow range of rotational phase, then the measured burst intervals constrain any possible periodic modulation underlying the highly episodic emission. If more such short intervals are measured a period may be determined or periodicity may be excluded. The excess of burst intervals much shorter than their mean recurrence time may be explained if FRB emit steady but narrow beams that execute a random walk in direction, perhaps indicating origin in a black hole's accretion disc.
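As a rough check on the non-Poissonian claim, one can ask how improbable a <40 ms burst separation is if bursts arrived as a Poisson process at the quoted mean rate of about 2 × 10^-4 s^-1. The short calculation below is only illustrative; the assumed number of detected bursts N is a made-up value used to show the scaling, not a figure from the paper.

```python
import numpy as np

rate = 2e-4          # mean burst rate [1/s] quoted in the abstract
gap = 0.037          # observed burst separation [s]

# Probability that any single inter-burst interval is <= gap for a Poisson process
p_single = 1.0 - np.exp(-rate * gap)          # ~7.4e-6

# With N detected bursts there are ~N-1 intervals; chance of at least one such
# short interval anywhere in the sample (N is an illustrative assumption)
N = 30
p_any = 1.0 - (1.0 - p_single) ** (N - 1)
print(p_single, p_any)
```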
The Short Bursts in SGR 1806-20, 1E 1048-5937, and SGR 0501+4516
NASA Astrophysics Data System (ADS)
Qu, Zhijie; Li, Zhaosheng; Chen, Yupeng; Dai, Shi; Ji, Long; Xu, Renxin; Zhang, Shu
2015-03-01
We analyzed the temporal and spectral properties, focusing on the short bursts, of three anomalous X-ray pulsars (AXPs) and soft gamma repeaters (SGRs): SGR 1806-20, 1E 1048-5937 and SGR 0501+4516. Using data from XMM-Newton, we located the short bursts with the Bayesian blocks algorithm. The short-burst duration distributions of the three sources were fitted by two lognormal functions. For SGR 0501+4516, the spectra of shorter bursts (< 0.2 s) and longer bursts (>= 0.2 s) can be well fitted by a two-blackbody-component model or by an optically thin thermal bremsstrahlung model. We also found a positive correlation between the burst luminosity and the persistent luminosity, with a power-law index gamma = 1.23 +/- 0.18. The energy ratio of the persistent emission to the time-averaged short bursts is in the range 10-10^3, comparable to the case of Type I X-ray bursts.
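A minimal sketch of burst localization with the Bayesian blocks algorithm, here using astropy's implementation on a synthetic list of photon arrival times; the simulated event list and the false-alarm parameter p0 are illustrative assumptions, not the authors' actual XMM-Newton pipeline.

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)

# Illustrative event list: steady background with a short burst injected near t = 100 s
background = rng.uniform(0.0, 200.0, 400)
burst = rng.uniform(100.0, 100.15, 60)
t = np.sort(np.concatenate([background, burst]))

# Optimal piecewise-constant segmentation of the event rate
edges = bayesian_blocks(t, fitness='events', p0=0.01)

# Count rate in each block; blocks far above the mean rate flag candidate bursts
counts = np.histogram(t, bins=edges)[0]
rates = counts / np.diff(edges)
print(edges)
print(rates)
```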
Endogenous GABA and Glutamate Finely Tune the Bursting of Olfactory Bulb External Tufted Cells
Hayar, Abdallah; Ennis, Matthew
2008-01-01
In rat olfactory bulb slices, external tufted (ET) cells spontaneously generate spike bursts. Although ET cell bursting is intrinsically generated, its strength and precise timing may be regulated by synaptic input. We tested this hypothesis by analyzing whether the burst properties are modulated by activation of ionotropic γ-aminobutyric acid (GABA) and glutamate receptors. Blocking GABAA receptors increased—whereas blocking ionotropic glutamate receptors decreased—the number of spikes/burst without changing the interburst frequency. The GABAA agonist (isoguvacine, 10 μM) completely inhibited bursting or reduced the number of spikes/burst, suggesting a shunting effect. These findings indicate that the properties of ET cell spontaneous bursting are differentially controlled by GABAergic and glutamatergic fast synaptic transmission. We suggest that ET cell excitatory and inhibitory inputs may be encoded as a change in the pattern of spike bursting in ET cells, which together with mitral/tufted cells constitute the output circuit of the olfactory bulb. PMID:17567771
Endogenous GABA and glutamate finely tune the bursting of olfactory bulb external tufted cells.
Hayar, Abdallah; Ennis, Matthew
2007-08-01
In rat olfactory bulb slices, external tufted (ET) cells spontaneously generate spike bursts. Although ET cell bursting is intrinsically generated, its strength and precise timing may be regulated by synaptic input. We tested this hypothesis by analyzing whether the burst properties are modulated by activation of ionotropic gamma-aminobutyric acid (GABA) and glutamate receptors. Blocking GABA(A) receptors increased--whereas blocking ionotropic glutamate receptors decreased--the number of spikes/burst without changing the interburst frequency. The GABA(A) agonist (isoguvacine, 10 microM) completely inhibited bursting or reduced the number of spikes/burst, suggesting a shunting effect. These findings indicate that the properties of ET cell spontaneous bursting are differentially controlled by GABAergic and glutamatergic fast synaptic transmission. We suggest that ET cell excitatory and inhibitory inputs may be encoded as a change in the pattern of spike bursting in ET cells, which together with mitral/tufted cells constitute the output circuit of the olfactory bulb.
Frequent bursts from the 11 Hz transient pulsar IGR J17480-2446
NASA Astrophysics Data System (ADS)
Chakraborty, Manoneeta; Mukherjee, Arunava; Bhattacharyya, S.
Accreted matter falling onto the surface of the neutron star in a low-mass X-ray binary (LMXB) system gives rise to intense X-ray bursts originating from unstable thermonuclear conflagration, and these bursts can be used as a tool to constrain the equation of state. A series of such X-ray bursts, along with millihertz (mHz) quasi-periodic oscillations (QPOs) at the highest source luminosities, was observed during the 2010 outburst of the transient LMXB pulsar IGR J17480-2446. The burst properties, quite diverse compared with those of typical type-I bursts, suggested that they were type-II bursts originating from an accretion disc instability. We show that the bursts are indeed of thermonuclear origin and thus confirm the quasi-stable burning model for mHz QPOs. Various burst properties, such as peak flux, fluence, periodicity and duration, were highly dependent on the source spectral states, and their variation over a large accretion-rate range revealed the evolution of the burning process in different accretion-rate regimes.
Chandra Observations of the X-Ray Environs of SN 1998BW / GRB 980425
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouveliotou , C.
2004-07-14
We report X-ray studies of the environs of SN 1998bw and GRB 980425 using the Chandra X-Ray Observatory 1281 days after the GRB. Eight X-ray point sources were localized, three and five each in the original error boxes--S1 and S2--assigned for variable X-ray counterparts to the GRB by BeppoSAX. The sum of the discrete X-ray sources plus continuous emission in S2 observed by CXO on day 1281 is within a factor of 1.5 of the maximum and the upper limits seen by BeppoSAX. We conclude that S2 is the sum of several variable sources that have not disappeared, and therefore is not associated with the GRB. Within S1, clear evidence is seen for a decline of approximately a factor of 12 between day 200 and day 1281. One of the sources in S1, S1a, is coincident with the well-determined radio location of SN 1998bw, and is certainly the remnant of that explosion. The nature of the other sources is also discussed. Combining our observation of the supernova with others of the GRB afterglow, a smooth X-ray light curve, spanning approximately 1300 days, is obtained by assuming the burst and supernova were coincident at 35.6 Mpc. When this X-ray light curve is compared with those of the X-ray "afterglows" of ordinary GRBs, X-ray Flashes, and ordinary supernovae, evidence emerges for at least two classes of light curves, perhaps bounding a continuum. By three to ten years, all these phenomena seem to converge on a common X-ray luminosity, possibly indicative of the supernova underlying them all. This convergence strengthens the conclusion that SN 1998bw and GRB 980425 took place in the same object. One possible explanation for the two classes is a (nearly) standard GRB observed at different angles, in which case X-ray afterglows with intermediate luminosities should eventually be discovered. Finally, we comment on the contribution of GRBs to the ULX source population.
The continuum spectral characteristics of gamma-ray bursts observed by BATSE
NASA Technical Reports Server (NTRS)
Pendleton, Geoffrey N.; Paciesas, William S.; Briggs, Michael S.; Mallozzi, Robert S.; Koshut, Tom M.; Fishman, Gerald J.; Meegan, Charles A.; Wilson, Robert B.; Harmon, Alan B.; Kouveliotou, Chryssa
1994-01-01
Distributions of the continuum spectral characteristics of 260 bursts in the first Burst And Transient Source Experiment (BATSE) catalog are presented. The data are derived from flux calculated from BATSE Large Area Detector (LAD) four-channel discriminator data. The data are converted from counts to photons using a direct spectral inversion technique to remove the effects of atmospheric scattering and the energy dependence of the detector angular response. Although there are intriguing clusters of bursts in the spectral hardness ratio distributions, no evidence for the presence of distinct burst classes based on spectral hardness ratios alone is found. All subsets of bursts selected for their spectral characteristics in this analysis exhibit spatial distributions consistent with isotropy. The spectral diversity of the burst population appears to be caused largely by the highly variable nature of the burst production mechanisms themselves.
Outliers to the peak energy-isotropic energy relation in gamma-ray bursts
NASA Astrophysics Data System (ADS)
Nakar, Ehud; Piran, Tsvi
2005-06-01
The peak energy-isotropic energy (EpEi) relation is among the most intriguing recent discoveries concerning gamma-ray bursts (GRBs). It can have numerous implications for our understanding of the emission mechanism of the bursts and for the application of GRBs to cosmological studies. However, this relation has been verified only for a small sample of bursts with measured redshifts. We propose here a test of whether a burst with an unknown redshift can potentially satisfy the EpEi relation. Applying this test to a large sample of BATSE bursts, we find that a significant fraction of those bursts cannot satisfy this relation. Our test is sensitive only to dim and hard bursts, and therefore this relation might still hold as an inequality (i.e. there are no intrinsically bright and soft bursts). We conclude that the observed relation seen in the sample of bursts with known redshift might be influenced by observational biases and the inability to detect and localize well the hard and weak bursts that have only a small number of photons. In particular, we point out that the threshold for detection, localization and redshift measurement is essentially higher than the threshold for detection alone. We predict that Swift will detect some hard and weak bursts that would be outliers to the EpEi relation. However, we cannot quantify this prediction. We stress the importance of understanding the detection-localization-redshift threshold for the coming Swift detections.
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
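A minimal sketch of the two routes to heteroscedasticity discussed above: an explicit linear error model in which the residual standard deviation grows with the predicted flow, and an implicit Box-Cox transformation of the flows. The simulated flows, the particular form sigma_t = a + b*y_hat, and the use of scipy's Box-Cox utilities are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated "predicted" and "observed" flows with errors growing with flow magnitude
y_pred = rng.gamma(shape=2.0, scale=50.0, size=1000)
y_obs = y_pred + rng.normal(0.0, 0.1 * y_pred)

# Explicit linear model (LM): residual std. dev. assumed to grow linearly with prediction
a, b = 1.0, 0.1
std_resid_lm = (y_obs - y_pred) / (a + b * y_pred)      # standardized residuals

# Implicit Box-Cox route (BC): transform flows first, then work with residuals
lam = stats.boxcox_normmax(y_obs)                       # estimate the lambda parameter
resid_bc = stats.boxcox(y_obs, lam) - stats.boxcox(y_pred, lam)

print(np.std(std_resid_lm), np.std(resid_bc))
```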
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
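A minimal sketch of one of the three transformations compared above, the rank-based normal-score transform, applied to a skewed concentration sample, with skewness, kurtosis, and a normality test checked before and after; the lognormal test data are an illustrative assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
conc = rng.lognormal(mean=0.0, sigma=1.5, size=500)   # heavily skewed "concentrations"

def normal_score(x):
    """Rank-based normal-score transform: map empirical quantiles onto N(0, 1)."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.5) / len(x))

z = normal_score(conc)
print(stats.skew(conc), stats.kurtosis(conc))   # strongly skewed, heavy tailed
print(stats.skew(z), stats.kurtosis(z))         # ~0, ~0 after transformation
print(stats.kstest(z, 'norm'))                  # should not reject normality
```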
NASA Technical Reports Server (NTRS)
Lung, Shun-Fat; Ko, William L.
2016-01-01
In support of the Adaptive Compliant Trailing Edge (ACTE) project at the NASA Armstrong Flight Research Center, displacement transfer functions were applied to the swept wing of a Gulfstream G-III airplane (Gulfstream Aerospace Corporation, Savannah, Georgia) to obtain deformed shape predictions. Four strain-sensing lines (two on the lower surface, two on the upper surface) were used to calculate the deformed shape of the G-III wing under bending and torsion. Because there was an insufficient number of surface strain sensors, the existing G-III wing box finite element model was used to generate simulated surface strains for input to the displacement transfer functions. The resulting predicted deflections correlate well with the finite-element-generated deflections as well as with the measured deflections from the ground load calibration test. The convergence study showed that the displacement prediction error at the G-III wing tip can be reduced by increasing the number of strain stations (for each strain-sensing line) down to a minimum error of 1.6 percent at 17 strain stations; using more than 17 strain stations yielded no benefit, because the error slightly increased to 1.9 percent when 32 strain stations were used.
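Conceptually, the displacement transfer functions convert surface bending strains into curvature and integrate twice along each strain-sensing line to obtain slope and deflection. The simplified sketch below illustrates only that principle for a constant half-depth c and evenly spaced strain stations; it is not NASA's actual displacement transfer functions, and the strain distribution and dimensions are assumed values.

```python
import numpy as np

def deflection_from_strain(strain, dx, c):
    """Estimate deflection along a strain-sensing line.

    strain : surface bending strains at evenly spaced stations
    dx     : spacing between strain stations
    c      : distance from the neutral axis to the strained surface (half-depth)
    Assumes a cantilever (zero slope and zero deflection at the root).
    """
    curvature = np.asarray(strain) / c      # kappa = eps / c for pure bending
    # Trapezoidal integration: curvature -> slope -> deflection
    slope = np.concatenate(([0.0], np.cumsum(0.5 * (curvature[1:] + curvature[:-1]) * dx)))
    defl = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * dx)))
    return defl

# Illustrative: linearly decreasing strain from root to tip of a 10 m semispan
x = np.linspace(0.0, 10.0, 17)              # 17 strain stations, as in the study
eps = 1.5e-3 * (1.0 - x / 10.0)             # assumed strain distribution
print(deflection_from_strain(eps, x[1] - x[0], c=0.15)[-1])   # tip deflection [m]
```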
BATSE Observations of Gamma-Ray Burst Tails
NASA Technical Reports Server (NTRS)
Connaughton, Valerie
2002-01-01
With the observation of low-energy radiation coming from the site of gamma-ray bursts in the hours to weeks after the initial gamma ray burst, it appears that astronomers have discovered a cosmological imprint made by the burster on its surroundings. This paper discusses the phenomenon of postburst emission in Burst and Transient Source Experiment (BATSE) gamma-ray bursts at energies usually associated with prompt emission. After summing up the background-subtracted signals from hundreds of bursts, it is found that tails out to hundreds of seconds after the trigger could be a common feature of events of a duration greater than 2 seconds, and perhaps of the shorter bursts at a lower and shorter-lived level. The tail component may be softer and seems independent of the duration (within the long-GRB sample) and brightness of the prompt burst emission. Some individual bursts have visible tails at gamma-ray energies, and the spectrum in a few cases differs from that of the prompt emission. For one of these bursts, GRB 991216, afterglow at lower energies was detected, which raised the possibility of seeing afterglow observations over large energy ranges using the next generation of GRB detectors in addition to sensitive space- or ground-based telescopes.
An origin for short gamma-ray bursts unassociated with current star formation.
Barthelmy, S D; Chincarini, G; Burrows, D N; Gehrels, N; Covino, S; Moretti, A; Romano, P; O'Brien, P T; Sarazin, C L; Kouveliotou, C; Goad, M; Vaughan, S; Tagliaferri, G; Zhang, B; Antonelli, L A; Campana, S; Cummings, J R; D'Avanzo, P; Davies, M B; Giommi, P; Grupe, D; Kaneko, Y; Kennea, J A; King, A; Kobayashi, S; Melandri, A; Meszaros, P; Nousek, J A; Patel, S; Sakamoto, T; Wijers, R A M J
2005-12-15
Two short (< 2 s) gamma-ray bursts (GRBs) have recently been localized and fading afterglow counterparts detected. The combination of these two results left unclear the nature of the host galaxies of the bursts, because one was a star-forming dwarf, while the other was probably an elliptical galaxy. Here we report the X-ray localization of a short burst (GRB 050724) with unusual gamma-ray and X-ray properties. The X-ray afterglow lies off the centre of an elliptical galaxy at a redshift of z = 0.258 (ref. 5), coincident with the position determined by ground-based optical and radio observations. The low level of star formation typical for elliptical galaxies makes it unlikely that the burst originated in a supernova explosion. A supernova origin was also ruled out for GRB 050709 (refs 3, 31), even though that burst took place in a galaxy with current star formation. The isotropic energy for the short bursts is 2-3 orders of magnitude lower than that for the long bursts. Our results therefore suggest that an alternative source--the coalescence of binary systems of neutron stars or of a neutron star-black hole pair--supplies the progenitors of short bursts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenke, P. A.; Linares, M.; Connaughton, V.
The Fermi Gamma-ray Burst Monitor (GBM) is an all-sky gamma-ray monitor well known in the gamma-ray burst (GRB) community. Although GBM excels in detecting the hard, bright extragalactic GRBs, its sensitivity above 8 keV and its all-sky view make it an excellent instrument for the detection of rare, short-lived Galactic transients. In 2010 March, we initiated a systematic search for transients using GBM data. We conclude this phase of the search by presenting a three-year catalog of 1084 X-ray bursts. Using spectral analysis, location, and spatial distributions we classified the 1084 events into 752 thermonuclear X-ray bursts, 267 transient events from accretion flares and X-ray pulses, and 65 untriggered gamma-ray bursts. All thermonuclear bursts have peak blackbody temperatures broadly consistent with photospheric radius expansion (PRE) bursts. We find an average rate of 1.4 PRE bursts per day, integrated over all Galactic bursters within about 10 kpc. These include 33 and 10 bursts from the ultra-compact X-ray binaries 4U 0614+09 and 2S 0918-549, respectively. We discuss these recurrence times and estimate the total mass ejected by PRE bursts in our Galaxy.
High-Speed Burst-Mode Clock and Data Recovery Circuits for Multiaccess Networks
NASA Astrophysics Data System (ADS)
Shastri, Bhavin J.
Optical multiaccess networks, and specifically passive optical networks (PONs), are considered to be the most promising technologies for the deployment of fiber-to-the-premises/home/user (FTTx) to solve the problem of limited bandwidth in local area networks with a low-cost solution and a guaranteed quality of service. In a PON, multiple users share the fiber infrastructure in a point-to-multipoint (P2MP) network. This topology introduces optical path delays which inherently cause the data packets to undergo amplitude variations of up to 20 dB and phase variations from -2pi to +2pi rad (burst-mode traffic). Consequently, this creates new challenges for the design and test of optical receiver front-ends and clock and data recovery circuits (CDRs), in particular, burst-mode CDRs (BM-CDRs). The research presented in this thesis investigates BM-CDRs, both theoretically and experimentally. We demonstrate two novel BM-CDR architectures. These BM-CDRs achieve error-free operation [bit error rate (BER) < 10^-10] while providing instantaneous (0 preamble bit) clock phase acquisition for any phase step (+/-2pi rad) between successive bursts. Instantaneous phase acquisition improves the physical efficiency of upstream PON traffic and increases the effective throughput of the system by raising the information rate. Our elegant, scalable BM-CDR architectures leverage the design of low-complexity commercial electronics, providing a cost-effective solution for PONs. The first BM-CDR (rated at 5 Gb/s) is based on a phase-tracking, time-domain oversampling (semiblind) CDR operated at 2x the bit rate and a clock phase aligner (CPA) that makes use of a phase-picking algorithm. The second BM-CDR (rated at 10 Gb/s) is based on semiblind space-domain oversampling and employs a phase-tracking CDR with multiphase clocks at the bit rate and a CPA with a novel phase-picking algorithm. We experimentally demonstrate these BM-CDRs in optical test beds and study the effect of channel impairments in: (1) a 5 Gb/s time-division multiplexing gigabit PON 20-km uplink; (2) a 2.5 Gb/s overlapped subcarrier-multiplexing wavelength-division multiplexed PON 20-km uplink; (3) a 1.25 Gb/s 1300-km deployed fiber link spanning Montreal-Quebec City and back; and (4) a 622 Mb/s, 7-user spectral amplitude-coded optical code-division multiple access 20-km PON uplink. We also provide a theoretical framework to model and analyze BM-CDRs. We develop a unified probabilistic theory of BM-CDRs based on semiblind oversampling techniques in either the time or space domain. This theory has also been generalized for conventional CDRs and Nx-oversampling CDRs. Based on this theory, we perform a comprehensive theoretical analysis to quantify the performance of the proposed BM-CDRs in terms of the BER and packet loss ratio, to assess the tradeoffs between various parameters, and to compare the results experimentally to validate the theoretical model. This analysis, coupled with the experimental results, will refine theoretical models of PONs and provide input for establishing realistic power budgets.
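As a toy illustration of the phase-picking idea behind the first (2x time-domain oversampling) architecture: data transitions can only occur at bit boundaries, so of the two available sampling phases, the one that sees fewer adjacent-sample disagreements sits closer to the eye centre. The sketch below is a simplified software analogue under that assumption and is not the BM-CDR circuit or algorithm developed in the thesis.

```python
import numpy as np

def phase_pick_2x(samples):
    """Toy burst-mode phase picking with 2x time-domain oversampling.

    samples : 0/1 values taken at twice the bit rate. Transitions can only
    occur at bit boundaries, so the sampling phase with fewer adjacent-sample
    disagreements lies nearer the centre of the eye.
    """
    s = np.asarray(samples)
    d = s[:-1] != s[1:]                    # transition indicators between samples
    trans_even = d[0::2].sum()             # transitions between s[2k] and s[2k+1]
    trans_odd = d[1::2].sum()              # transitions between s[2k+1] and s[2k+2]
    phase = 0 if trans_even < trans_odd else 1
    return s[phase::2], phase              # recovered bits, chosen sampling phase

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 64)
stream = np.repeat(bits, 2)                # NRZ bits sampled twice per bit
rec, ph = phase_pick_2x(stream[1:])        # burst arrives with a half-bit offset
print(ph, np.array_equal(rec, bits[1:]))   # -> 1 True
```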
2009-11-19
CAPE CANAVERAL, Fla. - At the Astronaut Hall of Fame near NASA’s Kennedy Space Center in Florida, the winners of the 2009 Astronaut Glove Challenge, part of NASA’s Centennial Challenges Program, pose for a group photograph with their friends, family and the event organizers. From left are Caroline Homer and her father, Peter Homer, winner of the $250,000 first prize; Alan Hayes, chairman of Volanz Aerospace Inc.; Andy Petro, manager of NASA Centennial Challenges; Ted Southern, winner of the $100,000 second prize; his friend and glove tester Amy Miller; and Paul Secor, Secor Strategies LLC. The nationwide competition focused on developing improved pressure suit gloves for astronauts to use while working in space. During the challenge, the gloves were submitted to burst tests, joint force tests and tests to measure their dexterity and strength during operation in a glove box which simulates the vacuum of space. Centennial Challenges is NASA’s program of technology prizes for the citizen-inventor. The winning prize for the Glove Challenge is $250,000 provided by the Centennial Challenges Program. Photo credit: NASA/Kim Shiflett
TMS evidence for a selective role of the precuneus in source memory retrieval.
Bonnì, Sonia; Veniero, Domenica; Mastropasqua, Chiara; Ponzo, Viviana; Caltagirone, Carlo; Bozzali, Marco; Koch, Giacomo
2015-04-01
The posteromedial cortex including the precuneus (PC) is thought to be involved in episodic memory retrieval. Here we used continuous theta burst stimulation (cTBS) to disentangle the role of the precuneus in the recognition memory process in a sample of healthy subjects. During the encoding phase, subjects were presented with a series of colored pictures. Afterwards, during the retrieval phase, all previously presented items and a sample of new pictures were presented in black, and subjects were asked to indicate whether each item was new or old, and in the latter case to indicate the associated color. cTBS was delivered over PC, posterior parietal cortex (PPC) and vertex before the retrieval phase. The data were analyzed in terms of hits, false alarms, source errors and omissions. cTBS over the precuneus, but not over the PPC or the vertex, induced a selective decrease in source memory errors, indicating an improvement in context retrieval. All the other accuracy measurements were unchanged. These findings suggest a direct implication of the precuneus in successful context-dependent retrieval. Copyright © 2015 Elsevier B.V. All rights reserved.
Out-of-plane ultrasonic velocity measurement
Hall, M.S.; Brodeur, P.H.; Jackson, T.G.
1998-07-14
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.
Out-of-plane ultrasonic velocity measurement
Hall, Maclin S.; Brodeur, Pierre H.; Jackson, Theodore G.
1998-01-01
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated.
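A minimal sketch of the same-sample-period cross-correlation idea used here to sidestep trigger jitter: the transmitted pulse and its echo are digitized in the same record and cross-correlated, and the time of flight is read off the lag of the correlation peak. The waveform, sample rate, and delay below are illustrative assumptions.

```python
import numpy as np

fs = 50e6                                   # sample rate [Hz], illustrative
t = np.arange(0, 40e-6, 1 / fs)

# Transmitted tone burst and a delayed, attenuated echo in the same record
pulse = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 2e-6) / 1e-6) ** 2)
true_delay = 12.4e-6
echo = 0.3 * np.interp(t - true_delay, t, pulse) \
       + 0.01 * np.random.default_rng(4).normal(size=t.size)

# Cross-correlate within the single sample period; the peak lag is the time of flight
corr = np.correlate(echo, pulse, mode='full')
lag = np.argmax(corr) - (pulse.size - 1)
print(lag / fs * 1e6, "microseconds")       # ~12.4
```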
Lagishetty, Chakradhar V; Duffull, Stephen B
2015-11-01
Clinical studies include occurrences of rare variables, such as genotypes, whose frequency and strength render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine whether such an effect is significant, since type 1 error can be inflated when the covariate is rare. Such covariates may have an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely they may be influential and therefore non-ignorable. When these covariate effects cannot be estimated because of limited power yet are non-ignorable, they are considered nuisance covariates: they have to be accounted for, but because of type 1 error they are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives are (1) calibrating the covariate frequency associated with type 1 error inflation, (2) calibrating the covariate strength that renders it non-ignorable, and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type 1 error was determined for the Wald test. The methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed-effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through the addition of a fixed-effect parameter.
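The kind of calibration described in objective (1) can be illustrated by simulation: generate data with no true covariate effect, fit a model, and count how often the Wald test rejects at the nominal 5% level as the covariate becomes rarer. The sketch below uses a plain linear model rather than a nonlinear mixed effects model, and all numbers are simplifying assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def wald_rejection_rate(n=50, freq=0.05, n_sim=2000, alpha=0.05):
    """Empirical rejection rate of the Wald test for a rare binary covariate
    that has no true effect (a direct estimate of the type 1 error)."""
    rejections, trials = 0, 0
    for _ in range(n_sim):
        x = (rng.random(n) < freq).astype(float)      # rare binary covariate
        if x.sum() == 0:                              # covariate absent: skip this dataset
            continue
        y = 1.0 + rng.normal(size=n)                  # no true covariate effect
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - 2)
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        rejections += abs(beta[1] / se) > stats.norm.ppf(1 - alpha / 2)
        trials += 1
    return rejections / trials

print(wald_rejection_rate())   # empirical type 1 error, to compare against the nominal 0.05
```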
Long-term neuromuscular training and ankle joint position sense.
Kynsburg, A; Pánics, G; Halasi, T
2010-06-01
The preventive effect of proprioceptive training is proven by decreased injury incidence, but its proprioceptive mechanism is not. The major hypothesis was that the training has a positive long-term effect on ankle joint position sense in athletes of a high-risk sport (handball). Ten elite-level female handball players formed the intervention group (training group), and 10 healthy athletes from other sports formed the control group. Proprioceptive training was incorporated into the regular training regimen of the training group. Ankle joint position sense was measured with the "slope-box" test, first described by Robbins et al. Testing was performed one day before the intervention and 20 months later. Mean absolute estimate errors were processed for statistical analysis. Proprioceptive sensory function improved in all four directions with high significance (p<0.0001; average mean estimate error improvement: 1.77 degrees). The improvement was also highly significant (p< or =0.0002) in each single direction, with average mean estimate error improvements between 1.59 degrees (posterior) and 2.03 degrees (anterior). Mean absolute estimate errors at follow-up (2.24+/-0.88 degrees) were significantly lower than in uninjured controls (3.29+/-1.15 degrees) (p<0.0001). Long-term neuromuscular training improved ankle joint position sense in the investigated athletes. This improvement in joint position sense may be one explanation for the injury-rate-reducing effect of neuromuscular training.
Research and development of the laser tracker measurement system
NASA Astrophysics Data System (ADS)
Zhang, Z. L.; Zhou, W. H.; Lao, D. B.; Yuan, J.; Dong, D. F. F.; Ji, R. Y. Y.
2013-01-01
The working principle and system design of the laser tracker measurement system are introduced, as well as the key technologies and solutions in the implementation of the system. The design and implementation of the hardware and the configuration of the software are the main subjects of the research. The hardware components include the distance measuring unit, angle measuring unit, tracking and servo control unit and electronic control unit. The distance measuring devices include the relative distance measuring device (IFM) and the absolute distance measuring device (ADM). The main component of the angle measuring device, the precision rotating stage, is mainly comprised of the precision axis and the encoders, which are both set in the tracking head. The data processing unit, tracking and control unit and power supply unit are all set in the control box. The software is comprised of the communication module, calibration and error compensation module, data analysis module, database management module, 3D display module and the man-machine interface module. A prototype of the laser tracker system has been completed and experiments have been carried out to verify the proposed hardware and software strategies. The experiments showed that the IFM distance measuring error is within 0.15 mm, the ADM distance measuring error is within 3.5 mm and the angle measuring error is within 3 arcseconds, which demonstrates that the preliminary prototype can perform fundamental measurement tasks.
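For orientation, the core measurement of such a tracker is a single range plus two encoder angles converted to a Cartesian point. The sketch below uses an assumed azimuth/elevation convention (not the authors' software) and also shows why a 3-arcsecond angle error corresponds to roughly 0.15 mm of lateral error at a 10 m range.

```python
# Sketch: range + two angles -> 3D coordinate, and the lateral effect of a small angle error.
import math

def tracker_point(distance_m, azimuth_rad, elevation_rad):
    """Convert a (range, azimuth, elevation) reading to Cartesian x, y, z in metres."""
    x = distance_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance_m * math.sin(elevation_rad)
    return x, y, z

print(tracker_point(10.0, math.radians(30.0), math.radians(10.0)))
angle_error = math.radians(3.0 / 3600.0)          # 3 arcseconds
print(10.0 * angle_error * 1000.0)                # ~0.145 mm lateral error at 10 m range
```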
NASA Astrophysics Data System (ADS)
Gu, Hua-Guang; Chen, Sheng-Gen; Li, Yu-Ye
2015-05-01
We investigated the synchronization dynamics of a coupled neuronal system composed of two identical Chay model neurons. The Chay model shows coexisting period-1 and period-2 bursting patterns as a parameter and the initial values are varied. We simulated multiple periodic and chaotic bursting patterns with non-synchronization (NS), burst-phase synchronization (BS), spike-phase synchronization (SS), complete synchronization (CS), and lag synchronization states. When the coexisting behavior is near period-2 bursting, the synchronization states of the coupled system follow a very complex sequence of transitions that begins with transitions between BS and SS, moves to transitions between CS and SS, and ends with CS. Most initial values lead to the CS state of period-2 bursting while only a few lead to the CS state of period-1 bursting. When the coexisting behavior is near period-1 bursting, the transitions begin with NS, move to transitions between SS and BS, then to transitions between SS and CS, and finally to CS. Most initial values lead to the CS state of period-1 bursting but a few lead to the CS state of period-2 bursting. The BS state was identified as chaos synchronization. The patterns for NS and for transitions between BS and SS are insensitive to initial values, whereas the patterns for transitions between CS and SS and for the CS state are sensitive to them. The number of spikes per burst of non-CS bursting increases with increasing coupling strength. These results not only reveal the initial-value- and parameter-dependent synchronization transitions of coupled systems with coexisting behaviors, but also facilitate interpretation of the various bursting patterns and synchronization transitions generated in the nervous system with weak coupling strength. Project supported by the National Natural Science Foundation of China (Grant Nos. 11372224 and 11402039) and the Fundamental Research Funds for Central Universities designated to Tongji University (Grant No. 1330219127).
A search for dispersed radio bursts in archival Parkes Multibeam Pulsar Survey data
NASA Astrophysics Data System (ADS)
Bagchi, Manjari; Nieves, Angela Cortes; McLaughlin, Maura
2012-10-01
A number of different classes of potentially extra-terrestrial bursts of radio emission have been observed in surveys with the Parkes 64-m radio telescope, including 'rotating radio transients', the 'Lorimer burst' and 'perytons'. Rotating radio transients are radio pulsars which are best detectable in single-pulse searches. The Lorimer burst is a highly dispersed isolated radio burst with properties suggestive of extragalactic origin. Perytons share the frequency-swept nature of the rotating radio transients and Lorimer burst, but unlike these events appear in all 13 beams of the Parkes multibeam receiver and are probably a form of peculiar radio frequency interference. In order to constrain these and other radio source populations further, we searched the archival Parkes Multibeam Pulsar Survey data for events similar to any of these. We did not find any new rotating radio transients or bursts like the Lorimer burst. We did, however, discover four peryton-like events. Similar to the perytons, these four bursts are highly dispersed, detected in all 13 beams of the Parkes multibeam receiver, and have pulse widths between 20 and 30 ms. Unlike perytons, these bursts are not associated with atmospheric events like rain or lightning. These facts may indicate that lightning was not responsible for the peryton phenomenon. Moreover, the lack of further highly dispersed celestial signals provides evidence that the Lorimer burst is unlikely to belong to a cosmological source population.
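The "frequency-swept nature" referred to above is the quadratic cold-plasma dispersion delay that single-pulse searches de-disperse against. A minimal sketch follows (standard textbook relation, not code from the survey pipeline; the band edges and dispersion measure below are approximate).

```python
# Sketch: dispersion delay across an observing band for a given dispersion measure (DM).
K_DM_MS = 4.149   # dispersion constant, ms GHz^2 pc^-1 cm^3

def dispersion_delay_ms(dm_pc_cm3, freq_lo_ghz, freq_hi_ghz):
    """Delay of the low-frequency band edge relative to the high-frequency edge."""
    return K_DM_MS * dm_pc_cm3 * (freq_lo_ghz ** -2 - freq_hi_ghz ** -2)

# Across the ~1.23-1.52 GHz Parkes multibeam band, a DM of ~375 pc cm^-3
# (similar to the Lorimer burst) sweeps the pulse by roughly a third of a second:
print(dispersion_delay_ms(375.0, 1.23, 1.52))     # ~355 ms
```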
Probing Intrinsic Properties of Short Gamma-Ray Bursts with Gravitational Waves.
Fan, Xilong; Messenger, Christopher; Heng, Ik Siong
2017-11-03
Progenitors of short gamma-ray bursts are thought to be neutron stars coalescing with their companion black hole or neutron star, which are one of the main gravitational wave sources. We have devised a Bayesian framework for combining gamma-ray burst and gravitational wave information that allows us to probe short gamma-ray burst luminosities. We show that combined short gamma-ray burst and gravitational wave observations not only improve progenitor distance and inclination angle estimates, they also allow the isotropic luminosities of short gamma-ray bursts to be determined without the need for host galaxy or light-curve information. We characterize our approach by simulating 1000 joint short gamma-ray burst and gravitational wave detections by Advanced LIGO and Advanced Virgo. We show that ∼90% of the simulations have uncertainties on short gamma-ray burst isotropic luminosity estimates that are within a factor of two of the ideal scenario, where the distance is known exactly. Therefore, isotropic luminosities can be confidently determined for short gamma-ray bursts observed jointly with gravitational waves detected by Advanced LIGO and Advanced Virgo. Planned enhancements to Advanced LIGO will extend its range and likely produce several joint detections of short gamma-ray bursts and gravitational waves. Third-generation gravitational wave detectors will allow for isotropic luminosity estimates for the majority of the short gamma-ray burst population within a redshift of z∼1.
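The quantity being constrained is straightforward once a distance is in hand. As a minimal sketch (ignoring redshift and k-corrections, which the full Bayesian analysis would fold in), the isotropic-equivalent luminosity follows from the observed gamma-ray flux and the gravitational-wave luminosity distance.

```python
# Sketch: isotropic-equivalent luminosity from observed flux and luminosity distance.
import math

MPC_CM = 3.0857e24   # centimetres per megaparsec

def isotropic_luminosity(flux_erg_cm2_s, distance_mpc):
    """L_iso = 4 * pi * d_L^2 * F, neglecting cosmological corrections."""
    d_cm = distance_mpc * MPC_CM
    return 4.0 * math.pi * d_cm ** 2 * flux_erg_cm2_s

# e.g. a short GRB with an observed flux of 1e-7 erg cm^-2 s^-1 at 200 Mpc:
print(f"{isotropic_luminosity(1e-7, 200.0):.2e} erg/s")   # ~4.8e+47 erg/s
```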
Spatial-temporal variation of low-frequency earthquake bursts near Parkfield, California
Wu, Chunquan; Guyer, Robert; Shelly, David R.; Trugman, D.; Frank, William; Gomberg, Joan S.; Johnson, P.
2015-01-01
Tectonic tremor (TT) and low-frequency earthquakes (LFEs) have been found in the deeper crust of various tectonic environments globally in the last decade. The spatial-temporal behaviour of LFEs provides insight into deep fault zone processes. In this study, we examine recurrence times from a 12-yr catalogue of 88 LFE families with ∼730 000 LFEs in the vicinity of the Parkfield section of the San Andreas Fault (SAF) in central California. We apply an automatic burst detection algorithm to the LFE recurrence times to identify the clustering behaviour of LFEs (LFE bursts) in each family. We find that the burst behaviours in the northern and southern LFE groups differ. Generally, the northern group has longer burst duration but fewer LFEs per burst, while the southern group has shorter burst duration but more LFEs per burst. The southern group LFE bursts are generally more correlated than the northern group, suggesting more coherent deep fault slip and relatively simpler deep fault structure beneath the locked section of the SAF. We also find that the 2004 Parkfield earthquake clearly increased the number of LFEs per burst and the average burst duration for both the northern and southern groups, with a relatively larger effect on the northern group. This could be due to the weakness of the northern part of the fault, or to the northwesterly rupture direction of the Parkfield earthquake.
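The abstract does not spell out the burst detection algorithm, so the following is only a deliberately simple stand-in: group LFE occurrence times into bursts wherever consecutive recurrence times fall below a chosen gap threshold, then summarise burst duration and events per burst, the two quantities compared between the northern and southern families.

```python
# Stand-in sketch (not the study's algorithm): threshold-based grouping of event times.
import numpy as np

def detect_bursts(event_times_s, max_gap_s=300.0):
    """Split sorted event times into bursts separated by gaps longer than max_gap_s."""
    event_times_s = np.asarray(event_times_s)
    boundaries = np.where(np.diff(event_times_s) > max_gap_s)[0] + 1
    bursts = np.split(event_times_s, boundaries)
    durations = np.array([b[-1] - b[0] for b in bursts])
    counts = np.array([b.size for b in bursts])
    return durations, counts

rng = np.random.default_rng(0)
times = np.concatenate([np.sort(rng.uniform(0, 600, 20)),       # three synthetic clusters
                        np.sort(rng.uniform(5000, 5400, 8)),
                        np.sort(rng.uniform(20000, 20100, 30))])
dur, n = detect_bursts(times)
print(f"{dur.size} bursts; mean duration {dur.mean():.0f} s; mean events/burst {n.mean():.1f}")
```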
Bursting synchronization dynamics of pancreatic β-cells with electrical and chemical coupling.
Meng, Pan; Wang, Qingyun; Lu, Qishao
2013-06-01
Based on bifurcation analysis, the synchronization behaviors of two identical pancreatic β-cells connected by electrical and chemical coupling are investigated, respectively. Various firing patterns are produced in coupled cells when a single cell exhibits tonic spiking or square-wave bursting individually, irrespective of whether the cells are connected by electrical or chemical coupling. On the one hand, cells can burst synchronously for both weak electrical and weak chemical coupling when an isolated cell exhibits tonic spiking itself. In particular, for electrically coupled cells, variation of the coupling strength produces a complex transition process of synchronous firing patterns: "fold/limit cycle" bursting, then anti-phase continuous spiking, followed by "fold/torus" bursting, and finally in-phase tonic spiking. On the other hand, it is shown that when the individual cell exhibits square-wave bursting, a suitable coupling strength can make the electrically coupled system generate "fold/Hopf" bursting via a "fold/fold" hysteresis loop, whereas the chemically coupled cells generate "fold/subHopf" bursting. Notably, chemically coupled bursters can exhibit an inverse period-adding bursting sequence. Fast-slow dynamics analysis is applied to explore the generation mechanism of these bursting oscillations. The above analysis of bursting types and their transitions may provide better insight into the role of coupling in the dynamic behaviors of pancreatic β-cells.
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. (2) Based on the results from a didactic example of predicting rainfall runoff, we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the use of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
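To make the construction concrete, here is a hedged sketch of one way a Gaussian copula likelihood term can be assembled (my own illustration, not the authors' code): past model errors supply an empirical marginal, their normal scores supply a correlation matrix R, and the copula density is evaluated at the normal scores of a new error vector. In a full likelihood this dependence term is combined with the marginal densities of the errors.

```python
# Sketch: Gaussian-copula log-density for a vector of model errors.
import numpy as np
from scipy import stats

def empirical_cdf(past_errors):
    """Return a function mapping errors to clipped empirical quantiles."""
    sorted_e, n = np.sort(past_errors), past_errors.size
    def cdf(e):
        return np.clip(np.searchsorted(sorted_e, e, side="right") / (n + 1), 1e-6, 1 - 1e-6)
    return cdf

def gaussian_copula_logdensity(errors, cdf, R):
    """Log Gaussian-copula density at the normal scores of `errors` (dependence term only)."""
    z = stats.norm.ppf(cdf(np.asarray(errors)))
    _, logdet = np.linalg.slogdet(R)
    return -0.5 * (logdet + z @ (np.linalg.inv(R) - np.eye(z.size)) @ z)

rng = np.random.default_rng(0)
past = rng.normal(0.0, 1.0, 5000).cumsum() * 0.05                  # dependent "past" errors
cdf = empirical_cdf(past)
R = 0.6 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))   # assumed AR(1)-like correlation
print(gaussian_copula_logdensity([0.1, 0.2, 0.15, 0.05], cdf, R))
```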
Evaluation of statistical models for forecast errors from the HBV model
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur
2010-04-01
Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback was that their distributions were less reliable than that of Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the other two models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
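The structure of the first model can be sketched in a few lines (an illustration under assumed values, omitting the conditioning on weather classes): Box-Cox transform observed and forecasted inflow, fit a first-order autoregressive coefficient to the transformed errors, and use yesterday's error to shift today's forecast median.

```python
# Sketch: Box-Cox transform + AR(1) error model used to correct today's inflow forecast.
import numpy as np

def boxcox(x, lam):
    return np.log(x) if lam == 0 else (np.asarray(x) ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    return np.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

def ar1_corrected_forecast(obs, fcst, lam=0.3):        # lam is an assumed value
    """Correct today's forecast (last entry of fcst) using the AR(1) structure of past errors."""
    e = boxcox(obs, lam) - boxcox(np.asarray(fcst)[:-1], lam)   # past transformed errors
    phi = np.corrcoef(e[:-1], e[1:])[0, 1]                      # AR(1) coefficient
    return inv_boxcox(boxcox(fcst[-1], lam) + phi * e[-1], lam)

rng = np.random.default_rng(2)
observed = 50.0 + 10.0 * rng.random(30)
forecasts = np.append(observed * 0.9 + rng.normal(0.0, 2.0, 30), 55.0)  # 30 past + today's
print(ar1_corrected_forecast(observed, forecasts))
```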
NASA Astrophysics Data System (ADS)
Rivière, G.; Hua, B. L.
2004-10-01
A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted the AI method. In terms of the spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palenstrophy norm, and significantly more efficient than those for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palenstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on the Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.
Is Single-Port Laparoscopy More Precise and Faster with the Robot?
Fransen, Sofie A F; van den Bos, Jacqueline; Stassen, Laurents P S; Bouvy, Nicole D
2016-11-01
Single-port laparoscopy is a step toward nearly scarless surgery. Concern has been raised that single-incision laparoscopic surgery (SILS) is technically more challenging because of the lack of triangulation and the clashing of instruments. Robotic single-incision laparoscopic surgery (RSILS) in a chopstick setting might overcome these problems. This study evaluated the outcome, in time and errors, of two tasks of the Fundamentals of Laparoscopic Surgery on a dry platform in two settings: SILS versus RSILS. Nine experienced laparoscopic surgeons performed two tasks, peg transfer and a suturing task, on a standard box trainer. All participants practiced each task three times in both settings, SILS and RSILS. The assessment scores (time and errors) were recorded. For the first task, peg transfer, RSILS was significantly better in time (124 versus 230 seconds, P = .0004) and errors (0.80 errors versus 2.60 errors, P = .024) at the first run, compared to the SILS setting. At the third and final run, RSILS still proved to be significantly better in errors (0.10 errors versus 0.80 errors, P = .025) compared to the SILS group. RSILS was faster in the third run, but not significantly so (116 versus 157 seconds, P = .08). For the second task, the suturing task, only 3 participants in the SILS group were able to perform the task within the set time frame of 600 seconds. There was no significant difference in time across the three runs between SILS and RSILS for the 3 participants who fulfilled both tasks within the 600 seconds. This study shows that robotic single-port surgery appears to make basic tasks of the Fundamentals of Laparoscopic Surgery easier, faster, and more precise to perform. For the more complex task of suturing, only the single-port robotic setting enabled all participants to fulfill the task within the set time frame.
Solar microwave bursts - A review
NASA Technical Reports Server (NTRS)
Kundu, M. R.; Vlahos, L.
1982-01-01
Observational and theoretical results on the physics of microwave bursts that occur in the solar atmosphere are reviewed. Special attention is given to the advances made in burst physics over the last few years with the great improvement in spatial and time resolution, especially with instruments like the NRAO three-element interferometer, the Westerbork Synthesis Radio Telescope, and more recently the Very Large Array. Observations made on the preflare build-up of an active region at centimeter wavelengths are reviewed. Three distinct phases in the evolution of cm bursts, namely the impulsive phase, the post-burst phase, and the gradual rise and fall, are discussed. Attention is also given to the flux density spectra of centimeter bursts. Descriptions are given of observations of fine structures with temporal resolution of 10-100 ms in the intensity profiles of cm-wavelength bursts. High spatial resolution observations are analyzed, with special reference to the one- and two-dimensional maps of cm burst sources.
Transcriptional bursting is intrinsically caused by interplay between RNA polymerases on DNA
NASA Astrophysics Data System (ADS)
Fujita, Keisuke; Iwaki, Mitsuhiro; Yanagida, Toshio
2016-12-01
Cell-to-cell variability plays a critical role in cellular responses and decision-making in a population, and transcriptional bursting has been broadly studied by experimental and theoretical approaches as the potential source of cell-to-cell variability. Although molecular mechanisms of transcriptional bursting have been proposed, there is little consensus. An unsolved key question is whether transcriptional bursting is intertwined with many transcriptional regulatory factors or is an intrinsic characteristic of RNA polymerase on DNA. Here we design an in vitro single-molecule measurement system to analyse the kinetics of transcriptional bursting. The results indicate that transcriptional bursting is caused by interplay between RNA polymerases on DNA. The kinetics of in vitro transcriptional bursting is quantitatively consistent with the gene-nonspecific kinetics previously observed in noisy gene expression in vivo. Our kinetic analysis based on a cellular automaton model confirms that arrest and rescue by trailing RNA polymerase intrinsically causes transcriptional bursting.
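A toy model helps convey the claimed mechanism. The sketch below is my own minimal stand-in, not the authors' cellular automaton: polymerases hop along a one-dimensional template, occasionally arrest, and an arrested polymerase resumes only when the trailing polymerase catches up behind it, so transcript completions arrive in clumps even though initiation is a simple stochastic process.

```python
# Toy sketch: arrest-and-rescue traffic of polymerases on a 1-D lattice produces bursty output.
import numpy as np

rng = np.random.default_rng(3)

def simulate(length=200, steps=20000, p_init=0.02, p_arrest=0.002):
    occupied = np.zeros(length, dtype=bool)          # polymerase present at site?
    arrested = np.zeros(length, dtype=bool)          # is that polymerase arrested?
    completions = []
    for t in range(steps):
        if not occupied[0] and rng.random() < p_init:          # initiation at the promoter
            occupied[0] = True
        for i in range(length - 1, -1, -1):                    # update from the 3' end backwards
            if not occupied[i]:
                continue
            if arrested[i]:
                if i > 0 and occupied[i - 1]:                  # rescue by the trailing polymerase
                    arrested[i] = False
                continue
            if rng.random() < p_arrest:                        # spontaneous arrest
                arrested[i] = True
            elif i == length - 1:                              # termination: transcript completed
                occupied[i] = False
                completions.append(t)
            elif not occupied[i + 1]:                          # hop forward if unobstructed
                occupied[i], occupied[i + 1] = False, True
    return np.array(completions)

done = simulate()
gaps = np.diff(done)
print(f"{done.size} transcripts; median gap {np.median(gaps):.0f} steps; max gap {gaps.max()} steps")
```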
Impulsive EUV bursts observed in C IV with OSO-8. [UV solar spectra]
NASA Technical Reports Server (NTRS)
Athay, R. G.; White, O. R.; Lites, B. W.; Bruner, E. C., Jr.
1980-01-01
Time sequences of profiles of the 1548 A line of C IV containing 51 EUV bursts observed in or near active regions are analyzed to determine the brightness, Doppler shift and line broadening characteristics of the bursts. The bursts have mean lifetimes of approximately 150 s, and mean increases in brightness at burst maximum of four-fold as observed with a field of view of 2 x 20 arc sec. Mean burst diameters are estimated to be 3 arc sec, or smaller. All but three of the bursts show Doppler shifts with velocities sometimes exceeding 75 km/s; 31 are dominated by red shifts and 17 are dominated by blue shifts. Approximately half of the latter group have red-shifted precursors. The bursts are interpreted as prominence material, such as surges and coronal rain, moving through the field of view of the spectrometer.
The continuum spectral characteristics of gamma ray bursts observed by BATSE
NASA Technical Reports Server (NTRS)
Pendleton, Geoffrey N.; Paciesas, William S.; Briggs, Michael S.; Mallozzi, Robert S.; Koshut, Tom M.; Fishman, Gerald J.; Meegan, Charles A.; Wilson, Robert B.; Harmon, Alan B.; Kouveliotou, Chryssa
1994-01-01
Distributions of the continuum spectral characteristics of 260 bursts in the first Burst and Transient Source Experiment (BATSE) catalog are presented. The data are derived from flux ratios calculated from the BATSE Large Area Detector (LAD) four channel discriminator data. The data are converted from counts to photons using a direct spectral inversion technique to remove the effects of atmospheric scattering and the energy dependence of the detector angular response. Although there are intriguing clusterings of bursts in the spectral hardness ratio distributions, no evidence for the presence of distinct burst classes based on spectral hardness ratios alone is found. All subsets of bursts selected for their spectral characteristics in this analysis exhibit spatial distributions consistent with isotropy. The spectral diversity of the burst population appears to be caused largely by the highly variable nature of the burst production mechanisms themselves.
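For readers unfamiliar with the terminology, a spectral hardness ratio of the kind referred to above is simply the ratio of the signal in a harder discriminator channel to that in a softer one; the snippet below is a generic illustration, not BATSE pipeline code.

```python
# Sketch: a hardness ratio from two discriminator-channel measurements.
def hardness_ratio(hard_channel, soft_channel):
    """Ratio of hard-channel to soft-channel signal; larger values indicate harder spectra."""
    if soft_channel <= 0:
        raise ValueError("soft-channel signal must be positive")
    return hard_channel / soft_channel

# e.g. photon fluxes of 1.2 and 0.8 (arbitrary units) in a harder and a softer channel:
print(hardness_ratio(1.2, 0.8))   # 1.5
```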
Simultaneous X-Ray, Gamma-Ray, and Radio Observations of the Repeating Fast Radio Burst FRB 121102
NASA Astrophysics Data System (ADS)
Scholz, P.; Bogdanov, S.; Hessels, J. W. T.; Lynch, R. S.; Spitler, L. G.; Bassa, C. G.; Bower, G. C.; Burke-Spolaor, S.; Butler, B. J.; Chatterjee, S.; Cordes, J. M.; Gourdji, K.; Kaspi, V. M.; Law, C. J.; Marcote, B.; McLaughlin, M. A.; Michilli, D.; Paragi, Z.; Ransom, S. M.; Seymour, A.; Tendulkar, S. P.; Wharton, R. S.
2017-09-01
We undertook coordinated campaigns with the Green Bank, Effelsberg, and Arecibo radio telescopes during Chandra X-ray Observatory and XMM-Newton observations of the repeating fast radio burst FRB 121102 to search for simultaneous radio and X-ray bursts. We find 12 radio bursts from FRB 121102 during 70 ks total of X-ray observations. We detect no X-ray photons at the times of radio bursts from FRB 121102 and further detect no X-ray bursts above the measured background at any time. We place a 5σ upper limit of 3 × 10^-11 erg cm^-2 on the 0.5–10 keV fluence for X-ray bursts at the time of radio bursts for durations < 700 ms, which corresponds to a burst energy of 4 × 10^45 erg at the measured distance of FRB 121102. We also place limits on the 0.5–10 keV fluence of 5 × 10^-10 and 1 × 10^-9 erg cm^-2 for bursts emitted at any time during the XMM-Newton and Chandra observations, respectively, assuming a typical X-ray burst duration of 5 ms. We analyze data from the Fermi Gamma-ray Space Telescope Gamma-ray Burst Monitor and place a 5σ upper limit on the 10–100 keV fluence of 4 × 10^-9 erg cm^-2 (5 × 10^47 erg at the distance of FRB 121102) for gamma-ray bursts at the time of radio bursts. We also present a deep search for a persistent X-ray source using all of the X-ray observations taken to date and place a 5σ upper limit on the 0.5–10 keV flux of 4 × 10^-15 erg s^-1 cm^-2 (3 × 10^41 erg s^-1 at the distance of FRB 121102). We discuss these non-detections in the context of the host environment of FRB 121102 and of possible sources of fast radio bursts in general.
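The fluence-to-energy conversions quoted above can be reproduced to order of magnitude with the isotropic relation E = 4π d² × fluence; the sketch below assumes a luminosity distance of roughly 970 Mpc for FRB 121102 (consistent with the quoted conversions) and neglects cosmological corrections.

```python
# Sketch: converting a fluence upper limit into a burst-energy upper limit.
import math

MPC_CM = 3.0857e24   # centimetres per megaparsec

def fluence_to_energy(fluence_erg_cm2, distance_mpc):
    """E = 4 * pi * d^2 * fluence, assuming isotropic emission."""
    d_cm = distance_mpc * MPC_CM
    return 4.0 * math.pi * d_cm ** 2 * fluence_erg_cm2

# The 10-100 keV fluence limit of 4e-9 erg cm^-2 at ~970 Mpc gives ~5e47 erg, as quoted:
print(f"{fluence_to_energy(4e-9, 970.0):.1e} erg")
```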
Long-Lag, Wide-pulse Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Norris, J. P.; Bonnell, J. T.; Kazanas, D.; Scargle, J. D.; Hakkila, J.; Giblin, T. W.
2004-01-01
Currently, the best available probe of the early phase of gamma-ray burst (GRB) jet attributes is the prompt gamma-ray emission, in which several intrinsic and extrinsic variables determine GRB pulse evolution. Bright, usually complex bursts have many narrow pulses that are difficult to model due to overlap. However, the relatively simple, long spectral lag, wide-pulse bursts may have simpler physics and are easier to model. In this work we analyze the temporal and spectral behavior of wide pulses in 24 long-lag bursts, using a pulse model with two shape parameters (width and asymmetry) and the Band spectral model with three shape parameters. We find that pulses in long-lag bursts are distinguished both temporally and spectrally from those in bright bursts: the pulses in long spectral lag bursts are few in number and approximately 100 times wider (tens of seconds), and have systematically lower peaks in νF(ν), harder low-energy spectra and softer high-energy spectra. We find that these five pulse descriptors are essentially uncorrelated for our long-lag sample, suggesting that at least approximately five parameters are needed to model burst temporal and spectral behavior. However, pulse width is strongly correlated with spectral lag; hence these two parameters may be viewed as mutual surrogates. We infer that accurate formulations for estimating GRB luminosity and total energy will depend on several gamma-ray attributes, at least for long-lag bursts. The prevalence of long-lag bursts near the BATSE trigger threshold, their predominantly low νF(ν) spectral peaks, and their relatively steep upper power-law spectral indices indicate that Swift will detect many such bursts.
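For reference, the Band spectral model mentioned above is the standard smoothly broken power law for GRB photon spectra; its three shape parameters are the low- and high-energy indices α and β and a characteristic energy. A minimal sketch follows (standard parameterisation, illustrative parameter values).

```python
# Sketch: the Band photon spectrum and the location of its nuFnu peak.
import numpy as np

def band(energy_kev, alpha=-1.0, beta=-2.3, e0_kev=150.0, amplitude=1.0):
    """Band photon spectrum N(E); the two power laws join smoothly at E = (alpha - beta) * E0."""
    e = np.asarray(energy_kev, dtype=float)
    e_break = (alpha - beta) * e0_kev
    low = amplitude * (e / 100.0) ** alpha * np.exp(-e / e0_kev)
    high = (amplitude * ((alpha - beta) * e0_kev / 100.0) ** (alpha - beta)
            * np.exp(beta - alpha) * (e / 100.0) ** beta)
    return np.where(e < e_break, low, high)

energies = np.logspace(1, 3, 200)                 # 10 keV to 1 MeV
nufnu = energies ** 2 * band(energies)            # proportional to the nuFnu spectrum
print(energies[np.argmax(nufnu)])                 # ~150 keV, i.e. (2 + alpha) * E0
```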