Analysis and Assessment of Peak Lightning Current Probabilities at the NASA Kennedy Space Center
NASA Technical Reports Server (NTRS)
Johnson, D. L.; Vaughan, W. W.
1999-01-01
This technical memorandum presents a summary by the Electromagnetics and Aerospace Environments Branch at the Marshall Space Flight Center of lightning characteristics and lightning criteria for the protection of aerospace vehicles. Probability estimates are included for certain lightning strikes (peak currents of 200, 100, and 50 kA) applicable to the National Aeronautics and Space Administration Space Shuttle at the Kennedy Space Center, Florida, during rollout, on-pad, and boost/launch phases. Results of an extensive literature search to compile information on this subject are presented in order to answer key questions posed by the Space Shuttle Program Office at the Johnson Space Center concerning peak lightning current probabilities if a vehicle is hit by a lightning cloud-to-ground stroke. Vehicle-triggered lightning probability estimates for the aforementioned peak currents are still being worked. Section 4.5, however, does provide some insight on estimating these same peaks.
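The probabilities quoted above combine a strike probability with the chance that the stroke's peak current exceeds a given threshold. A minimal sketch of that arithmetic, assuming a lognormal first-stroke peak-current distribution with textbook parameters (median about 30 kA, log-standard-deviation about 0.7) and an illustrative strike probability, none of which are taken from the memorandum:

```python
import numpy as np
from scipy import stats

# Assumed, illustrative parameters -- not values from the memorandum.
median_kA = 30.0          # median first-stroke peak current
sigma_ln = 0.7            # standard deviation of ln(I)
peak_current = stats.lognorm(s=sigma_ln, scale=median_kA)

p_strike = 0.01           # assumed probability the vehicle is struck during exposure
for threshold_kA in (50.0, 100.0, 200.0):
    p_exceed = peak_current.sf(threshold_kA)      # P(I > threshold | a strike occurs)
    print(f"P(strike and I > {threshold_kA:.0f} kA) ~ {p_strike * p_exceed:.2e}")
```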
NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Vaughan, William W.
1998-01-01
A summary is presented of basic lightning characteristics/criteria for current and future NASA aerospace vehicles. The paper estimates the probability of occurrence of a 200 kA peak lightning return current, should lightning strike an aerospace vehicle in various operational phases, i.e., roll-out, on-pad, launch, reentry/landing, and return-to-launch site. A literature search was conducted for previous work concerning the occurrence and measurement of peak lightning currents, modeling, and estimating probabilities of launch vehicles/objects being struck by lightning. This paper presents these results.
Lightning Strike Peak Current Probabilities as Related to Space Shuttle Operations
NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Vaughan, William W.
2000-01-01
A summary is presented of basic lightning characteristics/criteria applicable to current and future aerospace vehicles. The paper provides estimates of the probability of occurrence of a 200 kA peak lightning return current, should lightning strike an aerospace vehicle in various operational phases, i.e., roll-out, on-pad, launch, reentry/landing, and return-to-launch site. A literature search was conducted for previous work concerning the occurrence and measurement of peak lightning currents, modeling, and estimating the probabilities of launch vehicles/objects being struck by lightning. This paper presents a summary of these results.
NASA Technical Reports Server (NTRS)
Idone, V. P.; Orville, R. E.
1985-01-01
The correlation between peak relative light intensity L(R) and stroke peak current I(R) is examined for 39 subsequent return strokes in two triggered lightning flashes. One flash contained 19 strokes and the other 20 strokes for which direct measurements were available of the return stroke peak current at ground. Peak currents ranged from 1.6 to 21 kA. The measurements of peak relative light intensity were obtained from photographic streak recordings using calibrated film and microsecond resolution. Correlations, significant at better than the 0.1 percent level, were found for several functional relationships. Although a relation between L(R) and I(R) is evident in these data, none of the analytical relations considered is clearly favored. The correlation between L(R) and the maximum rate of current rise is also examined, but less correlation than between L(R) and I(R) is found. In addition, the peak relative intensity near ground is evaluated for 22 dart leaders, and a mean ratio of peak dart leader to peak return stroke relative light intensity was found to be 0.1 with a range of 0.02-0.23. Using two different methods, the peak current near ground in these dart leaders is estimated to range from 0.1 to 6 kA.
Asquith, William H.; Thompson, David B.
2008-01-01
The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of log10-transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds. The bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. Compared to log10-exclusive equations, the equations derived from PRESS minimization have PRESS statistics and residual standard errors less than the log10 exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
TASEP of interacting particles of arbitrary size
NASA Astrophysics Data System (ADS)
Narasimhan, S. L.; Baumgaertner, A.
2017-10-01
A mean-field description of the stationary state behaviour of interacting k-mers performing totally asymmetric exclusion processes (TASEP) on an open lattice segment is presented, employing the discrete Takahashi formalism. It is shown how the maximal current and the phase diagram, including triple-points, depend on the strength of repulsive and attractive interactions. We compare the mean-field results with Monte Carlo simulations of three types of interacting k-mers: monomers, dimers, and trimers. (a) We find that the Takahashi estimates of the maximal current agree quantitatively with those of the Monte Carlo simulation in the absence of interaction as well as in both the attractive and the strongly repulsive regimes. However, theory and Monte Carlo results disagree in the range of weak repulsion, where the Takahashi estimates of the maximal current show a monotonic behaviour, whereas the Monte Carlo data show a peaking behaviour. It is argued that the peaking of the maximal current is due to a correlated motion of the particles. In the limit of very strong repulsion the theory predicts a universal behaviour: the maximal currents of k-mers correspond to those of non-interacting (k+1)-mers. (b) Monte Carlo estimates of the triple-points for monomers, dimers and trimers show an interesting general behaviour: (i) the phase boundaries α* and β* for entry and exit current, respectively, as functions of interaction strength show maxima for α*, whereas β* exhibits minima at the same strength; (ii) in the attractive regime, however, the trend is reversed (β* > α*). The Takahashi estimates of the triple-point for monomers show a similar trend as the Monte Carlo data, except for the peaking of α*; for dimers and trimers, however, the Takahashi estimates show an opposite trend compared to the Monte Carlo data.
NASA Astrophysics Data System (ADS)
Somu, Vijaya Bhaskar
Apparent ionospheric reflection heights estimated using the zero-to-zero and peak-to-peak methods to measure skywave delay relative to the groundwave were compared for 108 first and 124 subsequent strokes observed at LOG in 2009. For either metric there was a considerable decrease in average reflection height for subsequent strokes relative to first strokes. Median uncertainties in daytime reflection heights did not exceed 0.7 km. The standard errors in mean reflection heights were less than 3% of the mean value. Apparent changes in reflection height (estimated using the peak-to-peak method) within individual flashes for 54 daytime and 11 nighttime events at distances ranging from 50 km to 330 km were compared. For daytime conditions, the majority of the flashes showed a monotonic decrease in reflection height. For nighttime flashes, the monotonic decrease was found to be considerably less frequent. The apparent ionospheric reflection height tends to increase with return-stroke peak current. In order to increase the sample size for nighttime conditions, additional data for 43 nighttime flashes observed at LOG in 2014 were analyzed. The "fast-break-point" method of measuring skywave delay (McDonald et al., 1979) was additionally used. The 2014 results for return strokes are generally consistent with the 2009 results. The 2014 data were also used for estimating ionospheric reflection heights for elevated sources (6 CIDs and 3 PB pulses) using the double-skywave feature. The results were compared with reflection heights estimated for corresponding return strokes (if any), and fairly good agreement was generally found. It has been shown, using two different FDTD simulation codes, that the observed differences in reflection height cannot be explained by the difference in the frequency content of first and subsequent return-stroke currents. FDTD simulations showed that within 200 km the reflection heights estimated using the peak-to-peak method are close to the hOE parameter of the ionospheric profile for both daytime and nighttime conditions and for both first and second skywaves. The TL model was used to estimate the radial extent of elves produced by the interaction of LEMP with the ionosphere as a function of return-stroke peak current. For a peak current of 100 kA and a speed equal to one-half of the speed of light, the expected radius of elves is 157 km. Skywaves associated with 24 return strokes in 6 lightning flashes triggered at CB in 2015 and recorded at LOG (at a distance of 45 km from CB) were not found for any of the strokes recorded. In contrast, natural-lightning strokes do produce skywaves at comparable distances. One possible reason is the difference in the higher-frequency content (field waveforms for triggered lightning are narrower than for natural lightning).
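For orientation, the apparent reflection height in the simplest single-hop, flat-Earth mirror model follows directly from the skywave-groundwave delay. The sketch below uses only that simplified geometry (the work summarized above relies on more careful geometry and FDTD modelling), with hypothetical numbers:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def reflection_height(delay_s, distance_m):
    """Apparent reflection height from the skywave delay relative to the groundwave.

    Single-hop, flat-Earth mirror model: the skywave path is 2*sqrt(h^2 + (d/2)^2)
    while the groundwave path is d.  Illustrative only.
    """
    path = C * delay_s + distance_m               # total skywave path length
    return np.sqrt((path / 2.0) ** 2 - (distance_m / 2.0) ** 2)

# Example: a 150 us skywave delay observed 200 km from the stroke (made-up values)
print(f"h ~ {reflection_height(150e-6, 200e3) / 1e3:.1f} km")
```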
Wood, Molly S.; Fosness, Ryan L.; Skinner, Kenneth D.; Veilleux, Andrea G.
2016-06-27
The U.S. Geological Survey, in cooperation with the Idaho Transportation Department, updated regional regression equations to estimate peak-flow statistics at ungaged sites on Idaho streams using recent streamflow (flow) data and new statistical techniques. Peak-flow statistics with 80-, 67-, 50-, 43-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities (1.25-, 1.50-, 2.00-, 2.33-, 5.00-, 10.0-, 25.0-, 50.0-, 100-, 200-, and 500-year recurrence intervals, respectively) were estimated for 192 streamgages in Idaho and bordering States with at least 10 years of annual peak-flow record through water year 2013. The streamgages were selected from drainage basins with little or no flow diversion or regulation. The peak-flow statistics were estimated by fitting a log-Pearson Type III distribution to records of annual peak flows and applying two additional statistical methods: (1) the Expected Moments Algorithm to help describe uncertainty in annual peak flows and to better represent missing and historical record; and (2) the generalized Multiple Grubbs Beck Test to screen out potentially influential low outliers and to better fit the upper end of the peak-flow distribution. Additionally, a new regional skew was estimated for the Pacific Northwest and used to weight at-station skew at most streamgages. The streamgages were grouped into six regions (numbered 1_2, 3, 4, 5, 6_8, and 7, to maintain consistency in region numbering with a previous study), and the estimated peak-flow statistics were related to basin and climatic characteristics to develop regional regression equations using a generalized least squares procedure. Four out of 24 evaluated basin and climatic characteristics were selected for use in the final regional peak-flow regression equations. Overall, the standard error of prediction for the regional peak-flow regression equations ranged from 22 to 132 percent. Among all regions, regression model fit was best for region 4 in west-central Idaho (average standard error of prediction = 46.4 percent; pseudo-R2 > 92 percent) and region 5 in central Idaho (average standard error of prediction = 30.3 percent; pseudo-R2 > 95 percent). Regression model fit was poor for region 7 in southern Idaho (average standard error of prediction = 103 percent; pseudo-R2 < 78 percent) compared to other regions because few streamgages in region 7 met the criteria for inclusion in the study, and the region's semi-arid climate and associated variability in precipitation patterns cause substantial variability in peak flows. A drainage area ratio-adjustment method, using ratio exponents estimated using generalized least-squares regression, was presented as an alternative to the regional regression equations if peak-flow estimates are desired at an ungaged site that is close to a streamgage selected for inclusion in this study. The alternative drainage area ratio-adjustment method is appropriate for use when the drainage area ratio between the ungaged and gaged sites is between 0.5 and 1.5. The updated regional peak-flow regression equations had lower total error (standard error of prediction) than all regression equations presented in a 1982 study and in four of six regions presented in 2002 and 2003 studies in Idaho. A more extensive streamgage screening process used in the current study resulted in fewer streamgages used in the current study than in the 1982, 2002, and 2003 studies.
Fewer streamgages used and the selection of different explanatory variables were likely causes of increased error in some regions compared to previous studies, but overall, regional peak-flow regression model fit was generally improved for Idaho. The revised statistical procedures and increased streamgage screening applied in the current study most likely resulted in a more accurate representation of natural peak-flow conditions. The updated regional peak-flow regression equations will be integrated in the U.S. Geological Survey StreamStats program to allow users to estimate basin and climatic characteristics and peak-flow statistics at ungaged locations of interest. StreamStats estimates peak-flow statistics with quantifiable certainty only when used at sites with basin and climatic characteristics within the range of input variables used to develop the regional regression equations. Both the regional regression equations and StreamStats should be used to estimate peak-flow statistics only in naturally flowing, relatively unregulated streams without substantial local influences to flow, such as large seeps, springs, or other groundwater-surface water interactions that are not widespread or characteristic of the respective region.
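The core of the at-station analysis, fitting a log-Pearson Type III distribution to annual peaks, can be sketched as follows. This bare-bones method-of-moments version omits the Expected Moments Algorithm, the Multiple Grubbs-Beck test, and the regional skew weighting described in the report, and the peak-flow record below is invented.

```python
import numpy as np
from scipy import stats

def lp3_quantiles(annual_peaks_cfs, aeps=(0.5, 0.1, 0.02, 0.01, 0.002)):
    """Method-of-moments log-Pearson Type III quantiles for given exceedance probabilities."""
    logq = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    return {p: 10 ** stats.pearson3.ppf(1 - p, skew, loc=mean, scale=std)
            for p in aeps}

# Hypothetical annual peak-flow record (cfs)
peaks = [1200, 950, 2300, 1800, 760, 3100, 1500, 2100, 890, 4100,
         1700, 1300, 2600, 980, 1900, 2200, 3500, 1100, 1450, 2750]
for p, q in lp3_quantiles(peaks).items():
    print(f"{p * 100:>5.1f}% AEP: {q:,.0f} cfs")
```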
Methods for estimating magnitude and frequency of peak flows for small watersheds in Utah.
DOT National Transportation Integrated Search
2010-06-01
Determining discharge in a stream is important to the design of culverts, bridges, and other structures pertaining to transportation systems. Currently, regression equations exist in Utah to estimate recurrence-year flood discharges for rural wate...
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria
2017-03-01
An accurate battery model is the prerequisite for reliable state estimation of the vanadium redox battery (VRB). As the battery model parameters vary with operating condition and battery aging, common methods in which the model parameters are empirical or prescribed offline lack accuracy and robustness. To address this issue, this paper proposes to use an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both recursive least squares (RLS) and the extended Kalman filter (EKF). Performance comparison shows that RLS is superior with respect to modeling accuracy, convergence properties, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator which incorporates the constraints of voltage limit, SOC limit, and design limit of current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of prediction time horizon selection on the peak power is also analyzed.
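As an illustration of the online identification step, a generic recursive least squares (RLS) estimator with a forgetting factor is sketched below; it tracks the coefficients of a simple linear battery relation rather than the paper's full VRB equivalent-circuit model, and all parameter values are made up.

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain RLS with forgetting factor for y = phi . theta."""
    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain
        self.theta += k * (y - phi @ self.theta)             # parameter correction
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Toy example: recover open-circuit voltage and internal resistance from
# noisy terminal-voltage samples, v = OCV - R0 * I (hypothetical values).
rng = np.random.default_rng(1)
rls = RecursiveLeastSquares(2)
true_ocv, true_r0 = 1.4, 0.05          # volts, ohms (assumed)
for _ in range(500):
    current = rng.uniform(-2, 2)
    v_term = true_ocv - true_r0 * current + rng.normal(0, 1e-3)
    rls.update([1.0, -current], v_term)
print(f"estimated OCV ~ {rls.theta[0]:.3f} V, R0 ~ {rls.theta[1]:.3f} ohm")
```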
Re-Evaluation of the 1921 Peak Discharge at Skagit River near Concrete, Washington
Mastin, M.C.
2007-01-01
The peak discharge record at the U.S. Geological Survey (USGS) gaging station at Skagit River near Concrete, Washington, is a key record that has come under intense scrutiny by the scientific and lay communities in the last 4 years. A peak discharge of 240,000 cubic feet per second for the flood on December 13, 1921, was determined in 1923 by USGS hydrologist James Stewart by means of a slope-area measurement. USGS then determined the peak discharges of three other large floods on the Skagit River (1897, 1909, and 1917) by extending the stage-discharge rating through the 1921 flood measurement. The 1921 estimate of peak discharge was recalculated by Flynn and Benson of the USGS after a channel roughness verification was completed based on the 1949 flood on the Skagit River. The 1949 recalculation indicated that the peak discharge probably was 6.2 percent lower than Stewart's original estimate, but the USGS did not officially change the peak discharge from Stewart's estimate because it was not more than a 10-percent change (which is the USGS guideline for revising peak flows) and the estimate already had error bands of 15 percent. All these flood peaks are now being used by the U.S. Army Corps of Engineers to determine the 100-year flood discharge for the Skagit River Flood Study, so any method to confirm or improve the 1921 peak discharge estimate is warranted. During the last 4 years, two floods have occurred on the Skagit River (2003, 2006) that have enabled the USGS to collect additional data, do further analysis, and yet again re-evaluate the 1921 peak discharge estimate. Since 1949, an island/bar in the study reach has reforested itself. This has complicated the flow hydraulics and made the most recent recalculation of the 1921 flood based on channel roughness verification that used 2003 and 2006 flood data less reliable. However, this recent recalculation did indicate that the original peak-discharge calculation by Stewart may be high, and it added to a body of evidence that indicates a revision in the 1921 peak discharge estimate is appropriate. The USGS has determined that a lower peak-discharge estimate (5.0 percent lower), similar to the 1949 estimates, is most appropriate based on (1) a recalculation of the 1921 flood using a channel roughness verification from the 1949 flood data, (2) a recalculation of the 1921 flood using a channel roughness verification from 2003 and 2006 flood data, and (3) straight-line extension of the stage-discharge relation at the gage based on current-meter discharge measurements. Given the significance of the 1921 flood peak, revising the estimate is appropriate even though the change is less than the 10-percent guideline established by the USGS for revision. Revising the peak is warranted because all work subsequent to 1921 points to the 1921 peak being lower than originally published.
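For readers unfamiliar with slope-area measurements, the arithmetic rests on Manning's equation. The one-section sketch below (US customary units, hypothetical channel geometry) only illustrates the calculation; an actual slope-area determination such as Stewart's uses surveyed high-water marks over multiple cross sections of a reach.

```python
def slope_area_discharge(area_ft2, wetted_perimeter_ft, slope, n):
    """Single-section slope-area discharge estimate using Manning's equation (US units)."""
    hydraulic_radius = area_ft2 / wetted_perimeter_ft
    return (1.486 / n) * area_ft2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical reach: 12,000 ft^2 flow area, 900 ft wetted perimeter,
# water-surface slope 0.0008, Manning's n of 0.035
q = slope_area_discharge(12_000, 900, 0.0008, 0.035)
print(f"Q ~ {q:,.0f} ft^3/s")
```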
Photoelectric return-stroke velocity and peak current estimates in natural and triggered lightning
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Rust, W. David
1989-01-01
Two-dimensional photoelectric return stroke velocities from 130 strokes are presented, including 86 negative natural, 41 negative triggered, one positive triggered, and two positive natural return strokes. For strokes starting near the ground and exceeding 500 m in length, the average velocity is 1.3 ± 0.3 × 10^8 m/s for natural return strokes and 1.2 ± 0.3 × 10^8 m/s for triggered return strokes. For strokes with lengths less than 500 m, the average velocities are slightly higher. Using the transmission line model (TLM), the shortest-segment one-dimensional return stroke velocity, and either the maximum or plateau electric field, it is shown that natural strokes have a peak current distribution that is lognormal with a median value of 16 kA (maximum E) or 12 kA (plateau E). Triggered lightning has a median peak current value of 21 kA (maximum E) or 15 kA (plateau E). Correlations are found between TLM peak currents and velocities for triggered and natural subsequent return strokes, but not between TLM peak currents and natural first return stroke velocities.
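The TLM inversion mentioned above has a closed form: the far-field radiation term gives E = v I / (2 pi eps0 c^2 D), so peak current follows from the measured peak field, the range, and the return-stroke velocity. A sketch with illustrative values, not the paper's data:

```python
import numpy as np

EPS0 = 8.854e-12   # F/m
C = 2.998e8        # m/s

def tlm_peak_current(e_peak_v_per_m, distance_m, velocity_m_per_s):
    """Peak current inferred from the TLM far-field relation E = v*I/(2*pi*eps0*c^2*D)."""
    return 2.0 * np.pi * EPS0 * C ** 2 * distance_m * e_peak_v_per_m / velocity_m_per_s

# Example: 6 V/m peak field normalized to 100 km, velocity 1.3e8 m/s (assumed numbers)
i_peak = tlm_peak_current(6.0, 100e3, 1.3e8)
print(f"TLM peak current ~ {i_peak / 1e3:.1f} kA")
```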
Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro
2016-01-01
Goals: Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods: We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results: Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion: The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance: At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221
NASA Astrophysics Data System (ADS)
Heckman, S.
2015-12-01
Modern lightning locating systems (LLS) provide real-time monitoring and early warning of lightning activities. In addition, LLS provide valuable data for statistical analysis in lightning research. It is important to know the performance of such LLS. In the present study, the performance of the Earth Networks Total Lightning Network (ENTLN) is studied using rocket-triggered lightning data acquired at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, Florida. In the present study, 18 flashes triggered at ICLRT in 2014 were analyzed; they comprise 78 negative cloud-to-ground return strokes. The geometric mean, median, minimum, and maximum for the peak currents of the 78 return strokes are 13.4 kA, 13.6 kA, 3.7 kA, and 38.4 kA, respectively. The peak currents represent typical subsequent return strokes in natural cloud-to-ground lightning. Earth Networks has developed a new data processor to improve the performance of their network. In this study, results are presented for the ENTLN data using the old processor (originally reported in 2014) and the ENTLN data simulated using the new processor. The flash detection efficiency, stroke detection efficiency, percentage of misclassification, median location error, median peak current estimation error, and median absolute peak current estimation error for the originally reported data from the old processor are 100%, 94%, 49%, 271 m, 5%, and 13%, respectively, and those for the simulated data using the new processor are 100%, 99%, 9%, 280 m, 11%, and 15%, respectively. The use of the new processor resulted in higher stroke detection efficiency and lower percentage of misclassification. It is worth noting that the slight differences in median location error, median peak current estimation error, and median absolute peak current estimation error for the two processors are due to the fact that the new processor detected a greater number of return strokes than the old processor.
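The quoted network metrics are straightforward to compute once network reports are matched to the directly measured strokes. A sketch with hypothetical matched data, mirroring the quantities named in the abstract (stroke detection efficiency, median and median-absolute peak-current estimation errors):

```python
import numpy as np

def lls_performance(reference_ka, detected_mask, reported_ka):
    """Per-stroke LLS performance metrics from matched reference and network data.

    reference_ka  : directly measured peak currents (e.g., rocket-triggered strokes)
    detected_mask : True where the network reported the stroke
    reported_ka   : network peak-current estimates (NaN where undetected)
    All inputs below are hypothetical.
    """
    reference_ka = np.asarray(reference_ka, float)
    reported_ka = np.asarray(reported_ka, float)
    detected = np.asarray(detected_mask, bool)
    rel_err = (reported_ka[detected] - reference_ka[detected]) / reference_ka[detected]
    return {"stroke_DE": detected.mean(),
            "median_error_pct": 100 * np.median(rel_err),
            "median_abs_error_pct": 100 * np.median(np.abs(rel_err))}

measured = [13.4, 8.1, 22.0, 5.6, 17.9]
detected = [True, True, True, False, True]
reported = [14.0, 7.5, 24.1, np.nan, 16.8]
print(lls_performance(measured, detected, reported))
```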
Lafon, Belen; Henin, Simon; Huang, Yu; Friedman, Daniel; Melloni, Lucia; Thesen, Thomas; Doyle, Werner; Buzsáki, György; Devinsky, Orrin; Parra, Lucas C; Liu, Anli
2018-02-28
It has come to our attention that we did not specify whether the stimulation magnitudes we report in this Article are peak amplitudes or peak-to-peak. All references to intensity given in mA in the manuscript refer to peak-to-peak amplitudes, except in Fig. 2, where the model is calibrated to 1 mA peak amplitude, as stated. In the original version of the paper we incorrectly calibrated the computational models to 1 mA peak-to-peak, rather than 1 mA peak amplitude. This means that we divided by a value twice as large as we should have. The correct estimated fields are therefore twice as large as shown in the original Fig. 2 and Supplementary Figure 11. The corrected figures are now properly calibrated to 1 mA peak amplitude. Furthermore, the sentence in the first paragraph of the Results section 'Intensity ranged from 0.5 to 2.5 mA (current density 0.125-0.625 mA/cm2), which is stronger than in previous reports', should have read 'Intensity ranged from 0.5 to 2.5 mA peak to peak (peak current density 0.0625-0.3125 mA/cm2), which is stronger than in previous reports.' These errors do not affect any of the Article's conclusions.
Ahearn, Elizabeth A.
2009-01-01
A spring nor'easter affected the East Coast of the United States from April 15 to 18, 2007. In Connecticut, rainfall varied from 3 inches to more than 7 inches. The combined effects of heavy rainfall over a short duration, high winds, and high tides led to widespread flooding, storm damage, power outages, evacuations, and disruptions to traffic and commerce. The storm caused at least 18 fatalities (none in Connecticut). A Presidential Disaster Declaration was issued on May 11, 2007, for two counties in western Connecticut - Fairfield and Litchfield. This report documents hydrologic and meteorologic aspects of the April 2007 flood and includes estimates of the magnitude of the peak discharges and peak stages during the flood at 28 streamflow-gaging stations in western Connecticut. These data were used to perform flood-frequency analyses. Flood-frequency estimates provided in this report are expressed in terms of exceedance probabilities (the probability of a flood reaching or exceeding a particular magnitude in any year). Flood-frequency estimates for the 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 exceedance probabilities (also expressed as 50-, 20-, 10-, 4-, 2-, 1-, and 0.2-percent exceedance probability, respectively) were computed for 24 of the 28 streamflow-gaging stations. Exceedance probabilities can further be expressed in terms of recurrence intervals (2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence interval, respectively). Flood-frequency estimates computed in this study were compared to the flood-frequency estimates used to derive the water-surface profiles in previously published Federal Emergency Management Agency (FEMA) Flood Insurance Studies. The estimates in this report update and supersede previously published flood-frequency estimates for streamflow-gaging stations in Connecticut by incorporating additional years of annual peak discharges, including the peaks for the April 2007 flood. In the southwest coastal region of Connecticut, the April 2007 peak discharges for streamflow-gaging stations with records extending back to 1955 were the second highest peak discharges on record; the 1955 annual peak discharges are the highest peak discharges in the station records. In the Housatonic and South Central Coast Basins, the April 2007 peak discharges for streamflow-gaging stations with records extending back to 1930 or earlier ranked between the fourth and eighth highest discharges on record, with the 1936, 1938, and 1955 floods as the largest floods in the station records. The peak discharges for the April 2007 flood have exceedance probabilities ranging from 0.10 to 0.02 (a 10- to 2-percent chance of being exceeded in a given year, respectively), with the majority (80 percent) of the stations having exceedance probabilities between 0.10 and 0.04. At three stations - Norwalk River at South Wilton, Pootatuck River at Sandy Hook, and Still River at Robertsville - the April 2007 peak discharges have an exceedance probability of 0.02. Flood-frequency estimates made after the April 2007 flood were compared to flood-frequency estimates used to derive the water-surface profiles (also called flood profiles) in FEMA Flood Insurance Studies developed for communities. In general, the comparison indicated that at the 0.10 exceedance probability (a 10-percent chance of being exceeded in a given year), the discharges from the current (2007) flood-frequency analysis are larger than the discharges in the FEMA Flood Insurance Studies, with a median change of about +10 percent.
In contrast, at the 0.01 exceedance probability (a 1-percent chance of being exceeded in a given year), the discharges from the current flood-frequency analysis are smaller than the discharges in the FEMA Flood Insurance Studies, with a median change of about -13 percent. Several stations had more than +25 percent change in discharges at the 0.10 exceedance probability and are in the following communities: Winchester (Still River at Robertsv
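Two small pieces of arithmetic recur in this and the preceding flood-frequency abstracts: exceedance probability and recurrence interval are reciprocals (T = 1/p), and comparisons between analyses are summarized as median percent changes. A sketch with hypothetical discharges:

```python
import numpy as np

# Exceedance probability p and recurrence interval T = 1/p, so the 0.01
# exceedance probability is the 100-year flood.
aeps = [0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.002]
print({p: f"{1 / p:.0f}-year" for p in aeps})

# Median percent change between two sets of estimates (made-up discharges, cfs)
q_current = np.array([2100.0, 3600.0, 5200.0])   # current flood-frequency analysis
q_previous = np.array([1900.0, 3300.0, 5900.0])  # earlier study
print(f"median change: {np.median((q_current - q_previous) / q_previous) * 100:+.0f}%")
```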
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai
2016-10-01
To evaluate the continuous and instantaneous load capability of a battery, this paper describes a joint estimator for state-of-charge (SOC) and state-of-function (SOF) of lithium-ion batteries (LIB) based on a Kalman filter (KF). The SOC is a widely used index of the remaining useful capacity in a battery. The SOF represents the peak power capability of the battery. It can be determined by real-time SOC estimation and terminal voltage prediction, which can be derived from impedance parameters. However, the open-circuit-voltage (OCV) of LiFePO4 is highly nonlinear with SOC, which leads to difficulties in SOC estimation. To solve these problems, this paper proposes an onboard SOC estimation method. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery, where the OCV is regarded as a linearized function of SOC. Then, the system states are estimated based on the KF. Besides, the factors that influence peak power capability are analyzed according to statistical data. Finally, the performance of the proposed methodology is demonstrated by experiments conducted on LiFePO4 LIBs under different operating currents and temperatures. Experimental results indicate that the proposed approach is suitable for battery onboard SOC and SOF estimation.
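A minimal scalar Kalman filter for SOC, assuming (as in the linearized model described above) OCV(SOC) = a*SOC + b plus a series resistance, illustrates the predict/update cycle; every numerical value below is invented for the example and is not from the paper.

```python
import numpy as np

# Assumed, illustrative parameters
a, b, r0 = 0.6, 3.0, 0.01          # OCV slope (V per unit SOC), offset (V), resistance (ohm)
capacity_as = 2.3 * 3600           # cell capacity in ampere-seconds
q_proc, r_meas = 1e-7, 1e-4        # process / measurement noise variances
dt = 1.0                           # sample time, s

soc, p = 0.5, 1e-2                 # initial SOC estimate and its variance
true_soc = 0.8
rng = np.random.default_rng(2)
for _ in range(600):
    current = 1.0                              # constant 1 A discharge
    true_soc -= current * dt / capacity_as
    v_meas = a * true_soc + b - r0 * current + rng.normal(0, 0.01)

    soc -= current * dt / capacity_as          # predict (coulomb counting)
    p += q_proc
    k = p * a / (a * a * p + r_meas)           # Kalman gain (H = a)
    soc += k * (v_meas - (a * soc + b - r0 * current))
    p *= (1 - k * a)

print(f"estimated SOC ~ {soc:.3f}, true SOC ~ {true_soc:.3f}")
```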
NASA Astrophysics Data System (ADS)
Fan, Tingting; Yuan, Ping; Wang, Xuejuan; Cen, Jianyong; Chang, Xuan; Zhao, Yanyan
2017-09-01
The spectra of two negative cloud-to-ground lightning discharge processes with multiple return strokes were obtained by a slit-less high-speed spectrograph with a temporal resolution of 110 μs. Combined with synchronous electrical observation data and theoretical calculation, the physical characteristics during the return-stroke process are analysed. A positive correlation between the discharge current and the intensity of ionic lines in the spectra is verified, and based on this feature, the current evolution characteristics during four return strokes are investigated. The results show that the time from the peak current to the half-peak value, estimated by multi-point fitting, is about 101-139 μs. The Joule heat per unit length of the four return-stroke channels is on the order of 10^5-10^6 J/m. The radius of the arc discharge channel is positively related to the discharge current: the more intense the current, the greater the channel radius. Furthermore, the evolution of the arc core channel radius during the return stroke is consistent with the trend of the discharge current after the peak value. Compared with the decay of the current, the temperature decreases more slowly.
Disruption of crystalline structure of Sn3.5Ag induced by electric current
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Han-Chie; Lin, Kwang-Lung, E-mail: matkllin@mail.ncku.edu.tw; Wu, Albert T.
2016-03-21
This study presented the disruption of the Sn and Ag3Sn lattice structures of Sn3.5Ag solder induced by electric current at 5-7 × 10^3 A/cm^2, investigated with high resolution transmission electron microscopy and electron diffraction analysis. The electric current stressing induced a high degree of strain on the alloy, as estimated from the X-ray diffraction (XRD) peak shift of the current-stressed specimen. The XRD peak intensity of the Sn matrix and the Ag3Sn intermetallic compound diminished to nearly undetectable levels after 2 h of current stressing. The electric current stressing gave rise to a high dislocation density of up to 10^17/m^2. The grain morphology of the Sn matrix became invisible after prolonged current stressing as a result of the coalescence of dislocations.
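The strain estimate from an XRD peak shift comes from differentiating Bragg's law: strain = delta_d/d = -cot(theta) * delta_theta. The sketch below uses a hypothetical reflection angle and shift, not the paper's measurements.

```python
import numpy as np

def lattice_strain(two_theta_ref_deg, two_theta_obs_deg):
    """Elastic strain from an XRD peak shift, via strain = -cot(theta) * delta_theta."""
    theta_ref = np.radians(two_theta_ref_deg / 2.0)
    theta_obs = np.radians(two_theta_obs_deg / 2.0)
    return -(theta_obs - theta_ref) / np.tan(theta_ref)

# A hypothetical 0.05 degree shift of a reflection nominally at 2-theta = 30.6 degrees
print(f"strain ~ {lattice_strain(30.6, 30.65):.2e}")
```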
Shipborne LF-VLF oceanic lightning observations and modeling
NASA Astrophysics Data System (ADS)
Zoghzoghy, F. G.; Cohen, M. B.; Said, R. K.; Lehtinen, N. G.; Inan, U. S.
2015-10-01
Approximately 90% of natural lightning occurs over land, but recent observations, using Global Lightning Detection (GLD360) geolocation peak current estimates and satellite optical data, suggested that cloud-to-ground flashes are on average stronger over the ocean. We present initial statistics from a novel experiment using a Low Frequency (LF) magnetic field receiver system installed aboard the National Oceanic and Atmospheric Administration (NOAA) Ronald H. Brown research vessel that allowed the detection of impulsive radio emissions from deep-oceanic discharges at short distances. Thousands of LF waveforms were recorded, facilitating the comparison of oceanic waveforms to their land counterparts. A computationally efficient electromagnetic radiation model that accounts for propagation over lossy and curved ground is constructed and compared with previously published models. We include the effects of Earth curvature on LF ground wave propagation and quantify the effects of channel-base current risetime, channel-base current falltime, and return stroke speed on the radiated LF waveforms observed at a given distance. We compare simulation results to data and conclude that previously reported larger GLD360 peak current estimates over the ocean are unlikely to fully result from differences in channel-base current risetime, falltime, or return stroke speed between ocean and land flashes.
NASA Astrophysics Data System (ADS)
Stange, P.; Bach, L. T.; Le Moigne, F. A. C.; Taucher, J.; Boxhammer, T.; Riebesell, U.
2017-01-01
The ocean's potential to export carbon to depth partly depends on the fraction of primary production (PP) sinking out of the euphotic zone (i.e., the e-ratio). Measurements of PP and export flux are often performed simultaneously in the field, although there is a temporal delay between these two quantities. Resulting e-ratio estimates therefore often incorrectly assume that PP is exported downward instantaneously. Evaluating results from four mesocosm studies, we find that peaks in organic matter sedimentation lag chlorophyll a peaks by 2 to 15 days. We discuss the implications of these time lags (TLs) for current e-ratio estimates and evaluate potential controls of TL. Our analysis reveals a strong correlation between TL and the duration of chlorophyll a buildup, indicating a dependency of TL on plankton food web dynamics. This study is one step further toward time-corrected e-ratio estimates.
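One simple way to quantify such a lag is to cross-correlate the chlorophyll-a and sedimentation time series and take the lag of maximum correlation; the sketch below does this on synthetic daily series, not mesocosm data.

```python
import numpy as np

def sedimentation_lag_days(chlorophyll, export_flux):
    """Lag (in samples/days) at which export flux best correlates with chlorophyll a."""
    chl = (chlorophyll - np.mean(chlorophyll)) / np.std(chlorophyll)
    flux = (export_flux - np.mean(export_flux)) / np.std(export_flux)
    corr = np.correlate(flux, chl, mode="full")
    lags = np.arange(-len(chl) + 1, len(chl))
    return lags[np.argmax(corr)]          # positive lag: flux peaks after chlorophyll

days = np.arange(40)
chl = np.exp(-0.5 * ((days - 12) / 3.0) ** 2)    # synthetic bloom peaking on day 12
flux = np.exp(-0.5 * ((days - 19) / 4.0) ** 2)   # synthetic export peaking on day 19
print(f"estimated lag: {sedimentation_lag_days(chl, flux)} days")
```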
Integration of biogenic emissions in environmental fate, transport, and exposure systems
NASA Astrophysics Data System (ADS)
Efstathiou, Christos I.
Biogenic emissions make a significant contribution to the levels of aeroallergens and secondary air pollutants such as ozone. Understanding major factors contributing to allergic airway diseases requires accurate characterization of the emissions and transport/transformation of biogenic species. However, biogenic emission estimates are laden with large uncertainties, and the current biogenic emission estimation models use low-resolution data for estimating land use, vegetation biomass, and VOC emissions. Furthermore, there are currently no established methods for estimating bioaerosol emissions over continental or regional scales, which can impact the ambient levels of pollen that have synergistic effects with other gaseous pollutants. In the first part of the thesis, a detailed review of different approaches and available databases for estimating biogenic emissions was conducted, and multiple geodatabases and satellite imagery were used in a consistent manner to improve the estimates of biogenic emissions over the continental United States. These emissions represent more realistic, higher resolution estimates of biogenic emissions (including those of highly reactive species such as isoprene). The impact of these emissions on tropospheric ozone levels was studied at a regional scale through the application of the USEPA's Community Multiscale Air Quality (CMAQ) model. Minor but significant differences in the levels of ambient ozone were observed. In the second part of the thesis, an algorithm for estimating emissions of pollen particles from major allergenic tree and plant families in the United States was developed, extending the approach for modeling biogenic gas emissions in the Biogenic Emission Inventory System (BEIS). A spatio-temporal vegetation map was constructed from different remote sensing sources and local surveys, and was coupled with a meteorological model to develop pollen emission rates. This model overcomes limitations posed by the lack of temporally resolved dynamic vegetation mapping in traditional pollen emission estimation methods. The pollen emissions model was applied to study pollen emissions for the northeastern United States at 12 km resolution for comparison with ground-level tree pollen data. A pollen transport model that simulates complex dispersion and deposition was developed through modifications to the USEPA's Community Multiscale Air Quality (CMAQ) model. The peak pollen emission predictions were within a day of the peak pollen counts measured, and the peak predicted pollen concentration estimates were within two days of the peak measured pollen counts, providing independent corroboration of the model. The models for emissions and dispersion allow data-independent estimation of pollen levels, and provide an important component in assessing exposures of populations to pollen, especially under different climate change scenarios.
Friday, John
1974-01-01
A crest-stage gaging station provides an excellent means for determining peak water-surface elevations at a selected location on a stream channel. When related to streamflow, these data provide hydrologists with a knowledge of the flood experience of a drainage basin. If an adequate flood history is known, it is possible to estimate the probable magnitude and frequency of floods likely to occur in that basin, and this information is a valuable asset to anyone who must estimate design floods at proposed drainage structures. However, most design problems involve estimating peak flows on ungaged streams. This is difficult because the rate of storm runoff is not the same in all basins due to the influence of various basin characteristics which can either assist or retard the runoff. The crest-stage gaging program in Oregon is designed to provide a representative sampling of peak flows at basins having a wide range in characteristics. Then, after sufficient data are collected, a statistical analysis can be made which will provide a means for estimating design floods at ungaged sites on the basis of known basin characteristics. This report is one of a series presenting a compilation of peak data collected at 232 crest-stage gaging stations in Oregon. The collection and publication of these data are made possible through mutual funding by State and Federal agencies. The Geological Survey, the Oregon State Highway Commission, the Federal Highway Administration, and the Bureau of Land Management are currently supporting 160 active crest-stage stations in Oregon.
Interpretation of the MEG-MUSIC scan in biomagnetic source localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1993-09-01
MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyzed the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.
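The projection metric can be illustrated in a few lines: with an orthonormal basis for the estimated signal subspace and a normalized forward-model gain vector, the subspace correlation peaks at unity for a true source. The basis and gain vectors below are random stand-ins, not a real MEG forward model.

```python
import numpy as np

def music_metric(signal_subspace, lead_field):
    """Subspace correlation used in a MUSIC-style scan.

    signal_subspace : (n_sensors, rank) orthonormal basis from the data covariance
    lead_field      : (n_sensors,) forward-model gain vector for a trial source
    Returns a value in [0, 1]; it equals 1 when the lead field lies in the subspace.
    """
    g = lead_field / np.linalg.norm(lead_field)
    return np.linalg.norm(signal_subspace.T @ g)

rng = np.random.default_rng(3)
n_sensors, rank = 64, 3
basis, _ = np.linalg.qr(rng.normal(size=(n_sensors, rank)))   # orthonormal subspace
inside = basis @ rng.normal(size=rank)                         # lead field in the subspace
outside = rng.normal(size=n_sensors)                           # arbitrary lead field
print(f"metric (true source)  ~ {music_metric(basis, inside):.2f}")   # ~1.0
print(f"metric (random point) ~ {music_metric(basis, outside):.2f}")  # well below 1
```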
Analysis of Wien filter spectra from Hall thruster plumes.
Huang, Wensheng; Shastry, Rohit
2015-07-01
A method for analyzing the Wien filter spectra obtained from the plumes of Hall thrusters is derived and presented. The new method extends upon prior work by deriving the integration equations for the current and species fractions. Wien filter spectra from the plume of the NASA-300M Hall thruster are analyzed with the presented method and the results are used to examine key trends. The new integration method is found to produce results slightly different from the traditional area-under-the-curve method. The use of different velocity distribution forms when performing curve-fits to the peaks in the spectra is compared. Additional comparison is made with the scenario where the current fractions are assumed to be proportional to the heights of peaks. The comparison suggests that the calculated current fractions are not sensitive to the choice of form as long as both the height and width of the peaks are accounted for. Conversely, forms that only account for the height of the peaks produce inaccurate results. Also presented are the equations for estimating the uncertainty associated with applying curve fits and charge-exchange corrections. These uncertainty equations can be used to plan the geometry of the experimental setup.
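The point about accounting for both peak height and width can be illustrated by fitting a two-Gaussian model to a synthetic spectrum and comparing fractions computed from fitted areas (amplitude times width) with fractions computed from heights alone; the spectrum below is invented, not NASA-300M data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((v - mu) / sigma) ** 2)

def two_gaussians(v, a1, m1, s1, a2, m2, s2):
    return gaussian(v, a1, m1, s1) + gaussian(v, a2, m2, s2)

# Synthetic two-peak trace (arbitrary units) with a little noise
v = np.linspace(0.0, 40.0, 800)
rng = np.random.default_rng(4)
spectrum = two_gaussians(v, 1.0, 15.0, 1.2, 0.45, 21.0, 2.0) + rng.normal(0, 0.01, v.size)

popt, _ = curve_fit(two_gaussians, v, spectrum, p0=[1.0, 15.0, 1.0, 0.5, 21.0, 2.0])
a1, _, s1, a2, _, s2 = popt
areas = np.array([a1 * abs(s1), a2 * abs(s2)])   # proportional to integrated peak current
heights = np.array([a1, a2])
print("fractions from peak areas  :", areas / areas.sum())
print("fractions from heights only:", heights / heights.sum())
```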
Zero bias thermally stimulated currents in synthetic diamond
NASA Astrophysics Data System (ADS)
Mori, R.; Miglio, S.; Bruzzi, M.; Bogani, F.; De Sio, A.; Pace, E.
2009-06-01
Zero bias thermally stimulated currents (ZBTSCs) have been observed in single crystal high pressure high temperature (HPHT) and polycrystalline chemical vapor deposited (pCVD) diamond films. The ZBTSC technique is characterized by an increased sensitivity with respect to a standard TSC analysis. Due to the absence of the thermally activated background current, new TSC peaks have been observed in both HPHT and pCVD diamond films, related to shallow activation energies usually obscured by the emission of the dominant impurities. The ZBTSC peaks are explained in terms of defect discharge in the nonequilibrium potential distribution created by nonuniform trap filling at the metal-diamond junctions. The electric field due to the charged defects has been estimated in a quasi-zero-bias TSC experiment by applying an external bias.
Costa, John E.; Jarrett, Robert D.
2008-01-01
Thirty flood peak discharges determine the envelope curve of maximum floods documented in the United States by the U.S. Geological Survey. These floods occurred from 1927 to 1978 and are extraordinary not just in their magnitude, but in their hydraulic and geomorphic characteristics. The reliability of the computed discharge of these extraordinary floods was reviewed and evaluated using current (2007) best practices. Of the 30 flood peak discharges investigated, only 7 were measured at daily streamflow-gaging stations that existed when the flood occurred, and 23 were measured at miscellaneous (ungaged) sites. Methods used to measure these 30 extraordinary flood peak discharges consisted of 21 slope-area measurements, 2 direct current-meter measurements, 1 culvert measurement, 1 rating-curve extension, and 1 interpolation and rating-curve extension. The remaining four peak discharges were measured using combinations of culvert, slope-area, flow-over-road, and contracted-opening measurements. The method of peak discharge determination for one flood is unknown. Changes to peak discharge or rating are recommended for 20 of the 30 flood peak discharges that were evaluated. Nine floods retained published peak discharges, but their ratings were downgraded. For two floods, both peak discharge and rating were corrected and revised. Peak discharges for five floods that are subject to significant uncertainty due to complex field and hydraulic conditions were re-rated as estimates. This study resulted in 5 of the 30 peak discharges having revised values greater than about 10 percent different from the original published values. Peak discharges were smaller for three floods (North Fork Hubbard Creek, Texas; El Rancho Arroyo, New Mexico; South Fork Wailua River, Hawaii), and two peak discharges were revised upward (Lahontan Reservoir tributary, Nevada; Bronco Creek, Arizona). Two peak discharges were indeterminate because they were concluded to have been debris flows with peak discharges that were estimated by an inappropriate method (slope-area) (Big Creek near Waynesville, North Carolina; Day Creek near Etiwanda, California). Original field notes and records could not be found for three of the floods; however, some data (copies of original materials, records of reviews) were available for two of these floods. A rating was assigned to each of seven peak discharges that had no rating. Errors identified in the reviews include misidentified flow processes, incorrect drainage areas for very small basins, incorrect latitude and longitude, improper field methods, arithmetic mistakes in hand calculations, omission of measured high flows when developing rating curves, and typographical errors. Common problems include use of two-section slope-area measurements, poor site selection, uncertainties in Manning's n-values, inadequate review, lost data files, and insufficient and inadequately described high-water marks. These floods also highlight the extreme difficulty in making indirect discharge measurements following extraordinary floods. Significantly, none of the indirect measurements are rated better than fair, which indicates the need to improve methodology to estimate peak discharge. Highly unsteady flow and resulting transient hydraulic phenomena, two-dimensional flow patterns, debris flows at streamflow-gaging stations, and the possibility of disconnected flow surfaces are examples of unresolved problems not well handled by current indirect discharge methodology.
On the basis of a comprehensive review of 50,000 annual peak discharges and miscellaneous floods in California, problems with individual flood peak discharges would be expected to require a revision of discharge or rating curves at a rate no greater than about 0.10 percent of all floods. Many extraordinary floods create complex flow patterns and processes that cannot be adequately documented with quasi-steady, uniform one-dimensional analyses. These floods are most accura
Grand Forks - East Grand Forks Urban Water Resources Study. Flood Control Appendix.
1981-07-01
(Reach 4) is served by an extensive network of roads and railroads. U.S. Highway 2, Demers Avenue, and Minnesota Avenue provide easy access to ... their current focus of employment and social activity. It would require the construction of a new transportation and utility network at immense local ... [Table residue: (1) See figure 4. (2) Outside study area; not to be developed. Table 2 - Estimated peak runoff, 10-year frequency: peak flow under existing and future conditions.]
Kirkham, Amy A; Pauhl, Katherine E; Elliott, Robyn M; Scott, Jen A; Doria, Silvana C; Davidson, Hanan K; Neil-Sztramko, Sarah E; Campbell, Kristin L; Camp, Pat G
2015-01-01
To determine the utility of equations that use the 6-minute walk test (6MWT) results to estimate peak oxygen uptake (VO2) and peak work rate with chronic obstructive pulmonary disease (COPD) patients in a clinical setting. This study included a systematic review to identify published equations estimating peak VO2 and peak work rate in watts in COPD patients and a retrospective chart review of data from a hospital-based pulmonary rehabilitation program. The following variables were abstracted from the records of 42 consecutively enrolled COPD patients: measured peak VO2 and peak work rate achieved during a cycle ergometer cardiopulmonary exercise test, 6MWT distance, age, sex, weight, height, forced expiratory volume in 1 second, forced vital capacity, and lung diffusion capacity. Estimated peak VO2 and peak work rate were estimated from 6MWT distance using published equations. The error associated with using estimated peak VO2 or peak work rate to prescribe aerobic exercise intensities of 60% and 80% was calculated. Eleven equations from 6 studies were identified. Agreement between estimated and measured values was poor to moderate (intraclass correlation coefficients = 0.11-0.63). The error associated with using estimated peak VO2 or peak work rate to prescribe exercise intensities of 60% and 80% of measured values ranged from mean differences of 12 to 35 and 16 to 47 percentage points, respectively. There is poor to moderate agreement between measured peak VO2 and peak work rate and estimations from equations that use 6MWT distance, and the use of the estimated values for prescription of aerobic exercise intensity would result in large error. Equations estimating peak VO2 and peak work rate are of low utility for prescribing exercise intensity in pulmonary rehabilitation programs.
Plasma bullet current measurements in a free-stream helium capillary jet
NASA Astrophysics Data System (ADS)
Oh, Jun-Seok; Walsh, James L.; Bradley, James W.
2012-06-01
A commercial current monitor has been used to measure the current associated with plasma bullets created in both the positive and negative half cycles of the sinusoidal driving voltage sustaining a plasma jet. The maximum values of the positive bullet current are typically ~750 µA and persist for 10 µs, while the peaks in the negative current of several hundred µA are broad, persisting for about 40 µs. From the time delay of the current peaks with increasing distance from the jet nozzle, an average bullet propagation speed has been measured; the positive and negative bullets travel at 17.5 km s^-1 and 3.9 km s^-1, respectively. The net space charge associated with the bullet(s) has also been calculated; the positive and negative bullets contain a similar net charge of the order of 10^-9 C measured at all monitor positions, with estimated charged particle densities n_b of ~10^10-10^11 cm^-3 in the bullet.
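Two of the quoted quantities are easy to reconstruct from the measurements described: the bullet speed is the slope of monitor position versus peak-arrival time, and the net charge is the time integral of the bullet current. The toy waveform and positions below are chosen only to reproduce the order of magnitude of the reported values; they are not the paper's data.

```python
import numpy as np

# (1) Bullet speed from a linear fit of position vs. peak-arrival time (made-up points)
positions_mm = np.array([10.0, 20.0, 30.0, 40.0])
peak_times_us = np.array([0.60, 1.17, 1.74, 2.31])
speed = np.polyfit(peak_times_us * 1e-6, positions_mm * 1e-3, 1)[0]   # slope, m/s
print(f"bullet speed ~ {speed / 1e3:.1f} km/s")

# (2) Net charge as the time integral of a ~750 uA, few-microsecond current pulse
t = np.linspace(0, 10e-6, 1000)
current = 750e-6 * np.exp(-0.5 * ((t - 3e-6) / 1.5e-6) ** 2)
charge = np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(t))      # trapezoid rule
print(f"net charge ~ {charge:.1e} C")
```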
Maximum current density and beam brightness achievable by laser-driven electron sources
NASA Astrophysics Data System (ADS)
Filippetto, D.; Musumeci, P.; Zolotorev, M.; Stupakov, G.
2014-02-01
This paper discusses the extension of the Child-Langmuir law for the maximum achievable current density in electron guns to different electron beam aspect ratios. Using a simple model, we derive quantitative formulas in good agreement with simulation codes. The new scaling laws for the peak current density of temporally long and transversely narrow initial beam distributions can be used to estimate the maximum beam brightness and suggest new paths for injector optimization.
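For reference, the classical one-dimensional Child-Langmuir limit that the paper generalizes is J = (4 eps0 / 9) sqrt(2e/m) V^(3/2) / d^2; the sketch below evaluates it for an illustrative gun voltage and gap, without the aspect-ratio corrections derived in the paper.

```python
import numpy as np

EPS0 = 8.854e-12       # F/m
E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg

def child_langmuir_current_density(voltage_v, gap_m):
    """Classical 1-D space-charge-limited current density (A/m^2)."""
    return (4.0 * EPS0 / 9.0) * np.sqrt(2.0 * E_CHARGE / M_E) * voltage_v ** 1.5 / gap_m ** 2

# Example: 100 kV across a 1 cm gap (illustrative values)
print(f"J_CL ~ {child_langmuir_current_density(100e3, 0.01) / 1e4:.1f} A/cm^2")
```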
Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan
2014-01-01
Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry coupled with stable isotope standards (SIS) has been used to quantify native peptides. This MALDI-TOF quantification approach has difficulty with samples containing peptides whose ion currents appear in overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. For studying the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-(2-10), and Angiotensin-(1-9) and their associated SIS peptides were used. The findings suggest the Gaussian mixture method has characteristics similar to the two compared methods for estimating the quantity of isolated isotopic clusters of single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation), with estimation errors similar to those calculated using the peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior compared to the existing methods of peptide quantification and is capable of quantifying overlapping (convolved) peptides within the acceptable margin of error.
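A toy version of the Gaussian-mixture idea: model each isotopic cluster as Gaussians at unit m/z spacing with a fixed abundance pattern, fit the native and labeled cluster amplitudes jointly, and take their ratio. The isotope pattern, masses, and label offset below are hypothetical, not the renin-angiotensin data.

```python
import numpy as np
from scipy.optimize import curve_fit

iso = [1.0, 0.55, 0.2, 0.05]   # assumed relative isotope abundances (hypothetical)

def cluster(mz, amp, mono_mz, width, abundances=iso):
    """Isotopic cluster modelled as a Gaussian mixture at unit m/z spacing."""
    return amp * sum(a * np.exp(-0.5 * ((mz - (mono_mz + k)) / width) ** 2)
                     for k, a in enumerate(abundances))

def model(mz, a_native, a_labeled, width):
    # Native cluster plus a +6 Da labeled internal standard (hypothetical masses)
    return cluster(mz, a_native, 1046.5, width) + cluster(mz, a_labeled, 1052.5, width)

mz = np.linspace(1042, 1060, 1800)
rng = np.random.default_rng(5)
observed = model(mz, 1.0, 0.48, 0.12) + rng.normal(0, 0.01, mz.size)

(a_nat, a_sis, _), _ = curve_fit(model, mz, observed, p0=[0.8, 0.4, 0.1])
print(f"native/labeled ratio ~ {a_nat / a_sis:.2f}")
```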
Mays, Ryan J.; Boér, Nicholas F.; Mealey, Lisa M.; Kim, Kevin H.; Goss, Fredric L.
2015-01-01
This investigation compared estimated and predicted peak oxygen consumption (VO2peak) and maximal heart rate (HRmax) among the treadmill, cycle ergometer and elliptical ergometer. Seventeen women (mean ± SE: 21.9 ± .3 yrs) exercised to exhaustion on all modalities. ACSM metabolic equations were used to estimate VO2peak. Digital displays on the elliptical ergometer were used to estimate VO2peak. Two individual linear regression methods were used to predict VO2peak: 1) two steady state heart rate (HR) responses up to 85% of age-predicted HRmax, and 2) multiple steady state/non-steady state HR responses up to 85% of age-predicted HRmax. Estimated VO2peak for the treadmill (46.3 ± 1.3 ml · kg−1 · min−1) and the elliptical ergometer (44.4 ± 1.0 ml · kg−1 · min−1) did not differ. The cycle ergometer estimated VO2peak (36.5 ± 1.0 ml · kg−1 · min−1) was lower (p < .001) than the estimated VO2peak values for the treadmill and elliptical ergometer. Elliptical ergometer VO2peak predicted from steady state (51.4 ± .8 ml · kg−1 · min−1) and steady state/non-steady state (50.3 ± 2.0 ml · kg−1 · min−1) models were higher than estimated elliptical ergometer VO2peak, p < .01. HRmax and estimates of VO2peak were similar between the treadmill and elliptical ergometer, thus cross-modal exercise prescriptions may be generated. The use of digital display estimates of submaximal oxygen uptake for the elliptical ergometer may not be an accurate method for predicting VO2peak. Health-fitness professionals should use caution when utilizing submaximal elliptical ergometer digital display estimates to predict VO2peak. PMID:20393357
Peak-flow characteristics of Virginia streams
Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute
2011-01-01
Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
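The sketch below shows the core station-level computation in simplified form: fit a Log-Pearson Type III distribution to the logarithms of annual peaks by the method of moments and read off flows at selected annual exceedance probabilities. The peak data are made up, and no regional skew weighting or low-outlier screening is applied.

```python
# Minimal sketch of a station Log-Pearson Type III fit (method of moments on log10 peaks),
# returning flows for selected annual exceedance probabilities; peak data are hypothetical.
import numpy as np
from scipy import stats

annual_peaks = np.array([3200, 4100, 2500, 8900, 5600, 3900, 12000,
                         2800, 6700, 4400, 7800, 3100, 5100, 9600])  # cfs, made up
logq = np.log10(annual_peaks)
skew = stats.skew(logq, bias=False)          # station skew only (no regional weighting here)

aeps = [0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, 0.002]
for aep in aeps:
    # Pearson Type III quantile of the log-flows at non-exceedance probability 1 - AEP
    q_log = stats.pearson3.ppf(1 - aep, skew, loc=logq.mean(), scale=logq.std(ddof=1))
    print(f"AEP {aep:>6}: {10**q_log:,.0f} cfs")
```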
Lewis, Jason M.
2010-01-01
Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable for watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
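A minimal sketch of the regression step follows. The report uses generalized least squares; for brevity the example fits an ordinary least squares log-linear model of a 100-year peak on the same three basin characteristics, with made-up station values and far too few stations for a real analysis.

```python
# Sketch of a regional peak-flow regression in log space. The report uses generalized
# least squares; ordinary least squares is shown here for brevity, with made-up data.
import numpy as np

# Hypothetical station data: 100-year peak (cfs), drainage area (mi^2),
# mean annual precipitation (in), main-channel slope (ft/mi)
q100   = np.array([15000, 4200, 33000, 9800, 21000])
area   = np.array([310, 45, 980, 160, 520])
precip = np.array([34, 40, 30, 38, 33])
slope  = np.array([8.1, 22.0, 4.5, 13.0, 6.7])

X = np.column_stack([np.ones(area.size), np.log10(area), np.log10(precip), np.log10(slope)])
coef, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)
a, b, c, d = coef
print(f"Q100 ~ {10**a:.3g} * A^{b:.2f} * P^{c:.2f} * S^{d:.2f}")
```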
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Rust, W. D.
1993-01-01
Velocities, optical risetimes, and transmission line model peak currents for seven natural positive return strokes are reported. The average 2D positive return stroke velocity for channel segments of less than 500 m in length starting near the base of the channel is 0.8 +/- 0.3 x 10^8 m/s, which is slower than the present corresponding average velocity for natural negative first return strokes of 1.7 +/- 0.7 x 10^8 m/s. It is inferred that positive stroke peak currents in the literature, which assume the same velocity as negative strokes, are low by a factor of 2. The average 2D positive return stroke velocity for channel segments of greater than 500 m starting near the base of the channel is 0.9 +/- 0.4 x 10^8 m/s. The corresponding average velocity for the present natural negative first strokes is 1.2 +/- 0.6 x 10^8 m/s. No significant velocity change with height is found for positive return strokes.
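For context, the sketch below uses the standard transmission-line model far-field relation (peak field proportional to return-stroke speed times peak current divided by range) to show why halving the assumed speed roughly doubles the inferred peak current. The field value and range are hypothetical, and this is only an illustration of the scaling, not the authors' processing.

```python
# Sketch of the transmission-line model relation used to infer peak current from a
# measured radiation field: E_peak = mu0 * v * I_peak / (2*pi*D), so I scales as 1/v.
import math

MU0 = 4e-7 * math.pi          # permeability of free space, H/m

def tlm_peak_current(e_peak, distance_m, velocity_m_per_s):
    """Peak current (A) from peak radiation field (V/m) at range D for return-stroke speed v."""
    return 2 * math.pi * distance_m * e_peak / (MU0 * velocity_m_per_s)

e_peak, dist = 5.0, 100e3      # hypothetical 5 V/m field measured at 100 km
i_neg_speed = tlm_peak_current(e_peak, dist, 1.7e8)   # assuming the negative-stroke speed
i_pos_speed = tlm_peak_current(e_peak, dist, 0.8e8)   # using the measured positive speed
print(i_pos_speed / i_neg_speed)   # ~2: currents inferred with the faster speed are low by ~2x
```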
Three years of lightning impulse charge moment change measurements in the United States
NASA Astrophysics Data System (ADS)
Cummer, Steven A.; Lyons, Walter A.; Stanley, Mark A.
2013-06-01
We report and analyze 3 years of lightning impulse charge moment change (iCMC) measurements obtained from an automated, real time lightning charge moment change network (CMCN). The CMCN combines U.S. National Lightning Detection Network (NLDN) lightning event geolocations with extremely low frequency (≲1 kHz) data from two stations to provide iCMC measurements across the entire United States. Almost 14 million lightning events were measured in the 3 year period. We present the statistical distributions of iCMC versus polarity and NLDN-measured peak current, including corrections for the detection efficiency of the CMCN versus peak current. We find a broad distribution of iCMC for a given peak current, implying that these parameters are at best only weakly correlated. Curiously, the mean iCMC does not monotonically increase with peak current, and in fact, drops for positive CG strokes above +150 kA. For all positive strokes, there is a boundary near 20 C km that separates seemingly distinct populations of high and low iCMC strokes. We also explore the geographic distribution of high iCMC lightning strokes. High iCMC positive strokes occur predominantly in the northern midwest portion of the U.S., with a secondary peak over the gulf stream region just off the U.S. east coast. High iCMC negative strokes are also clustered in the midwest, although somewhat south of most of the high iCMC positive strokes. This is a region far from the locations of maximum occurrence of high peak current negative strokes. Based on assumed iCMC thresholds for sprite production, we estimate that approximately 35,000 positive polarity and 350 negative polarity sprites occur per year over the U.S. land and near-coastal areas. Among other applications, this network is useful for the nowcasting of sprite-producing storms and storm regions.
NASA Astrophysics Data System (ADS)
Faudot, E.; Heuraux, S.; Colas, L.
2005-09-01
Understanding DC potential generation in front of ICRF antennas is crucial for long pulse high RF power systems. To reach this goal, near RF parallel electric fields have to be computed in 3D and integrated along open magnetic field lines to yield a 2D RF potential map in a transverse plane. DC potentials are produced by sheath rectification of these RF potentials. As RF potentials are spatially inhomogeneous, transverse polarization currents are created, modifying the RF and DC maps. Such modifications are quantified on a 'test map' having initially a Gaussian shape. Assuming that the map remains Gaussian near its summit, the time behavior of the peak can be estimated analytically in the presence of polarization currents as a function of its width r0 and amplitude φ0 (normalized to a characteristic length for transverse transport and to the local temperature). A 'peaking factor' is built from the DC peak potential normalized to φ0, and validated with a 2D fluid code and a 2D PIC code (XOOPIC). In an unexpected way, transverse currents can increase this factor. Realistic situations of a Tore Supra antenna are also studied, with self-consistent near fields provided by the ICANT code. Basic processes will be detailed and an evaluation of the 'peaking factor' for ITER will be presented for a given configuration.
Flooding and sedimentation in Wheeling Creek basin, Belmont County, Ohio
Kolva, J.R.; Koltun, G.F.
1987-01-01
The Wheeling Creek basin, which is located primarily in Belmont County, Ohio, experienced three damaging floods and four less severe floods during the 29-month period from February 1979 through June 1981. Residents of the basin became concerned about factors that could have affected the severity and frequency of out-of-bank floods. In response to those concerns, the U.S. Geological Survey, in cooperation with the Ohio Department of Natural Resources, undertook a study to estimate peak discharges and recurrence intervals for the seven floods of interest, provide information on current and historical mining-related stream-channel fill or scour, and examine storm-period subbasin contributions to the sediment load in Wheeling Creek. Streamflow data for adjacent basins, rainfall data, and, in two cases, flood-profile data were used in conjunction with streamflow data subsequently collected on Wheeling Creek to provide estimates of peak discharge for the seven floods that occurred from February 1979 through June 1981. Estimates of recurrence intervals were assigned to the peak discharges on the basis of regional regression equations that relate selected basin characteristics to peak discharge with fixed recurrence intervals. These estimates indicate that a statistically unusual number of floods with recurrence intervals of 2 years or more occurred within that time period. Three cross sections located on Wheeling Creek and four located on tributaries were established and surveyed quarterly for approximately 2 years. No evidence of appreciable stream-channel fill or scour was observed at any of the cross sections, although minor profile changes were apparent at some locations. Attempts were made to obtain historical cross-section profile data for comparison with current cross-section profiles; however, no usable data were found. Excavations of stream-bottom materials were made near the three main-stem cross-section locations and near the mouth of Jug Run. The bottom materials were examined for evidence of recently deposited sediments of mining-related origin. The only evidence of appreciable mining-related sediment deposition was found at Jug Run, and, to a lesser extent, at one main-stem site.
Velocity spectrum for the Iranian plateau
NASA Astrophysics Data System (ADS)
Bastami, Morteza; Soghrat, M. R.
2018-01-01
Peak ground acceleration (PGA) and spectral acceleration values have been proposed in most building codes/guidelines, unlike spectral velocity (SV) and peak ground velocity (PGV). Recent studies have demonstrated the importance of spectral velocity and peak ground velocity in the design of long period structures (e.g., pipelines, tunnels, tanks, and high-rise buildings) and in the evaluation of seismic vulnerability of underground structures. The current study was undertaken to develop a velocity spectrum and to estimate PGV. In order to determine these parameters, 398 three-component accelerograms recorded by the Building and Housing Research Center (BHRC) were used. The moment magnitude (Mw) in the selected database ranged from 4.1 to 7.3, and the events occurred after 1977. In the database, the average shear-wave velocity at 0 to 30 m in depth (Vs30) was available for only 217 records; thus, the site class for the remaining records was estimated using empirical methods. Because of the importance of the velocity spectrum at low frequencies, a signal-to-noise ratio of 2 was chosen for determining the low- and high-frequency limits, in order to include a wider range of frequency content. This value can produce conservative results. After estimation of the shape of the velocity design spectrum, the PGV was also estimated for the region under study by finding the correlation between PGV and spectral acceleration at a period of 1 s.
Approximation of wave action flux velocity in strongly sheared mean flows
NASA Astrophysics Data System (ADS)
Banihashemi, Saeideh; Kirby, James T.; Dong, Zhifei
2017-08-01
Spectral wave models based on the wave action equation typically use a theoretical framework based on a depth-uniform current to account for current effects on waves. In the real world, however, currents often vary over depth. Several recent studies have made use of a depth-weighted current Ũ due to [Skop, R. A., 1987. Approximate dispersion relation for wave-current interactions. J. Waterway, Port, Coastal, and Ocean Eng. 113, 187-195.] or [Kirby, J. T., Chen, T., 1989. Surface waves on vertically sheared flows: approximate dispersion relations. J. Geophys. Res. 94, 1013-1027.] in order to account for the effect of vertical current shear. Use of the depth-weighted velocity, which is a function of wavenumber (or frequency and direction), has been further simplified in recent applications by only utilizing a weighted current based on the spectral peak wavenumber. These applications typically do not take into account the dependence of Ũ on the wavenumber k, and they erroneously identify Ũ as the proper choice for the current velocity in the wave action equation. Here, we derive a corrected expression for the current component of the group velocity. We demonstrate its consistency using analytic results for a current with constant vorticity, and numerical results for a measured, strongly sheared current profile obtained in the Columbia River. The effect of choosing a single value for current velocity based on the peak wave frequency is examined, and we suggest an alternate strategy, involving a Taylor series expansion about the peak frequency, which should significantly extend the range of accuracy of current estimates available to the wave model with minimal additional programming and data transfer.
Characteristics of the April 2007 Flood at 10 Streamflow-Gaging Stations in Massachusetts
Zarriello, Phillip J.; Carlson, Carl S.
2009-01-01
A large 'nor'easter' storm on April 15-18, 2007, brought heavy rains to the southern New England region that, coupled with normal seasonal high flows and associated wet soil-moisture conditions, caused extensive flooding in many parts of Massachusetts and neighboring states. To characterize the magnitude of the April 2007 flood, a peak-flow frequency analysis was undertaken at 10 selected streamflow-gaging stations in Massachusetts to determine the magnitude of flood flows at 5-, 10-, 25-, 50-, 100-, 200-, and 500-year return intervals. The magnitudes of flood flows at various return intervals were determined from the logarithms of the annual peaks fit to a Pearson Type III probability distribution. Analysis included augmenting the station record with longer-term records from one or more nearby stations to provide a common period of comparison that includes notable floods in 1936, 1938, and 1955. The April 2007 peak flow was among the highest recorded or estimated since 1936, often ranking between the 3rd and 5th highest peak for that period. In general, the peak-flow frequency analysis indicates the April 2007 peak flow has an estimated return interval between 25 and 50 years; at stations in the northeastern and central areas of the state, the storm was less severe, resulting in flows with return intervals of about 5 and 10 years, respectively. At Merrimack River at Lowell, the April 2007 peak flow approached a 100-year return interval that was computed from post-flood-control records and the 1936 and 1938 peak flows adjusted for flood control. In general, the magnitude of flood flow for a given return interval computed from the streamflow-gaging station period of record was greater than those used to calculate flood profiles in various community flood-insurance studies. In addition, the magnitude of the updated flood flow and the current (2008) stage-discharge relation at a given streamflow-gaging station often produced a flood stage that was considerably different than the flood stage indicated in the flood-insurance study flood profile at that station. Equations for estimating the flow magnitudes for 5-, 10-, 25-, 50-, 100-, 200-, and 500-year floods were developed from the relation of the magnitude of flood flows to drainage area calculated from the six streamflow-gaging stations with the longest unaltered record. These equations produced a more conservative estimate of flood flows (higher discharges) than the existing regional equations for estimating flood flows at ungaged rivers in Massachusetts. Large differences in the magnitude of flood flows for various return intervals determined in this study compared to results from existing regional equations and flood-insurance studies indicate a need for updating regional analyses and equations for estimating the expected magnitude of flood flows in Massachusetts.
Characterization of amine-functionalized electrode for aqueous carbon dioxide (CO2) direct detection
NASA Astrophysics Data System (ADS)
Sato, Hiroshi
2017-03-01
In this study, the fabrication of a sensor electrode co-modified with amino groups and ferrocenes, and the electrochemical detection of carbon dioxide (CO2) in saline solution, are reported. Electrochemical detection of CO2 was carried out using cyclic voltammetry in saline solution containing sodium bicarbonate as the CO2 source. Oxidation and reduction peak current intensities computed from the cyclic voltammograms varied as a function of the concentration of CO2 molecules. The calibration curve was obtained by plotting oxidation peak current intensities as a function of CO2 concentration. The sensor electrode prepared in this study can distinguish CO2 concentrations ranging from that of normal seawater up to 10 times higher. Furthermore, surface analysis was performed to clarify the CO2 detection mechanism.
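The calibration step described above amounts to a linear fit of peak current against concentration, inverted to read off unknown concentrations. The sketch below uses invented concentrations and currents, not the study's data.

```python
# Sketch of the calibration-curve step: linear fit of oxidation peak current versus
# CO2 concentration. Values are illustrative, not data from the study.
import numpy as np

co2_conc = np.array([2.0, 4.0, 8.0, 12.0, 20.0])      # mM (hypothetical)
i_peak   = np.array([1.1, 1.9, 3.6, 5.2, 8.8])         # uA (hypothetical)

slope, intercept = np.polyfit(co2_conc, i_peak, 1)
def estimate_conc(i_measured):                          # invert the calibration line
    return (i_measured - intercept) / slope

print(f"sensitivity: {slope:.2f} uA/mM; unknown sample at 4.5 uA ~ {estimate_conc(4.5):.1f} mM")
```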
Mineral resources: Reserves, peak production and the future
Meinert, Lawrence D.; Robinson, Gilpin; Nassar, Nedal
2016-01-01
The adequacy of mineral resources in light of population growth and rising standards of living has been a concern since the time of Malthus (1798), but many studies erroneously forecast impending peak production or exhaustion because they confuse reserves with “all there is”. Reserves are formally defined as a subset of resources, and even current and potential resources are only a small subset of “all there is”. Peak production or exhaustion cannot be modeled accurately from reserves. Using copper as an example, identified resources are twice as large as the amount projected to be needed through 2050. Estimates of yet-to-be-discovered copper resources are up to 40 times larger than currently identified resources, amounts that could last for many centuries. Thus, forecasts of imminent peak production due to resource exhaustion in the next 20–30 years are not valid. Short-term supply problems may arise, however, and supply-chain disruptions are possible at any time due to natural disasters (earthquakes, tsunamis, hurricanes) or political complications. Resolving these problems will require education, exploration technology development, access to prospective terrain, better recycling, and better accounting of the externalities associated with production (pollution, loss of ecosystem services, and water and energy use).
Reduced event-related current density in the anterior cingulate cortex in schizophrenia.
Mulert, C; Gallinat, J; Pascual-Marqui, R; Dorn, H; Frick, K; Schlattmann, P; Mientus, S; Herrmann, W M; Winterer, G
2001-04-01
There is good evidence from neuroanatomic postmortem and functional imaging studies that dysfunction of the anterior cingulate cortex plays a prominent role in the pathophysiology of schizophrenia. So far, no electrophysiological localization study has been performed to investigate this deficit. We investigated 18 drug-free schizophrenic patients and 25 normal subjects with an auditory choice reaction task and measured event-related activity with 19 electrodes. Estimation of the current source density distribution in Talairach space was performed with low-resolution electromagnetic tomography (LORETA). In normals, we could differentiate between an early event-related potential peak of the N1 (90-100 ms) and a later N1 peak (120-130 ms). Subsequent current-density LORETA analysis in Talairach space showed increased activity in the auditory cortex area during the first N1 peak and increased activity in the anterior cingulate gyrus during the second N1 peak. No activation difference was observed in the auditory cortex between normals and patients with schizophrenia. However, schizophrenics showed significantly less anterior cingulate gyrus activation and slowed reaction times. Our results confirm previous findings of an electrical source in the anterior cingulate and an anterior cingulate dysfunction in schizophrenics. Our data also suggest that anterior cingulate function in schizophrenics is disturbed at a relatively early time point in the information-processing stream (100-140 ms poststimulus). Copyright 2001 Academic Press.
Wijetunge, Chalini D; Saeed, Isaam; Boughton, Berin A; Roessner, Ute; Halgamuge, Saman K
2015-01-01
Mass Spectrometry (MS) is a ubiquitous analytical tool in biological research and is used to measure the mass-to-charge ratio of bio-molecules. Peak detection is the essential first step in MS data analysis. Precise estimation of peak parameters such as peak summit location and peak area is critical to identify underlying bio-molecules and to estimate their abundances accurately. We propose a new method to detect and quantify peaks in mass spectra. It uses dual-tree complex wavelet transformation along with Stein's unbiased risk estimator for spectra smoothing. Then, a new method, based on the modified Asymmetric Pseudo-Voigt (mAPV) model and hierarchical particle swarm optimization, is used for peak parameter estimation. Using simulated data, we demonstrated the benefit of using the mAPV model over Gaussian, Lorentz and Bi-Gaussian functions for MS peak modelling. The proposed mAPV model achieved the best fitting accuracy for asymmetric peaks, with lower percentage errors in peak summit location estimation, which were 0.17% to 4.46% less than those of the other models. It also outperformed the other models in peak area estimation, delivering lower percentage errors, which were about 0.7% less than its closest competitor, the Bi-Gaussian model. In addition, using data generated from a MALDI-TOF computer model, we showed that the proposed overall algorithm outperformed the existing methods mainly in terms of sensitivity. It achieved a sensitivity of 85%, compared to 77% and 71% for the two benchmark algorithms, the continuous wavelet transformation based method and Cromwell, respectively. The proposed algorithm is particularly useful for peak detection and parameter estimation in MS data with overlapping peak distributions and asymmetric peaks. The algorithm is implemented using MATLAB and the source code is freely available at http://mapv.sourceforge.net.
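To make the asymmetric-peak idea concrete, the sketch below defines a simple asymmetric pseudo-Voigt (different width on each side of the summit, with a Gaussian/Lorentzian mixing factor) and fits it to a synthetic peak. This is an assumed, simplified form for illustration; the published mAPV parameterization and the particle swarm optimizer are not reproduced here.

```python
# Illustrative sketch only: a simple asymmetric pseudo-Voigt fitted with least squares.
import numpy as np
from scipy.optimize import curve_fit

def asym_pseudo_voigt(x, amp, mu, w_left, w_right, eta):
    w = np.where(x < mu, w_left, w_right)               # side-dependent width gives asymmetry
    gauss = np.exp(-0.5 * ((x - mu) / w) ** 2)
    lorentz = 1.0 / (1.0 + ((x - mu) / w) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)

x = np.linspace(0, 20, 600)
y = asym_pseudo_voigt(x, 100, 10, 0.8, 1.6, 0.3) + np.random.normal(0, 1.0, x.size)

p0 = [80, 9.5, 1.0, 1.0, 0.5]
popt, _ = curve_fit(asym_pseudo_voigt, x, y, p0=p0)
area = np.sum(asym_pseudo_voigt(x, *popt)) * (x[1] - x[0])   # peak area from the fitted model
print("summit location:", popt[1], "area:", area)
```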
Tortorelli, Robert L.
1997-01-01
Statewide regression equations for Oklahoma were determined for estimating peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years for ungaged sites on natural unregulated streams. The most significant independent variables required to estimate peak-streamflow frequency for natural unregulated streams in Oklahoma are contributing drainage area, main-channel slope, and mean-annual precipitation. The regression equations are applicable for watersheds with drainage areas less than 2,510 square miles that are not affected by regulation from manmade works. Limitations on the use of the regression relations and the reliability of regression estimates for natural unregulated streams are discussed. Log-Pearson Type III analysis information, basin and climatic characteristics, and the peak-streamflow frequency estimates for 251 gaging stations in Oklahoma and adjacent states are listed. Techniques are presented to make a peak-streamflow frequency estimate for gaged sites on natural unregulated streams and to use this result to estimate peak-streamflow frequency at a nearby ungaged site on the same stream. For ungaged sites on urban streams, and for ungaged sites on streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow frequency. The statewide regression equations are adjusted by substituting the drainage area below the floodwater retarding structures, or the drainage area that represents the percentage of the basin that is unregulated, in the contributing drainage area parameter to obtain peak-streamflow frequency estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently have problems making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance of existing low-resolution isotope identifiers should be improvable by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
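The kernel-density step can be illustrated as follows: build an empirical joint density of (peak area, area uncertainty) from simulated spectra and evaluate it as a likelihood factor for an observed peak. The training numbers below are placeholders, not MCNP6 output, and the full Bayesian classifier is not shown.

```python
# Sketch of the kernel-density likelihood component, with invented feature values.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical simulated features for one isotope's photopeak: area and its uncertainty
sim_area = rng.normal(5000, 400, 2000)
sim_unc  = rng.normal(250, 30, 2000)
kde = gaussian_kde(np.vstack([sim_area, sim_unc]))       # joint density p(area, unc | isotope)

observed = np.array([[4800.0], [260.0]])
likelihood = kde(observed)[0]                             # one factor in the Bayesian classifier
print("likelihood of observed peak features under this isotope model:", likelihood)
```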
Bradley, D. Nathan
2013-01-01
The peak discharge of a flood can be estimated from the elevation of high-water marks near the inlet and outlet of a culvert after the flood has occurred. This type of discharge estimate is called an “indirect measurement” because it relies on evidence left behind by the flood, such as high-water marks on trees or buildings. When combined with the cross-sectional geometry of the channel upstream from the culvert and the culvert size, shape, roughness, and orientation, the high-water marks define a water-surface profile that can be used to estimate the peak discharge by using the methods described by Bodhaine (1968). This type of measurement is in contrast to a “direct” measurement of discharge made during the flood, where cross-sectional area is measured and a current meter or acoustic equipment is used to measure the water velocity. When a direct discharge measurement cannot be made at a streamgage during high flows because of logistics or safety reasons, an indirect measurement of a peak discharge is useful for defining the high-flow section of the stage-discharge relation (rating curve) at the streamgage, resulting in more accurate computation of high flows. The Culvert Analysis Program (CAP) (Fulford, 1998) is a command-line program written in Fortran for computing peak discharges and culvert rating surfaces or curves. CAP reads input data from a formatted text file and prints results to another formatted text file. Preparing and correctly formatting the input file may be time-consuming and prone to errors. This document describes the CAP graphical user interface (GUI), a modern, cross-platform, menu-driven application that prepares the CAP input file, executes the program, and helps the user interpret the output.
NASA Astrophysics Data System (ADS)
Gannon, J. L.; Birchfield, A. B.; Shetye, K. S.; Overbye, T. J.
2017-11-01
Geomagnetically induced currents (GICs) are a result of the changing magnetic fields during a geomagnetic disturbance interacting with the deep conductivity structures of the Earth. When assessing GIC hazard, it is a common practice to use layer-cake or one-dimensional conductivity models to approximate deep Earth conductivity. In this paper, we calculate the electric field and estimate GICs induced in the long lines of a realistic system model of the Pacific Northwest, using the traditional 1-D models, as well as 3-D models represented by Earthscope's Electromagnetic transfer functions. The results show that the peak electric field during a given event has considerable variation across the analysis region in the Pacific Northwest, but the 1-D physiographic approximations may accurately represent the average response of an area, although corrections are needed. Rotations caused by real deep Earth conductivity structures greatly affect the direction of the induced electric field. This effect may be just as, or more, important than peak intensity when estimating GICs induced in long bulk power system lines.
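For readers unfamiliar with the last step, the sketch below shows how a modeled geoelectric field maps onto a long transmission line: for a uniform field, the driving voltage is the dot product of the field with the line's end-to-end displacement. The field values, line geometry, and resistance are invented, and the proper network (Lehtinen-Pirjola) solution is omitted.

```python
# Sketch: driving voltage of a long line in a uniform geoelectric field, V = E_N*L_N + E_E*L_E.
def line_voltage(e_north_V_per_km, e_east_V_per_km, north_km, east_km):
    """Integral of E.dl for a uniform field, in volts."""
    return e_north_V_per_km * north_km + e_east_V_per_km * east_km

V = line_voltage(2.0, -1.0, 150.0, 80.0)   # hypothetical 2 V/km northward, -1 V/km eastward field
gic_estimate = V / 3.0                     # very rough: assumed ~3 ohm total series resistance
print(f"driving voltage {V:.0f} V, crude GIC estimate {gic_estimate:.0f} A")
```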
A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glueck, P.R.; Bahrami, K.A.
1995-12-31
The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.
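A hypothetical sketch of the idea follows: replace the I-V table lookup with linear relations that give maximum-power-point current and voltage from the measured short-circuit current and array temperature. The coefficients below are invented placeholders, not Mars Pathfinder calibration values.

```python
# Hypothetical sketch of the linear peak-power estimate (coefficients are illustrative only).
def peak_power_estimate(i_sc_amps, temp_c, k_i=0.92, v0=16.5, k_v=-0.072):
    """Estimate array peak power (W) from Isc and temperature using linear relations.
    k_i, v0, k_v are made-up calibration constants for illustration."""
    i_mp = k_i * i_sc_amps             # current at the maximum power point ~ proportional to Isc
    v_mp = v0 + k_v * temp_c           # voltage at the maximum power point falls with temperature
    return i_mp * v_mp

print(peak_power_estimate(1.8, -20.0))   # watts, for a hypothetical cold-morning condition
```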
Kennedy, Jeffrey R.; Paretti, Nicholas V.
2014-01-01
Flooding in urban areas routinely causes severe damage to property and often results in loss of life. To investigate the effect of urbanization on the magnitude and frequency of flood peaks, a flood frequency analysis was carried out using data from urbanized streamgaging stations in Phoenix and Tucson, Arizona. Flood peaks at each station were predicted using the log-Pearson Type III distribution, fitted using the expected moments algorithm and the multiple Grubbs-Beck low outlier test. The station estimates were then compared to flood peaks estimated by rural-regression equations for Arizona, and to flood peaks adjusted for urbanization using a previously developed procedure for adjusting U.S. Geological Survey rural regression peak discharges in an urban setting. Only smaller, more common flood peaks at the 50-, 20-, 10-, and 4-percent annual exceedance probabilities (AEPs) demonstrate any increase in magnitude as a result of urbanization; the 1-, 0.5-, and 0.2-percent AEP flood estimates are predicted without bias by the rural-regression equations. Percent imperviousness was determined not to account for the difference in estimated flood peaks between stations, either when adjusting the rural-regression equations or when deriving urban-regression equations to predict flood peaks directly from basin characteristics. Comparison with urban adjustment equations indicates that flood peaks are systematically overestimated if the rural-regression-estimated flood peaks are adjusted upward to account for urbanization. At nearly every streamgaging station in the analysis, adjusted rural-regression estimates were greater than the estimates derived using station data. One likely reason for the lack of increase in flood peaks with urbanization is the presence of significant stormwater retention and detention structures within the watershed used in the study.
NASA Astrophysics Data System (ADS)
Zhu, Meng-Hua; Liu, Liang-Gang; You, Zhong; Xu, Ao-Ao
2009-03-01
In this paper, a heuristic approach based on Slavic's peak searching method is employed to estimate the width of peak regions for background removal. Synthetic and experimental data are used to test this method. Using the peak regions estimated by the proposed method across the whole spectrum, we find it is simple and effective enough to be used together with the Statistics-sensitive Nonlinear Iterative Peak-Clipping (SNIP) method.
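To show what the clipping step does, the sketch below is a simplified SNIP-style background estimator whose maximum clipping window would, in the paper's scheme, be set from the estimated peak-region widths; here it is passed in directly, and the spectrum is synthetic. This is an illustration of the clipping idea, not the paper's full procedure (e.g., the usual LLS transform is omitted).

```python
# Simplified sketch of Statistics-sensitive Nonlinear Iterative Peak-Clipping (SNIP).
import numpy as np

def snip_background(spectrum, max_half_width):
    """Iteratively clip the spectrum; max_half_width should reflect the widest peak region."""
    bg = spectrum.astype(float).copy()
    n = bg.size
    for m in range(1, max_half_width + 1):
        left = np.roll(bg, m)                       # bg[i - m]
        right = np.roll(bg, -m)                     # bg[i + m]
        avg = 0.5 * (left + right)
        core = slice(m, n - m)                      # avoid wrap-around at the edges
        bg[core] = np.minimum(bg[core], avg[core])
    return bg

# Hypothetical spectrum: smooth decaying background plus two Gaussian peaks
x = np.arange(512)
spec = 200 * np.exp(-x / 300) + 80 * np.exp(-(x - 150)**2 / 50) + 50 * np.exp(-(x - 320)**2 / 120)
background = snip_background(spec, max_half_width=30)
net = spec - background                             # background-removed spectrum
```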
Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals
Hodgkins, Glenn A.; Martin, Gary R.
2003-01-01
This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.
2011-01-01
Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational method in terms of excess rainfall (the excess rational method). Both the unit hydrograph method and excess rational method are shown to provide similar estimates of peak and time of peak streamflow. The results from the two methods can be combined by using arithmetic means. A nomograph is provided that shows the respective relations between the arithmetic-mean peak and time of peak streamflow to drainage areas ranging from 10 to 640 acres. The nomograph also shows the respective relations for selected BDF ranging from undeveloped to fully developed conditions. The nomograph represents the peak streamflow for 1 inch of excess rainfall based on drainage area and BDF; the peak streamflow for design storms from the nomograph can be multiplied by the excess rainfall to estimate peak streamflow. Time of peak streamflow is readily obtained from the nomograph. Therefore, given excess rainfall values derived from watershed-loss models, which are beyond the scope of this report, the nomograph represents a method for estimating peak and time of peak streamflow for applicable watersheds in the Houston metropolitan area. Lastly, analysis of the relative influence of BDF on peak streamflow is provided, and the results indicate a 0.04 log10 cubic feet per second change of peak streamflow per positive unit of change in BDF. This relative change can be used to adjust peak streamflow from the method or other hydrologic methods for a given BDF to other BDF values; example computations are provided.
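For context, the sketch below shows the classic rational method that the excess rational method builds on: peak discharge equals the runoff coefficient times rainfall intensity times drainage area, with a unit-conversion factor of about 1.008 for cfs, in/hr, and acres. The runoff coefficient of 0.25 is the overall mean quoted above; the intensity and area are invented, and this is not the report's calibrated procedure.

```python
# Context sketch of the classic rational method, Q = C * i * A (illustrative values).
def rational_peak_cfs(c_runoff, intensity_in_per_hr, area_acres):
    """Peak discharge in cfs; 1 acre-in/hr is approximately 1.008 cfs."""
    return 1.008 * c_runoff * intensity_in_per_hr * area_acres

# Hypothetical 200-acre watershed, C = 0.25, 2 in/hr rainfall intensity
print(f"{rational_peak_cfs(0.25, 2.0, 200):.0f} cfs")
```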
Lorenz, David L.; Sanocki, Chris A.; Kocian, Matthew J.
2010-01-01
Knowledge of the peak flow of floods of a given recurrence interval is essential for regulation and planning of water resources and for design of bridges, culverts, and dams along Minnesota's rivers and streams. Statistical techniques are needed to estimate peak flow at ungaged sites because long-term streamflow records are available at relatively few places. Because of the need to have up-to-date peak-flow frequency information in order to estimate peak flows at ungaged sites, the U.S. Geological Survey (USGS) conducted a peak-flow frequency study in cooperation with the Minnesota Department of Transportation and the Minnesota Pollution Control Agency. Estimates of peak-flow magnitudes for 1.5-, 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are presented for 330 streamflow-gaging stations in Minnesota and adjacent areas in Iowa and South Dakota based on data through water year 2005. The peak-flow frequency information was subsequently used in regression analyses to develop equations relating peak flows for selected recurrence intervals to various basin and climatic characteristics. Two statistically derived techniques-regional regression equation and region of influence regression-can be used to estimate peak flow on ungaged streams smaller than 3,000 square miles in Minnesota. Regional regression equations were developed for selected recurrence intervals in each of six regions in Minnesota: A (northwestern), B (north central and east central), C (northeastern), D (west central and south central), E (southwestern), and F (southeastern). The regression equations can be used to estimate peak flows at ungaged sites. The region of influence regression technique dynamically selects streamflow-gaging stations with characteristics similar to a site of interest. Thus, the region of influence regression technique allows use of a potentially unique set of gaging stations for estimating peak flow at each site of interest. Two methods of selecting streamflow-gaging stations, similarity and proximity, can be used for the region of influence regression technique. The regional regression equation technique is the preferred technique as an estimate of peak flow in all six regions for ungaged sites. The region of influence regression technique is not appropriate for regions C, E, and F because the interrelations of some characteristics of those regions do not agree with the interrelations throughout the rest of the State. Both the similarity and proximity methods for the region of influence technique can be used in the other regions (A, B, and D) to provide additional estimates of peak flow. The peak-flow-frequency estimates and basin characteristics for selected streamflow-gaging stations and regional peak-flow regression equations are included in this report.
Focused ultrasound transducer spatial peak intensity estimation: a comparison of methods
NASA Astrophysics Data System (ADS)
Civale, John; Rivens, Ian; Shaw, Adam; ter Haar, Gail
2018-03-01
Characterisation of the spatial peak intensity at the focus of high intensity focused ultrasound transducers is difficult because of the risk of damage to hydrophone sensors at the high focal pressures generated. Hill et al (1994 Ultrasound Med. Biol. 20 259-69) provided a simple equation for estimating spatial-peak intensity for solid spherical bowl transducers using measured acoustic power and focal beamwidth. This paper demonstrates theoretically and experimentally that this expression is only strictly valid for spherical bowl transducers without a central (imaging) aperture. A hole in the centre of the transducer results in over-estimation of the peak intensity. Improved strategies for determining focal peak intensity from a measurement of total acoustic power are proposed. Four methods are compared: (i) a solid spherical bowl approximation (after Hill et al 1994 Ultrasound Med. Biol. 20 259-69), (ii) a numerical method derived from theory, (iii) a method using the measured sidelobe to focal peak pressure ratio, and (iv) a method for measuring the focal power fraction (FPF) experimentally. Spatial-peak intensities were estimated for 8 transducers at three drive power levels: low (approximately 1 W), moderate (~10 W) and high (20-70 W). The calculated intensities were compared with those derived from focal peak pressure measurements made using a calibrated hydrophone. The FPF measurement method was found to provide focal peak intensity estimates that agreed most closely (within 15%) with the hydrophone measurements, followed by the pressure ratio method (within 20%). The numerical method was found to consistently over-estimate focal peak intensity (+40% on average); however, for transducers with a central hole it was more accurate than using the solid bowl assumption (+70% over-estimation). In conclusion, the ability to make use of an automated beam plotting system, and a hydrophone with good spatial resolution, greatly facilitates characterisation of the FPF, and consequently gives improved confidence in estimating spatial peak intensity from measurement of acoustic power.
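As a rough illustration of the power-to-intensity conversion being discussed (not the Hill et al expression or the paper's FPF procedure), the sketch below assumes a Gaussian focal intensity profile, for which the peak intensity equals 4 ln2 times the focal power divided by pi times the -6 dB beamwidth squared. The power, focal power fraction, and beamwidth values are hypothetical.

```python
# Sketch: spatial-peak intensity from acoustic power and beamwidth for an assumed Gaussian focus.
import math

def peak_intensity(total_power_w, focal_power_fraction, fwhm_m):
    """I_sp = 4*ln2 * (FPF*W) / (pi * FWHM^2) for a Gaussian focal spot, in W/m^2."""
    focal_power = focal_power_fraction * total_power_w
    return 4 * math.log(2) * focal_power / (math.pi * fwhm_m ** 2)

# Hypothetical: 20 W total power, 70% of power in the focal lobe, 1.5 mm beamwidth
print(f"{peak_intensity(20.0, 0.7, 1.5e-3) / 1e4:.0f} W/cm^2")
```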
NASA Astrophysics Data System (ADS)
Rodger, Craig J.; Mac Manus, Daniel H.; Dalzell, Michael; Thomson, Alan W. P.; Clarke, Ellen; Petersen, Tanja; Clilverd, Mark A.; Divett, Tim
2017-11-01
Geomagnetically induced current (GIC) observations made in New Zealand over 14 years show induction effects associated with a rapidly varying horizontal magnetic field (dBH/dt) during geomagnetic storms. This study analyzes the GIC observations in order to estimate the impact of extreme storms as a hazard to the power system in New Zealand. Analysis is undertaken of GIC in transformer number six in Islington, Christchurch (ISL M6), which had the highest observed currents during the 6 November 2001 storm. Using previously published values of 3,000 nT/min as a representation of an extreme storm with 100 year return period, induced currents of 455 A were estimated for Islington (with the 95% confidence interval range being 155-605 A). For 200 year return periods using 5,000 nT/min, current estimates reach 755 A (confidence interval range 155-910 A). GIC measurements from the much shorter data set collected at transformer number 4 in Halfway Bush, Dunedin, (HWB T4), found induced currents to be consistently a factor of 3 higher than at Islington, suggesting equivalent extreme storm effects of 460-1,815 A (100 year return) and 460-2,720 A (200 year return). An estimate was undertaken of likely failure levels for single-phase transformers, such as HWB T4 when it failed during the 6 November 2001 geomagnetic storm, identifying that induced currents of 100 A can put such transformer types at risk of damage. Detailed modeling of the New Zealand power system is therefore required to put this regional analysis into a global context.
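The extrapolation idea can be sketched as a regression of observed transformer GIC against dBH/dt evaluated at the extreme-storm rates quoted above. The storm data below are invented, and the study's actual statistical treatment (including its confidence intervals) is not reproduced.

```python
# Sketch: linear fit of observed GIC versus dBH/dt, extrapolated to extreme-storm values.
import numpy as np

dbdt = np.array([50, 120, 300, 500, 750, 990])     # nT/min during observed storm intervals (made up)
gic  = np.array([8, 18, 46, 75, 115, 150])         # A, hypothetical peak GIC at the transformer

slope, intercept = np.polyfit(dbdt, gic, 1)        # simple linear fit through the observations
for extreme in (3000, 5000):                       # 100- and 200-year dBH/dt scenarios
    print(f"{extreme} nT/min -> ~{slope * extreme + intercept:.0f} A")
```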
Method and apparatus for current-output peak detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Geronimo, Gianluigi
2017-01-24
A method and apparatus for a current-output peak detector. A current-output peak detector circuit is disclosed and works in two phases. The peak detector circuit includes switches to switch the peak detector circuit from the first phase to the second phase upon detection of the peak voltage of an input voltage signal. The peak detector generates a current output with a high degree of accuracy in the second phase.
A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum.
Liu, Pan; Deng, Xiaoyan; Tang, Xin; Shen, Shijian
2017-05-01
This paper presents a wavelet-based Gaussian method (WGM) for peak intensity estimation in energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of a Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, which has to be known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters based on the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested via simulated and measured spectra from an energy X-ray spectrometer, and compared with some existing methods. The results show that the proposed method can directly estimate the peak intensity of EDXRF free from background information, and can also effectively distinguish overlapping peaks in an EDXRF spectrum.
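A numerical illustration of the underlying principle (not the paper's closed-form WGM relations) is sketched below: the Mexican hat wavelet coefficient of a Gaussian peak, evaluated at the known peak position, is linear in the amplitude, so the ratio of two coefficients at two scales depends only on the width. The width can therefore be recovered by root finding and the amplitude from either coefficient. Scales, peak parameters, and normalization are arbitrary choices for the demonstration.

```python
# Numerical demo: recover Gaussian (amplitude, sigma) from two wavelet coefficients at the peak.
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

def ricker(t, a):
    """Mexican hat (Ricker) wavelet at scale a (normalization constant omitted)."""
    x = t / a
    return (1 - x**2) * np.exp(-x**2 / 2) / np.sqrt(a)

def cwt_at_peak(amp, sigma, scale, half_width=60.0, n=8001):
    """CWT coefficient of amp*exp(-t^2/(2*sigma^2)) evaluated at the peak location t = 0."""
    t = np.linspace(-half_width, half_width, n)
    return trapezoid(amp * np.exp(-t**2 / (2 * sigma**2)) * ricker(t, scale), t)

# "Measured" coefficients at two scales from a peak whose parameters we pretend not to know
a1, a2 = 2.0, 5.0
w1 = cwt_at_peak(amp=7.0, sigma=1.3, scale=a1)
w2 = cwt_at_peak(amp=7.0, sigma=1.3, scale=a2)

# The coefficient is linear in amplitude, so w1/w2 depends only on sigma:
# solve for sigma by root finding, then recover the amplitude from either coefficient.
ratio_gap = lambda s: cwt_at_peak(1.0, s, a1) / cwt_at_peak(1.0, s, a2) - w1 / w2
sigma_est = brentq(ratio_gap, 0.1, 10.0)
amp_est = w1 / cwt_at_peak(1.0, sigma_est, a1)
print(sigma_est, amp_est)   # recovers approximately 1.3 and 7.0
```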
Lightning Reporting at 45th Weather Squadron: Recent Improvements
NASA Technical Reports Server (NTRS)
Finn, Frank C.; Roeder, William P.; Buchanan, Michael D.; McNamara, Todd M.; McAllenan, Michael; Winters, Katherine A.; Fitzpatrick, Michael E.; Huddleston, Lisa L.
2010-01-01
The 45th Weather Squadron (45 WS) provides daily lightning reports to space launch customers at CCAFS/KSC. These reports are provided to assess the need to inspect the electronics of satellite payloads, space launch vehicles, and ground support equipment for induced current damage from nearby lightning strokes. The 45 WS has made several improvements to the lightning reports during 2008-2009. The 4DLSS, implemented in April 2008, provides all lightning strokes as opposed to just one stroke per flash as done by the previous system. The 45 WS discovered that the peak current was being truncated to the nearest kiloampere in the database used to generate the daily lightning reports, which led to an up to 4% underestimate in the peak current for average lightning. This error was corrected, eliminating the underestimate. The 45 WS and their mission partners developed lightning location error ellipses for 99% and 95% location accuracies tailored to each individual stroke and began providing them in the spring of 2009. The new procedure provides the distance from the point of interest to the best location of the stroke (the center of the error ellipse) and the distance to the closest edge of the ellipse. This information is now included in the lightning reports, along with the peak current of the stroke. The initial method of calculating the error ellipses could only be used during normal duty hours, i.e., not during nights, weekends, or holidays. This method was improved later to provide lightning reports in near real-time, 24/7. The calculation of the distance to the closest point on the ellipse was also significantly improved later. Other improvements were also implemented. A new method to calculate the probability of any nearby lightning stroke being within any radius of any point of interest was developed and is being implemented. This may supersede the use of location error ellipses. The 45 WS is pursuing adding data from nine NLDN sensors into 4DLSS in real time. This will overcome the problem of 4DLSS missing some of the strong local strokes. This will also improve the location accuracy, reduce the size and eccentricity of the location error ellipses, and reduce the probability of nearby strokes being inside the areas of interest when few of the 4DLSS sensors are used in the stroke solution. This will not reduce 4DLSS performance when most of the 4DLSS sensors are used in the stroke solution. Finally, several possible future improvements were discussed, especially for improving the peak current estimate and the error estimate for peak current, and for upgrading the 4DLSS. Some possible approaches for both of these goals were discussed.
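One generic way to compute the "probability of a stroke within a given radius of a point of interest" is sketched below: treat the stroke's location error as a bivariate normal defined by the error ellipse and estimate the probability by Monte Carlo. This is an illustration, not the 45 WS operational algorithm; the ellipse axes here are treated as 1-sigma values and all numbers are hypothetical.

```python
# Sketch: Monte Carlo probability that a stroke lies within a radius of a point of interest.
import numpy as np

def prob_within_radius(stroke_xy, poi_xy, semi_major, semi_minor, ellipse_angle_rad,
                       radius, n=200_000, seed=1):
    """semi_major/semi_minor are 1-sigma axes of the location-error ellipse (same units as xy)."""
    rng = np.random.default_rng(seed)
    c, s = np.cos(ellipse_angle_rad), np.sin(ellipse_angle_rad)
    rot = np.array([[c, -s], [s, c]])
    cov = rot @ np.diag([semi_major**2, semi_minor**2]) @ rot.T
    samples = rng.multivariate_normal(stroke_xy, cov, size=n)
    dist = np.hypot(samples[:, 0] - poi_xy[0], samples[:, 1] - poi_xy[1])
    return np.mean(dist <= radius)

# Hypothetical stroke located 0.8 km east of a pad, with a 0.5 km x 0.2 km error ellipse
p = prob_within_radius(stroke_xy=(0.8, 0.0), poi_xy=(0.0, 0.0),
                       semi_major=0.5, semi_minor=0.2, ellipse_angle_rad=np.deg2rad(30),
                       radius=1.0)
print(f"probability the stroke was within 1 km of the pad: {p:.2f}")
```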
Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia
1996-01-01
Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result, there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated, and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms, and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals. Therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method.
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
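As a rough illustration of the first technique compared above, the rational method computes a peak discharge directly from a runoff coefficient, a design rainfall intensity, and the drainage area. The sketch below uses the customary U.S. units form with made-up input values; it is not taken from the study.

```python
def rational_peak_discharge(c_runoff, intensity_in_per_hr, area_acres):
    """Rational method, Q = C*i*A: Q in cubic feet per second when i is in
    inches per hour and A is in acres (the ~1.008 unit factor is dropped)."""
    return c_runoff * intensity_in_per_hr * area_acres

# Hypothetical urban basin: C = 0.7, 2.5 in/hr design intensity, 120 acres.
print(rational_peak_discharge(0.7, 2.5, 120))   # -> 210.0 cfs (approximately)
```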
Peak-flow frequency for tributaries of the Colorado River downstream of Austin, Texas
Asquith, William H.
1998-01-01
Peak-flow frequency for 38 stations with at least 8 years of data in natural (unregulated and nonurbanized) basins was estimated on the basis of annual peak-streamflow data through water year 1995. Peak-flow frequency represents the peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, 250, and 500 years. The peak-flow frequency and drainage basin characteristics for the stations were used to develop two sets of regression equations to estimate peak-flow frequency for tributaries of the Colorado River in the study area. One set of equations was developed for contributing drainage areas less than 32 square miles, and another set was developed for contributing drainage areas greater than 32 square miles. A procedure is presented to estimate the peak discharge at sites where both sets of equations are considered applicable. Additionally, procedures are presented to compute the 50-, 67-, and 90-percent prediction interval for any estimation from the equations.
Current responsive devices for synchronous generators
Karlicek, Robert F.
1983-01-01
A device for detecting current imbalance between phases of a polyphase alternating current generator. A detector responds to the maximum peak current in the generator, and detecting means generates an output for each phase proportional to the peak current of each phase. Comparing means generates an output when the maximum peak current exceeds the phase peak current.
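A minimal sketch of the comparison logic the patent describes, with an added tolerance threshold that is an assumption of this illustration rather than part of the claimed device:

```python
def imbalance_flags(phase_peaks, tolerance=0.05):
    """phase_peaks: per-phase peak currents (A). Returns the phases whose peak
    falls below the maximum peak by more than the tolerance fraction."""
    i_max = max(phase_peaks)
    return [k for k, i_pk in enumerate(phase_peaks)
            if i_max - i_pk > tolerance * i_max]

print(imbalance_flags([102.0, 99.5, 88.0]))  # -> [2], phase C lags the maximum
```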
Aerenhouts, Dirk
2015-01-01
A recommended field method to assess body composition in adolescent sprint athletes is currently lacking. Existing methods developed for non-athletic adolescents were not longitudinally validated and do not take maturation status into account. This longitudinal study compared two field methods, i.e., a Bio Impedance Analysis (BIA) and a skinfold based equation, with underwater densitometry to track body fat percentage relative to years from age at peak height velocity in adolescent sprint athletes. In this study, adolescent sprint athletes (34 girls, 35 boys) were measured every 6 months during 3 years (age at start = 14.8 ± 1.5yrs in girls and 14.7 ± 1.9yrs in boys). Body fat percentage was estimated in 3 different ways: 1) using BIA with the TANITA TBF 410; 2) using a skinfold based equation; 3) using underwater densitometry which was considered as the reference method. Height for age since birth was used to estimate age at peak height velocity. Cross-sectional analyses were performed using repeated measures ANOVA and Pearson correlations between measurement methods at each occasion. Data were analyzed longitudinally using a multilevel cross-classified model with the PROC Mixed procedure. In boys, compared to underwater densitometry, the skinfold based formula revealed comparable values for body fatness during the study period whereas BIA showed a different pattern leading to an overestimation of body fatness starting from 4 years after age at peak height velocity. In girls, both the skinfold based formula and BIA overestimated body fatness across the whole range of years from peak height velocity. The skinfold based method appears to give an acceptable estimation of body composition during growth as compared to underwater densitometry in male adolescent sprinters. In girls, caution is warranted when interpreting estimations of body fatness by both BIA and a skinfold based formula since both methods tend to give an overestimation. PMID:26317426
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilbanks, Matt C.; Yuter, S. E.; de Szoeke, S.
2015-09-01
Density currents (i.e., cold pools or outflows) beneath marine stratocumulus clouds are characterized using a 30-day data set of ship-based observations obtained during the 2008 Variability of American Monsoon Systems (VAMOS) Ocean-Cloud-Atmosphere-Land Study Regional Experiment (VOCALS-REx) in the southeast Pacific. An objective method identifies 71 density current fronts using an air density criterion and isolates each density current's core (peak density) and tail (dissipating) zone. Compared to front and core zones, most density current tails exhibited weaker density gradients and wind anomalies elongated about the axis of the mean wind. The mean cloud-level advection relative to the surface layer wind (1.9 m s⁻¹) nearly matches the mean density current propagation speed (1.8 m s⁻¹). The similarity in speeds allows drizzle cells to deposit tails in their wakes. Based on high-resolution scanning Doppler lidar data, prefrontal updrafts had a mean intensity of 0.91 m s⁻¹, reached an average altitude of 800 m, and were often surmounted by low-lying shelf clouds not connected to the overlying stratocumulus cloud. Nearly 90% of density currents were identified when C-band radar estimated 30-km diameter areal average rain rates exceeded 1 mm d⁻¹. Rather than peaking when rain rates are highest overnight, density current occurrence peaks between 0600 and 0800 local solar time when enhanced local drizzle co-occurs with shallow subcloud dry and stable layers. The dry layers may contribute to density current formation by enhancing subcloud evaporation of drizzle. Density currents preferentially occur in regions of open cells but also occur in regions of closed cells.
Asquith, William H.; Slade, R.M.
1999-01-01
The U.S. Geological Survey, in cooperation with the Texas Department of Transportation, has developed a computer program to estimate peak-streamflow frequency for ungaged sites in natural basins in Texas. Peak-streamflow frequency refers to the peak streamflows for recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Peak-streamflow frequency estimates are needed by planners, managers, and design engineers for flood-plain management; for objective assessment of flood risk; for cost-effective design of roads and bridges; and also for the design of culverts, dams, levees, and other flood-control structures. The program estimates peak-streamflow frequency using a site-specific approach and a multivariate generalized least-squares linear regression. A site-specific approach differs from a traditional regional regression approach by developing unique equations to estimate peak-streamflow frequency specifically for the ungaged site. The stations included in the regression are selected using an informal cluster analysis that compares the basin characteristics of the ungaged site to the basin characteristics of all the stations in the data base. The program provides several choices for selecting the stations. Selecting the stations using cluster analysis ensures that the stations included in the regression will have the most pertinent information about flooding characteristics of the ungaged site and therefore provide the basis for potentially improved peak-streamflow frequency estimation. An evaluation of the site-specific approach in estimating peak-streamflow frequency for gaged sites indicates that the site-specific approach is at least as accurate as a traditional regional regression approach.
Jurio-Iriarte, Borja; Gorostegi-Anduaga, Ilargi; Aispuru, G Rodrigo; Pérez-Asenjo, Javier; Brubaker, Peter H; Maldonado-Martín, Sara
2017-04-01
The aims of the study were to evaluate the relationship between the Modified Shuttle Walk Test (MSWT) and peak oxygen uptake (VO2peak) in overweight/obese people with primary hypertension (HTN) and to develop an equation for the MSWT to predict VO2peak. Participants (N = 256, 53.9 ± 8.1 years old) with HTN and overweight/obesity performed a cardiorespiratory exercise test to peak exertion on an upright bicycle ergometer using an incremental ramp protocol and the 15-level MSWT. The formula of Singh et al was used as a template to predict VO2peak, and a new equation was generated from the measured VO2peak-MSWT relationship in this investigation. The correlation between measured and predicted VO2peak for the Singh et al equation was moderate (r = 0.60, P < .001) with a standard error of the estimate (SEE) of 4.92 mL·kg⁻¹·min⁻¹, SEE% = 21%. The correlation between MSWT and measured VO2peak as well as for the new equation was strong (r = 0.72, P < .001) with a SEE of 4.35 mL·kg⁻¹·min⁻¹, SEE% = 19%. These results indicate that MSWT does not accurately predict functional capacity in overweight/obese people with HTN and question the validity of using this test to evaluate exercise intolerance. A more accurate determination from a new equation in the current study incorporating more variables from MSWT to estimate VO2peak has been performed but still results in substantial error. Copyright © 2017 American Society of Hypertension. Published by Elsevier Inc. All rights reserved.
Current responsive devices for synchronous generators
Karlicek, R.F.
1983-09-27
A device for detecting current imbalance between phases of a polyphase alternating current generator. A detector responds to the maximum peak current in the generator, and detecting means generates an output for each phase proportional to the peak current of each phase. Comparing means generates an output when the maximum peak current exceeds the phase peak current. 11 figs.
PICKY: a novel SVD-based NMR spectra peak picking method.
Alipanahi, Babak; Gao, Xin; Karakoc, Emre; Donaldson, Logan; Li, Ming
2009-06-15
Picking peaks from experimental NMR spectra is a key unsolved problem for automated NMR protein structure determination. Such a process is a prerequisite for resonance assignment, nuclear overhauser enhancement (NOE) distance restraint assignment, and structure calculation tasks. Manual or semi-automatic peak picking, which is currently the prominent way used in NMR labs, is tedious, time consuming and costly. We introduce new ideas, including noise-level estimation, component forming and sub-division, singular value decomposition (SVD)-based peak picking and peak pruning and refinement. PICKY is developed as an automated peak picking method. Different from the previous research on peak picking, we provide a systematic study of the proposed method. PICKY is tested on 32 real 2D and 3D spectra of eight target proteins, and achieves an average of 88% recall and 74% precision. PICKY is efficient. It takes PICKY on average 15.7 s to process an NMR spectrum. More important than these numbers, PICKY actually works in practice. We feed peak lists generated by PICKY to IPASS for resonance assignment, feed IPASS assignment to SPARTA for fragments generation, and feed SPARTA fragments to FALCON for structure calculation. This results in high-resolution structures of several proteins, for example, TM1112, at 1.25 Å. PICKY is available upon request. The peak lists of PICKY can be easily loaded by SPARKY to enable a better interactive strategy for rapid peak picking.
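The SVD idea can be sketched roughly as follows: approximate the 2D spectrum by its leading singular components and keep local maxima that rise above an estimated noise floor. This is only an illustration of the general technique, not the published PICKY algorithm; the rank, window size, and threshold below are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def svd_peak_pick(spectrum, rank=5, noise_sigma=4.0):
    """Low-rank SVD approximation of a 2D spectrum, then local maxima above a
    robust noise estimate. Illustrative only; not the published PICKY code."""
    u, s, vt = np.linalg.svd(np.asarray(spectrum, dtype=float), full_matrices=False)
    denoised = (u[:, :rank] * s[:rank]) @ vt[:rank]
    noise = 1.4826 * np.median(np.abs(denoised - np.median(denoised)))  # MAD estimate
    local_max = maximum_filter(denoised, size=3) == denoised
    rows, cols = np.where(local_max & (denoised > noise_sigma * noise))
    return list(zip(rows.tolist(), cols.tolist()))
```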
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, De-Zheng; Wang, Wen-Chun; Zhang, Shuai
2013-05-13
A room-temperature homogeneous dielectric barrier discharge plasma with high instantaneous energy efficiency is obtained by using a nanosecond pulse voltage with a 20-200 ns tunable pulse width. Increasing the voltage pulse width can lead to the generation of regular and stable multiple current peaks in each discharge sequence. When the voltage pulse width is 200 ns, more than 5 organized current peaks can be observed under 26 kV peak voltage. Investigation also shows that the organized multiple current peaks appear only in the homogeneous discharge mode. When the discharge is in filament mode, the organized multiple current peaks are replaced by chaotic filament current peaks.
Sherwood, J.M.
1986-01-01
Methods are presented for estimating peak discharges, flood volumes and hydrograph shapes of small (less than 5 sq mi) urban streams in Ohio. Examples of how to use the various regression equations and estimating techniques also are presented. Multiple-regression equations were developed for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The significant independent variables affecting peak discharge are drainage area, main-channel slope, average basin-elevation index, and basin-development factor. Standard errors of regression and prediction for the peak discharge equations range from ±37% to ±41%. An equation also was developed to estimate the flood volume of a given peak discharge. Peak discharge, drainage area, main-channel slope, and basin-development factor were found to be the significant independent variables affecting flood volumes for given peak discharges. The standard error of regression for the volume equation is ±52%. A technique is described for estimating the shape of a runoff hydrograph by applying a specific peak discharge and the estimated lagtime to a dimensionless hydrograph. An equation for estimating the lagtime of a basin was developed. Two variables--main-channel length divided by the square root of the main-channel slope and basin-development factor--have a significant effect on basin lagtime. The standard error of regression for the lagtime equation is ±48%. The data base for the study was established by collecting rainfall-runoff data at 30 basins distributed throughout several metropolitan areas of Ohio. Five to eight years of data were collected at a 5-min record interval. The USGS rainfall-runoff model A634 was calibrated for each site. The calibrated models were used in conjunction with long-term rainfall records to generate a long-term streamflow record for each site. Each annual peak-discharge record was fitted to a Log-Pearson Type III frequency curve. Multiple-regression techniques were then used to analyze the peak discharge data as a function of the basin characteristics of the 30 sites. (Author's abstract)
ESTIMATING RISK TO CALIFORNIA ENERGY INFRASTRUCTURE FROM PROJECTED CLIMATE CHANGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathaye, Jayant; Dale, Larry; Larsen, Peter
2011-06-22
This report outlines the results of a study of the impact of climate change on the energy infrastructure of California and the San Francisco Bay region, including impacts on power plant generation; transmission line and substation capacity during heat spells; wildfires near transmission lines; sea level encroachment upon power plants, substations, and natural gas facilities; and peak electrical demand. Some end-of-century impacts were projected: Expected warming will decrease gas-fired generator efficiency. The maximum statewide coincident loss is projected at 10.3 gigawatts (with current power plant infrastructure and population), an increase of 6.2 percent over current temperature-induced losses. By the end of the century, electricity demand for almost all summer days is expected to exceed the current ninetieth percentile per-capita peak load. As much as 21 percent growth is expected in ninetieth percentile peak demand (per-capita, exclusive of population growth). When generator losses are included in the demand, the ninetieth percentile peaks may increase up to 25 percent. As the climate warms, California's peak supply capacity will need to grow faster than the population. Substation capacity is projected to decrease an average of 2.7 percent. A 5°C (9°F) air temperature increase (the average increase predicted for hot days in August) will diminish the capacity of a fully-loaded transmission line by an average of 7.5 percent. The potential exposure of transmission lines to wildfire is expected to increase with time. We have identified some lines whose probability of exposure to fire is expected to increase by as much as 40 percent. Up to 25 coastal power plants and 86 substations are at risk of flooding (or partial flooding) due to sea level rise.
Surface electric fields for North America during historical geomagnetic storms
Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.
2013-01-01
To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
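The step that converts interpolated magnetic field variations into a surface electric field can be sketched with the plane-wave relation E(w) = Z(w) H(w) applied in the frequency domain. The sketch below assumes a single flat (frequency-independent) impedance, which is only a placeholder; the study uses frequency-dependent impedances for 25 physiographic regions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def surface_e_field(b_nT, dt_s=60.0, z_ohm=1e-3):
    """b_nT: one horizontal magnetic component (nT) sampled every dt_s seconds.
    Returns the orthogonal E component in V/km under a plane-wave, uniform
    impedance assumption (z_ohm is an illustrative constant, not a USGS value)."""
    b = np.asarray(b_nT, dtype=float) * 1e-9       # tesla
    B = np.fft.rfft(b)
    Z = np.full(B.shape, z_ohm, dtype=complex)     # Z(w) held constant (assumption)
    e = np.fft.irfft(Z * B / MU0, n=b.size)        # V/m
    return e * 1e3                                 # V/km
```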
Berenbrock, Charles
2003-01-01
Improved flood-frequency estimates for short-term (10 or fewer years of record) streamflow-gaging stations were needed to support instream flow studies by the U.S. Forest Service, which are focused on quantifying water rights necessary to maintain or restore productive fish habitat. Because peak-flow data for short-term gaging stations can be biased by having been collected during an unusually wet, dry, or otherwise unrepresentative period of record, the data may not represent the full range of potential floods at a site. To test whether peak-flow estimates for short-term gaging stations could be improved, the two-station comparison method was used to adjust the logarithmic mean and logarithmic standard deviation of peak flows for seven short-term gaging stations in the Salmon and Clearwater River Basins, central Idaho. Correlation coefficients determined from regression of peak flows for paired short-term and long-term (more than 10 years of record) gaging stations over a concurrent period of record indicated that the mean and standard deviation of peak flows for all short-term gaging stations would be improved. Flood-frequency estimates for seven short-term gaging stations were determined using the adjusted mean and standard deviation. The original (unadjusted) flood-frequency estimates for three of the seven short-term gaging stations differed from the adjusted estimates by less than 10 percent, probably because the data were collected during periods representing the full range of peak flows. Unadjusted flood-frequency estimates for four short-term gaging stations differed from the adjusted estimates by more than 10 percent; unadjusted estimates for Little Slate Creek and Salmon River near Obsidian differed from adjusted estimates by nearly 30 percent. These large differences probably are attributable to unrepresentative periods of peak-flow data collection.
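A hedged sketch of a two-station-style adjustment, in the spirit of the method described but not the exact USGS procedure: the short record's log-space mean and variance are shifted using its regression against a correlated long-term station over the concurrent period.

```python
import numpy as np

def adjust_short_record(short_peaks, long_concurrent, long_full):
    """Log-space mean and standard deviation of the short record, adjusted with
    the regression slope against a long-term station (concurrent years) and
    that station's full period of record."""
    y = np.log10(np.asarray(short_peaks, dtype=float))
    x = np.log10(np.asarray(long_concurrent, dtype=float))
    xl = np.log10(np.asarray(long_full, dtype=float))
    r = np.corrcoef(x, y)[0, 1]
    b = r * y.std(ddof=1) / x.std(ddof=1)                      # regression slope
    mean_adj = y.mean() + b * (xl.mean() - x.mean())
    var_adj = y.var(ddof=1) + b**2 * (xl.var(ddof=1) - x.var(ddof=1))
    return mean_adj, float(np.sqrt(max(var_adj, 0.0)))
```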
Stinnett, Jacob; Sullivan, Clair J.; Xiong, Hao
2017-03-02
Low-resolution isotope identifiers are widely deployed for nuclear security purposes, but these detectors currently demonstrate problems in making correct identifications in many typical usage scenarios. While there are many hardware alternatives and improvements that can be made, performance on existing low-resolution isotope identifiers should be able to be improved by developing new identification algorithms. We have developed a wavelet-based peak extraction algorithm and an implementation of a Bayesian classifier for automated peak-based identification. The peak extraction algorithm has been extended to compute uncertainties in the peak area calculations. To build empirical joint probability distributions of the peak areas and uncertainties, a large set of spectra were simulated in MCNP6 and processed with the wavelet-based feature extraction algorithm. Kernel density estimation was then used to create a new component of the likelihood function in the Bayesian classifier. Furthermore, identification performance is demonstrated on a variety of real low-resolution spectra, including Category I quantities of special nuclear material.
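The kernel-density step can be illustrated as follows: an empirical joint density of peak area and area uncertainty, built from simulated spectra, serves as a likelihood term for the classifier. The data and names below are placeholders, not the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sim_areas = rng.normal(1000.0, 120.0, 5000)                    # simulated peak areas (placeholder)
sim_uncerts = np.sqrt(sim_areas) * rng.normal(1.0, 0.1, 5000)  # paired uncertainties (placeholder)
kde = gaussian_kde(np.vstack([sim_areas, sim_uncerts]))        # empirical joint density

def log_likelihood(area, uncert):
    """Log-density of an observed (area, uncertainty) pair under the simulated joint PDF."""
    return float(np.log(kde([[area], [uncert]])[0]))

print(log_likelihood(980.0, 31.0))
```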
Haydon, D A; Urban, B W
1983-01-01
The effects of several n-alkanols and n-alkyl oxyethylene alcohols, methyl octanoate, glycerol 1-monooctanoate and dioctanoyl phosphatidylcholine on the ionic currents and electrical capacity of the squid giant axon membrane have been examined. The peak inward current in voltage-clamped axons was reduced reversibly by each substance. For n-pentanol to n-decanol the concentrations required to suppress the peak inward current by 50% were determined. From these data, it was estimated that the standard free energy per CH2 for adsorption to the site of action was -3.04 kJ mol⁻¹, as compared with -3.11 kJ mol⁻¹ for adsorption into phospholipid bilayers or an n-alkane/aqueous solution interface. The membrane capacity at 100 kHz was not greatly affected by any of the test substances at concentrations which reduced the inward current by 50%. Na currents under voltage clamp were recorded in intracellularly perfused axons before, during and sometimes after exposure to the test substances and the records were fitted with equations similar to those proposed by Hodgkin & Huxley (1952). Shifts in the curves of the steady-state activation and inactivation parameters (m∞ and h∞) against membrane potential, changes in the peak heights of the activation and inactivation time constants (τm and τh) and reductions in the maximum Na conductance (gNa) have been tabulated. All of the test substances shifted the voltage dependence of the steady-state activation in the depolarizing direction and lowered the peak time constants for both activation and inactivation. The origins of these effects, and of the differences in the present results from those of the hydrocarbons (Haydon & Urban, 1983), have been discussed in terms of the physico-chemical properties of the two groups of substances and with reference to their effects on artificial membranes. PMID:6312030
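The quoted free energy per methylene group implies roughly how much less alkanol is needed for 50% suppression with each added CH2. A back-of-envelope check, assuming the suppression concentration scales exponentially with the adsorption free energy and an assumed experimental temperature near 283 K:

```python
import math

R = 8.314             # J mol-1 K-1
T = 283.0             # K; assumed experimental temperature for squid axon work
dG_per_CH2 = -3.04e3  # J mol-1, value quoted in the abstract

fold_change = math.exp(-dG_per_CH2 / (R * T))
print(f"each added CH2 lowers the 50% suppression concentration ~{fold_change:.1f}-fold")
# prints ~3.6-fold, a typical potency increment per methylene in homologous series
```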
Estimation of peak-discharge frequency of urban streams in Jefferson County, Kentucky
Martin, Gary R.; Ruhl, Kevin J.; Moore, Brian L.; Rose, Martin F.
1997-01-01
An investigation of flood-hydrograph characteristics for streams in urban Jefferson County, Kentucky, was made to obtain hydrologic information needed for water-resources management. Equations for estimating peak-discharge frequencies for ungaged streams in the county were developed by combining (1) long-term annual peak-discharge data and rainfall-runoff data collected from 1991 to 1995 in 13 urban basins and (2) long-term annual peak-discharge data in four rural basins located in hydrologically similar areas of neighboring counties. The basins ranged in size from 1.36 to 64.0 square miles. The U.S. Geological Survey Rainfall-Runoff Model (RRM) was calibrated for each of the urban basins. The calibrated models were used with long-term, historical rainfall and pan-evaporation data to simulate 79 years of annual peak-discharge data. Peak-discharge frequencies were estimated by fitting the logarithms of the annual peak discharges to a Pearson Type III frequency distribution. The simulated peak-discharge frequencies were adjusted for improved reliability by application of bias-correction factors derived from peak-discharge frequencies based on local, observed annual peak discharges. The three-parameter and the preferred seven-parameter nationwide urban-peak-discharge regression equations previously developed by USGS investigators provided biased (high) estimates for the urban basins studied. Generalized-least-squares regression procedures were used to relate peak-discharge frequency to selected basin characteristics. Regression equations were developed to estimate peak-discharge frequency by adjusting peak-discharge-frequency estimates made by use of the three-parameter nationwide urban regression equations. The regression equations are presented in equivalent forms as functions of contributing drainage area, main-channel slope, and basin development factor, which is an index for measuring the efficiency of the basin drainage system. Estimates of peak discharges for streams in the county can be made for the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals by use of the regression equations. The average standard errors of prediction of the regression equations range from ±34 to ±45 percent. The regression equations are applicable to ungaged streams in the county having a specific range of basin characteristics.
Algal cell disruption using microbubbles to localize ultrasonic energy
Krehbiel, Joel D.; Schideman, Lance C.; King, Daniel A.; Freund, Jonathan B.
2015-01-01
Microbubbles were added to an algal solution with the goal of improving cell disruption efficiency and the net energy balance for algal biofuel production. Experimental results showed that disruption increases with increasing peak rarefaction ultrasound pressure over the range studied: 1.90 to 3.07 MPa. Additionally, ultrasound cell disruption increased by up to 58% by adding microbubbles, with peak disruption occurring in the range of 10⁸ microbubbles/ml. The localization of energy in space and time provided by the bubbles improves efficiency: energy requirements for such a process were estimated to be one-fourth of the available heat of combustion of algal biomass and one-fifth of currently used cell disruption methods. This increase in energy efficiency could make microbubble enhanced ultrasound viable for bioenergy applications and is expected to integrate well with current cell harvesting methods based upon dissolved air flotation. PMID:25311188
NASA Astrophysics Data System (ADS)
Singh, Nirupama; Kumar, Pushpendra; Upadhyay, Sumant; Choudhary, Surbhi; Satsangi, Vibha R.; Dass, Sahab; Shrivastav, Rohit
2013-06-01
In the present study, ready-made graphene oxide (GO) was coated onto a conducting glass (ITO) substrate using an electrochemical deposition technique [1]. The D and G peaks in the Raman spectra, at 1346 and 1575 cm⁻¹, confirmed the presence of GO [2]. UV-visible absorption measurements showed an absorption peak at 262 nm, and Tauc plots yielded a band-gap energy of about 3.9 eV for the sample. The PEC measurements involved determination of current-voltage (I-V) characteristics, both in darkness and under illumination. A photocurrent of 1.21 mA cm⁻² at 0.5 V applied voltage (vs. saturated calomel electrode) was recorded under illumination of 150 W cm⁻² (Xenon arc lamp; Oriel, USA). The photocurrent values were further used to calculate the applied bias photon-to-current efficiency (% ABPE), which was estimated to be 0.98% at 0.5 V bias.
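For reference, the applied-bias photon-to-current efficiency quoted above is conventionally computed from the photocurrent density, the applied bias relative to the 1.23 V water-splitting potential, and the incident power density; the standard definition (the abstract does not spell out its exact normalization) is:

```latex
\mathrm{ABPE}\,(\%) \;=\; \frac{J_{ph}\,[\mathrm{mA\,cm^{-2}}]\times\bigl(1.23\,\mathrm{V} - V_{bias}\bigr)}{P_{in}\,[\mathrm{mW\,cm^{-2}}]}\times 100
```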
Battery Capacity Fading Estimation Using a Force-Based Incremental Capacity Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.
Traditionally, health monitoring techniques in lithium-ion batteries rely on voltage and current measurements. A novel method of using a mechanical rather than electrical signal in the incremental capacity analysis (ICA) method is introduced in this paper. This method derives the incremental capacity curves based on measured force (ICF) instead of voltage (ICV). The force is measured on the surface of a cell under compression in a fixture that replicates a battery pack assembly and preloading. The analysis is performed on data collected from cycling encased prismatic Lithium-ion Nickel-Manganese-Cobalt Oxide (NMC) cells. For the NMC chemistry, the ICF method can complement or replace the ICV method for the following reasons. The identified ICV peaks are centered around 40% of state of charge (SOC) while the peaks of the ICF method are centered around 70% of SOC, indicating that the ICF can be used more often because it is more likely that an electric vehicle (EV) or a plug-in hybrid electric vehicle (PHEV) will traverse the 70% SOC range than the 40% SOC. In addition, the signal-to-noise ratio (SNR) of the force signal is four times larger than that of the voltage signal using laboratory-grade sensors. The proposed ICF method is shown to achieve 0.42% accuracy in capacity estimation during a low C-rate constant current discharge. Future work will investigate the application of the capacity estimation technique under charging and operation under high C-rates by addressing the transient behavior of force so that an online methodology for capacity estimation is developed.
Battery Capacity Fading Estimation Using a Force-Based Incremental Capacity Analysis
Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; ...
2016-05-27
Traditionally, health monitoring techniques in lithium-ion batteries rely on voltage and current measurements. A novel method of using a mechanical rather than electrical signal in the incremental capacity analysis (ICA) method is introduced in this paper. This method derives the incremental capacity curves based on measured force (ICF) instead of voltage (ICV). The force is measured on the surface of a cell under compression in a fixture that replicates a battery pack assembly and preloading. The analysis is performed on data collected from cycling encased prismatic Lithium-ion Nickel-Manganese-Cobalt Oxide (NMC) cells. For the NMC chemistry, the ICF method can complement or replace the ICV method for the following reasons. The identified ICV peaks are centered around 40% of state of charge (SOC) while the peaks of the ICF method are centered around 70% of SOC, indicating that the ICF can be used more often because it is more likely that an electric vehicle (EV) or a plug-in hybrid electric vehicle (PHEV) will traverse the 70% SOC range than the 40% SOC. In addition, the signal-to-noise ratio (SNR) of the force signal is four times larger than that of the voltage signal using laboratory-grade sensors. The proposed ICF method is shown to achieve 0.42% accuracy in capacity estimation during a low C-rate constant current discharge. Future work will investigate the application of the capacity estimation technique under charging and operation under high C-rates by addressing the transient behavior of force so that an online methodology for capacity estimation is developed.
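The incremental-capacity computation itself is straightforward to sketch: differentiate capacity with respect to the chosen signal (voltage for ICV, measured force for ICF) along a constant-current discharge and look for peaks. The smoothing and sampling choices below are assumptions, not the paper's processing chain.

```python
import numpy as np

def incremental_capacity(capacity_Ah, signal, smooth_pts=25):
    """capacity_Ah and signal (cell voltage in V, or fixture force in N) sampled
    along a constant-current discharge; returns capacity and dQ/dsignal."""
    kernel = np.ones(smooth_pts) / smooth_pts
    s = np.convolve(np.asarray(signal, dtype=float), kernel, mode="same")  # light smoothing
    dq = np.gradient(np.asarray(capacity_Ah, dtype=float))
    ds = np.gradient(s)
    ds[np.abs(ds) < 1e-9] = np.nan      # avoid dividing by zero on flat plateaus
    return capacity_Ah, dq / ds         # peaks in dQ/dsignal mark phase transitions
```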
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Optional NOX Emissions Estimation Protocol for Gas-Fired Peaking Units and Oil-Fired Peaking Units E Appendix E to Part 75 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTINUOUS EMISSION...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Optional NOX Emissions Estimation Protocol for Gas-Fired Peaking Units and Oil-Fired Peaking Units E Appendix E to Part 75 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTINUOUS EMISSION...
Slade, R.M.; Asquith, W.H.
1996-01-01
About 23,000 annual peak streamflows and about 400 historical peak streamflows exist for about 950 stations in the surface-water data-collection network of Texas. These data are presented on a computer diskette along with the corresponding dates, gage heights, and information concerning the basin, and nature or cause for the flood. Also on the computer diskette is a U.S. Geological Survey computer program that estimates peak-streamflow frequency based on annual and historical peak streamflow. The program estimates peak streamflow for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals and is based on guidelines established by the Interagency Advisory Committee on Water Data. Explanations are presented for installing the program, and an example is presented with discussion of its options.
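A minimal sketch of the Bulletin-17-style fit such a program performs, using only the station skew (the published guidelines also weight skew regionally and handle historical peaks and low outliers, which this sketch omits):

```python
import numpy as np
from scipy import stats

def lp3_quantiles(annual_peaks_cfs, recurrence=(2, 5, 10, 25, 50, 100)):
    """Log-Pearson Type III quantiles for the given recurrence intervals."""
    logq = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
    skew = stats.skew(logq, bias=False)        # station skew only (no regional weighting)
    probs = [1.0 - 1.0 / t for t in recurrence]
    k = stats.pearson3.ppf(probs, skew)        # standardized frequency factors
    return {t: float(10 ** (logq.mean() + ki * logq.std(ddof=1)))
            for t, ki in zip(recurrence, k)}
```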
2011-01-01
Background: While many pandemic preparedness plans have promoted disease control effort to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods: We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) in Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results: Our model estimates that the epidemic peak of the 2009 pandemic was delayed for 21 days due to summer holiday. Decline in peak appears to be a nonlinear function of control-associated reduction in the reproduction number. Peak delay is shown to critically depend on the fraction of initially immune individuals. Conclusions: The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct population-wide serological survey as a prior requirement for estimating the time of peak. PMID:21269441
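The nonlinear dependence of the peak on control effort can be illustrated with the standard closed-form result for a simple SIR epidemic. This is a textbook relation, not the paper's specific model, and it assumes an almost fully susceptible initial population.

```python
import math

def sir_peak_prevalence(r0):
    """Peak fraction infected in a simple SIR epidemic with reproduction number r0."""
    return 1.0 - (1.0 + math.log(r0)) / r0

for r0 in (1.4, 1.7, 2.0):
    print(r0, round(sir_peak_prevalence(r0), 3))
# Lowering the effective R0 through early control flattens the peak nonlinearly,
# consistent with the nonlinear peak reduction reported above.
```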
PICKY: a novel SVD-based NMR spectra peak picking method
Alipanahi, Babak; Gao, Xin; Karakoc, Emre; Donaldson, Logan; Li, Ming
2009-01-01
Motivation: Picking peaks from experimental NMR spectra is a key unsolved problem for automated NMR protein structure determination. Such a process is a prerequisite for resonance assignment, nuclear overhauser enhancement (NOE) distance restraint assignment, and structure calculation tasks. Manual or semi-automatic peak picking, which is currently the prominent way used in NMR labs, is tedious, time consuming and costly. Results: We introduce new ideas, including noise-level estimation, component forming and sub-division, singular value decomposition (SVD)-based peak picking and peak pruning and refinement. PICKY is developed as an automated peak picking method. Different from the previous research on peak picking, we provide a systematic study of the proposed method. PICKY is tested on 32 real 2D and 3D spectra of eight target proteins, and achieves an average of 88% recall and 74% precision. PICKY is efficient. It takes PICKY on average 15.7 s to process an NMR spectrum. More important than these numbers, PICKY actually works in practice. We feed peak lists generated by PICKY to IPASS for resonance assignment, feed IPASS assignment to SPARTA for fragments generation, and feed SPARTA fragments to FALCON for structure calculation. This results in high-resolution structures of several proteins, for example, TM1112, at 1.25 Å. Availability: PICKY is available upon request. The peak lists of PICKY can be easily loaded by SPARKY to enable a better interactive strategy for rapid peak picking. Contact: mli@uwaterloo.ca PMID:19477998
Sugiura, Yoshito; Hatanaka, Yasuhiko; Arai, Tomoaki; Sakurai, Hiroaki; Kanada, Yoshikiyo
2016-04-01
We aimed to investigate whether a linear regression formula based on the relationship between joint torque and angular velocity measured using a high-speed video camera and image measurement software is effective for estimating 1 repetition maximum (1RM) and isometric peak torque in knee extension. Subjects comprised 20 healthy men (mean ± SD; age, 27.4 ± 4.9 years; height, 170.3 ± 4.4 cm; and body weight, 66.1 ± 10.9 kg). The exercise load ranged from 40% to 150% 1RM. Peak angular velocity (PAV) and peak torque were used to estimate 1RM and isometric peak torque. To elucidate the relationship between force and velocity in knee extension, the relationship between the relative proportion of 1RM (% 1RM) and PAV was examined using simple regression analysis. The concordance rate between the estimated value and actual measurement of 1RM and isometric peak torque was examined using intraclass correlation coefficients (ICCs). Reliability of the regression line of PAV and % 1RM was 0.95. The concordance rate between the actual measurement and estimated value of 1RM resulted in an ICC(2,1) of 0.93 and that of isometric peak torque had an ICC(2,1) of 0.87 and 0.86 for 6 and 3 levels of load, respectively. Our method for estimating 1RM was effective for decreasing the measurement time and reducing patients' burden. Additionally, isometric peak torque can be estimated using 3 levels of load, as we obtained the same results as those reported previously. We plan to expand the range of subjects and examine the generalizability of our results.
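The estimation idea can be sketched as a two-step procedure: fit the linear relation between relative load and peak angular velocity, then invert it to express a submaximal lift as a fraction of 1RM. The calibration numbers below are fabricated for illustration; the study derives its own regression.

```python
import numpy as np

def fit_pav_to_pct1rm(pav_deg_s, pct_1rm):
    """Fit %1RM as a linear function of peak angular velocity (deg/s)."""
    slope, intercept = np.polyfit(pav_deg_s, pct_1rm, 1)
    return slope, intercept

def estimate_1rm(load_kg, pav_deg_s, slope, intercept):
    """Invert the fit: predicted %1RM for this lift, then scale the load up."""
    pct = slope * pav_deg_s + intercept
    return load_kg / (pct / 100.0)

# Made-up calibration data: heavier relative loads move with lower peak velocity.
slope, intercept = fit_pav_to_pct1rm([400, 330, 260, 190], [40, 60, 80, 100])
print(estimate_1rm(40.0, 300.0, slope, intercept))   # estimated 1RM in kg
```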
Vozoris, N T; O'donnell, D E
2015-01-01
Whether reduced activity level and exercise intolerance precede the clinical diagnosis of cardiopulmonary disorders in smokers is not known. We examined activity level and exercise test outcomes in a young population-based sample without overt cardiopulmonary disease, differentiating by smoking history. This was a multiyear cross-sectional study using United States National Health and Nutrition Examination Survey data from 1999-2004. Self-reported activity level and incremental exercise treadmill testing were obtained on survey participants ages 20-49 years, excluding individuals with cardio-pulmonary disease. Three thousand seven hundred and one individuals completed exercise testing. Compared to never smokers, current smokers with >10 pack years reported significantly higher odds of little or no recreation, sport, or physical activity (adjusted OR 1.62; 95% CI 1.12-2.35). Mean perceived exertion ratings (Borg 6-20) at an estimated standardized workload were significantly greater among current smokers (18.3-18.6) compared to never (17.3) and former smokers (17.9) (p<0.05). There were no significant differences in the proportions of individuals across estimated peak oxygen uptake categories among the groups after adjusting for age and sex. Among former smokers, increasing duration of smoking abstinence was associated with significantly lower likelihood of low estimated peak oxygen uptake categorization (p<0.05). Among young individuals without overt cardiopulmonary disease, current smokers had reduced daily activity and higher perceived exertion ratings. Besides supporting early smoking cessation, these results set the stage for future studies that examine mechanisms of activity restriction in young smokers and the utility of measures of activity restriction in the earlier diagnosis of smoking-related diseases.
Burden of Type 2 Diabetes in Mexico: Past, Current and Future Prevalence and Incidence Rates
Meza, Rafael; Barrientos-Gutierrez, Tonatiuh; Rojas-Martinez, Rosalba; Reynoso-Noverón, Nancy; Palacio-Mejia, Lina Sofia; Lazcano-Ponce, Eduardo; Hernández-Ávila, Mauricio
2015-01-01
Introduction: Mexico diabetes prevalence has increased dramatically in recent years. However, no national incidence estimates exist, hampering the assessment of diabetes trends and precluding the development of burden of disease analyses to inform public health policy decision-making. Here we provide evidence regarding current magnitude of diabetes in Mexico and its future trends. Methods: We used data from the Mexico National Health and Nutrition Survey, and age-period-cohort models to estimate prevalence and incidence of self-reported diagnosed diabetes by age, sex, calendar-year (1960–2012), and birth-cohort (1920–1980). We project future rates under three alternative incidence scenarios using demographic projections of the Mexican population from 2010–2050 and a Multi-cohort Diabetes Markov Model. Results: Adult (ages 20+) diagnosed diabetes prevalence in Mexico increased from 7% to 8.9% from 2006 to 2012. Diabetes prevalence increases with age, peaking around ages 65–68 to then decrease. Age-specific incidence follows similar patterns, but peaks around ages 57–59. We estimate that diagnosed diabetes incidence increased exponentially during 1960–2012, roughly doubling every 10 years. Projected rates under three age-specific incidence scenarios suggest diabetes prevalence among adults (ages 20+) may reach 13.7–22.5% by 2050, affecting 15–25 million individuals, with a lifetime risk of 1 in 3 to 1 in 2. Conclusions: Diabetes prevalence in Mexico will continue to increase even if current incidence rates remain unchanged. Continued implementation of policies to reduce obesity rates, increase physical activity, and improve population diet, in tandem with diabetes surveillance and other risk control measures is paramount to substantially reduce the burden of diabetes in Mexico. PMID:26546108
Burden of type 2 diabetes in Mexico: past, current and future prevalence and incidence rates.
Meza, Rafael; Barrientos-Gutierrez, Tonatiuh; Rojas-Martinez, Rosalba; Reynoso-Noverón, Nancy; Palacio-Mejia, Lina Sofia; Lazcano-Ponce, Eduardo; Hernández-Ávila, Mauricio
2015-12-01
Mexico diabetes prevalence has increased dramatically in recent years. However, no national incidence estimates exist, hampering the assessment of diabetes trends and precluding the development of burden of disease analyses to inform public health policy decision-making. Here we provide evidence regarding current magnitude of diabetes in Mexico and its future trends. We used data from the Mexico National Health and Nutrition Survey, and age-period-cohort models to estimate prevalence and incidence of self-reported diagnosed diabetes by age, sex, calendar-year (1960-2012), and birth-cohort (1920-1980). We project future rates under three alternative incidence scenarios using demographic projections of the Mexican population from 2010-2050 and a Multi-cohort Diabetes Markov Model. Adult (ages 20+) diagnosed diabetes prevalence in Mexico increased from 7% to 8.9% from 2006 to 2012. Diabetes prevalence increases with age, peaking around ages 65-68 to then decrease. Age-specific incidence follows similar patterns, but peaks around ages 57-59. We estimate that diagnosed diabetes incidence increased exponentially during 1960-2012, roughly doubling every 10 years. Projected rates under three age-specific incidence scenarios suggest diabetes prevalence among adults (ages 20+) may reach 13.7-22.5% by 2050, affecting 15-25 million individuals, with a lifetime risk of 1 in 3 to 1 in 2. Diabetes prevalence in Mexico will continue to increase even if current incidence rates remain unchanged. Continued implementation of policies to reduce obesity rates, increase physical activity, and improve population diet, in tandem with diabetes surveillance and other risk control measures is paramount to substantially reduce the burden of diabetes in Mexico. Copyright © 2015 Elsevier Inc. All rights reserved.
North–south polarization of European electricity consumption under future warming
Wenz, Leonie; Levermann, Anders; Auffhammer, Maximilian
2017-01-01
There is growing empirical evidence that anthropogenic climate change will substantially affect the electric sector. Impacts will stem both from the supply side—through the mitigation of greenhouse gases—and from the demand side—through adaptive responses to a changing environment. Here we provide evidence of a polarization of both peak load and overall electricity consumption under future warming for the world’s third-largest electricity market—the 35 countries of Europe. We statistically estimate country-level dose–response functions between daily peak/total electricity load and ambient temperature for the period 2006–2012. After removing the impact of nontemperature confounders and normalizing the residual load data for each country, we estimate a common dose–response function, which we use to compute national electricity loads for temperatures that lie outside each country’s currently observed temperature range. To this end, we impose end-of-century climate on today’s European economies following three different greenhouse-gas concentration trajectories, ranging from ambitious climate-change mitigation—in line with the Paris agreement—to unabated climate change. We find significant increases in average daily peak load and overall electricity consumption in southern and western Europe (∼3 to ∼7% for Portugal and Spain) and significant decreases in northern Europe (∼−6 to ∼−2% for Sweden and Norway). While the projected effect on European total consumption is nearly zero, the significant polarization and seasonal shifts in peak demand and consumption have important ramifications for the location of costly peak-generating capacity, transmission infrastructure, and the design of energy-efficiency policy and storage capacity. PMID:28847939
North-south polarization of European electricity consumption under future warming.
Wenz, Leonie; Levermann, Anders; Auffhammer, Maximilian
2017-09-19
There is growing empirical evidence that anthropogenic climate change will substantially affect the electric sector. Impacts will stem both from the supply side-through the mitigation of greenhouse gases-and from the demand side-through adaptive responses to a changing environment. Here we provide evidence of a polarization of both peak load and overall electricity consumption under future warming for the world's third-largest electricity market-the 35 countries of Europe. We statistically estimate country-level dose-response functions between daily peak/total electricity load and ambient temperature for the period 2006-2012. After removing the impact of nontemperature confounders and normalizing the residual load data for each country, we estimate a common dose-response function, which we use to compute national electricity loads for temperatures that lie outside each country's currently observed temperature range. To this end, we impose end-of-century climate on today's European economies following three different greenhouse-gas concentration trajectories, ranging from ambitious climate-change mitigation-in line with the Paris agreement-to unabated climate change. We find significant increases in average daily peak load and overall electricity consumption in southern and western Europe (∼3 to ∼7% for Portugal and Spain) and significant decreases in northern Europe (∼-6 to ∼-2% for Sweden and Norway). While the projected effect on European total consumption is nearly zero, the significant polarization and seasonal shifts in peak demand and consumption have important ramifications for the location of costly peak-generating capacity, transmission infrastructure, and the design of energy-efficiency policy and storage capacity.
A unified dynamic neural field model of goal directed eye movements
NASA Astrophysics Data System (ADS)
Quinton, J. C.; Goffart, L.
2018-01-01
Primates heavily rely on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. The interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). Such an estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
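The dynamic neural field equations referred to are typically of the Amari form; a generic statement (the paper's specific internal-projection terms are not reproduced here) is:

```latex
\tau \,\frac{\partial u(x,t)}{\partial t} \;=\; -\,u(x,t) \;+\; \int w(x - x')\, f\bigl(u(x',t)\bigr)\, dx' \;+\; I(x,t)
```

Here u is the field activity, w the lateral interaction kernel, f the firing-rate nonlinearity, and I the (retinal) input; a self-sustained peak of u encodes the estimated target location.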
Variation in light intensity with height and time from subsequent lightning return strokes
NASA Technical Reports Server (NTRS)
Jordan, D. M.; Uman, M. A.
1983-01-01
Photographic measurements of relative light intensity as a function of height and time have been conducted for seven return strokes in two lightning flashes at 7.8 and 8.7 km ranges, using film which possesses an approximately constant spectral response in the 300-670 nm range. The amplitude of the initial light peak is noted to decrease exponentially with height, with a decay constant of 0.6-0.8 km. The logarithm of the peak light intensity near the ground is found to be approximately proportional to the initial peak electric field intensity, implying that the current decrease with height may be much slower than the light decrease. Absolute light intensity is presently estimated through the integration of the photographic signals from individual channel segments, in order to simulate the calibrated, all-sky photoelectric data of Guo and Krider (1982).
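The exponential decay of the initial light peak with height implies, as a worked example using the quoted decay constants:

```latex
L(z) = L(0)\, e^{-z/\lambda}, \qquad \lambda \approx 0.6\ \text{to}\ 0.8\ \mathrm{km}
\quad\Rightarrow\quad \frac{L(1\ \mathrm{km})}{L(0)} \approx e^{-1/0.8}\ \text{to}\ e^{-1/0.6} \approx 0.29\ \text{to}\ 0.19
```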
Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason
2014-01-01
Optical ranging is a problem of estimating the round-trip flight time of a phase- or amplitude-modulated optical beam that reflects off of a target. Frequency-modulated, continuous-wave (FMCW) ranging systems obtain this estimate by performing an interferometric measurement between a local frequency-modulated laser beam and a delayed copy returning from the target. The range estimate is formed by mixing the target-return field with the local reference field on a beamsplitter and detecting the resultant beat modulation. In conventional FMCW ranging, the source modulation is linear in instantaneous frequency, the reference-arm field has many more photons than the target-return field, and the time-of-flight estimate is generated by balanced difference-detection of the beamsplitter output, followed by a frequency-domain peak search. This work focused on determining the maximum-likelihood (ML) estimation algorithm when continuous-time photon-counting detectors are used. It is founded on a rigorous statistical characterization of the (random) photoelectron emission times as a function of the incident optical field, including the deleterious effects caused by dark current and dead time. These statistics enable derivation of the Cramér-Rao lower bound (CRB) on the accuracy of FMCW ranging, and derivation of the ML estimator, whose performance approaches this bound at high photon flux. The estimation algorithm was developed, and its optimality properties were shown in simulation. Experimental data show that it performs better than the conventional estimation algorithms used. The demonstrated improvement is a factor of 1.414 over frequency-domain-based estimation. If the target-interrogating photons and the local reference field photons are costed equally, the optimal allocation of photons between these two arms is to have them equally distributed. This is different than the state of the art, in which the local field is stronger than the target return. The optimal processing of the photocurrent processes at the outputs of the two detectors is to perform log-matched filtering followed by a summation and peak detection. This implies that neither difference detection, nor Fourier-domain peak detection, which are the staples of the state-of-the-art systems, is optimal when a weak local oscillator is employed.
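For context, the conventional frequency-domain estimate rests on the standard linear-FMCW relation between beat frequency and range (sweep bandwidth B over period T, round-trip delay tau = 2R/c):

```latex
f_b \;=\; \frac{B}{T}\,\tau \;=\; \frac{B}{T}\cdot\frac{2R}{c}
\quad\Longrightarrow\quad
R \;=\; \frac{c\, f_b\, T}{2B}
```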
Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS
NASA Astrophysics Data System (ADS)
Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.
2013-12-01
We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. A Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimation of carrier phase biases and ionosphere delay pre-cleans raw satellite measurements. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point-positions and resultant seismic estimates derived from the point positions to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. We have also implemented a continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed, hydrodynamic Green functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in JAVA, called 'GPS Cockpit,' which is available for download.
An In-Rush Current Suppression Technique for the Solid-State Transfer Switch System
NASA Astrophysics Data System (ADS)
Cheng, Po-Tai; Chen, Yu-Hsing
More and more utility companies provide dual power feeders as a premier service of high power quality and reliability. To take advantage of this, the solid-state transfer switch (STS) is adopted to protect the sensitive load against the voltage sag. However, the fast transfer process may cause in-rush current on the load-side transformer due to the resulting DC-offset in its magnetic flux as the load-transfer is completed. The in-rush current can reach 2∼6 p.u. and it may trigger the over-current protections on the power feeder. This paper develops a flux estimation scheme and a thyristor gating scheme based on the impulse commutation bridge STS (ICBSTS) to minimize the DC-offset on the magnetic flux. By sensing the line voltages of both feeders, the flux estimator can predict the peak transient flux linkage at the moment of load-transfer and evaluate a suitable moment for the transfer to minimize the in-rush current. Laboratory test results are presented to validate the performance of the proposed system.
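A simplified sketch of the flux-estimation idea (not the published ICBSTS gating logic): integrate the incoming feeder's line voltage to obtain the prospective flux linkage and fire the thyristors at the instant it matches the flux left behind on the transformer, so that little DC flux offset, and hence little in-rush current, results.

```python
import numpy as np

def prospective_flux(v_line, dt):
    """Running integral of the incoming feeder's line voltage (volt-seconds)."""
    return np.cumsum(np.asarray(v_line, dtype=float)) * dt

def best_transfer_index(v_new_feeder, residual_flux, dt):
    """Sample index at which firing would leave the smallest DC flux offset."""
    lam = prospective_flux(v_new_feeder, dt)
    lam = lam - lam.mean()            # crude removal of the integration constant (assumption)
    return int(np.argmin(np.abs(lam - residual_flux)))
```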
Testing the Auroral Current-Voltage Relation in Multiple Arcs
NASA Astrophysics Data System (ADS)
Cameron, T. G.; Knudsen, D. J.; Cully, C. M.
2013-12-01
The well-known current-voltage relation within auroral inverted-V regions [Knight, Planet. Space Sci., 21, 741, 1973] predicts current carried by an auroral flux tube given the total potential drop between a plasma-sheet source region and the ionosphere. Numerous previous studies have tested this relation using spacecraft that traverse auroral arcs at low (ionospheric) or mid altitudes. Typically, the potential drop is estimated at the peak of the inverted-V, and field-aligned current is estimated from magnetometer data; statistical information is then gathered over many arc crossings that occur over a wide range of source conditions. In this study we use electron data from the FAST satellite to examine the current-voltage relation in multiple arc sets, in which the key source parameters (plasma sheet density and temperature) are presumed to be identical. We argue that this approach provides a more sensitive test of the Knight relation, and we seek to explain remaining variability with factors other than source variability. This study is supported by a grant from the Natural Sciences and Engineering Research Council of Canada.
Estimated Prestroke Peak VO2 Is Related to Circulating IGF-1 Levels During Acute Stroke.
Mattlage, Anna E; Rippee, Michael A; Abraham, Michael G; Sandt, Janice; Billinger, Sandra A
2017-01-01
Background Insulin-like growth factor-1 (IGF-1) is neuroprotective after stroke and is regulated by insulin-like binding protein-3 (IGFBP-3). In healthy individuals, exercise and improved aerobic fitness (peak oxygen uptake; peak VO2) increase IGF-1 in circulation. Understanding the relationship between estimated prestroke aerobic fitness and IGF-1 and IGFBP-3 after stroke may provide insight into the benefits of exercise and aerobic fitness on stroke recovery. Objective The purpose of this study was to determine the relationship of IGF-1 and IGFBP-3 to estimated prestroke peak VO2 in individuals with acute stroke. We hypothesized that (1) estimated prestroke peak VO2 would be related to IGF-1 and IGFBP-3 and (2) individuals with higher than median IGF-1 levels will have higher estimated prestroke peak VO2 compared to those with lower than median levels. Methods Fifteen individuals with acute stroke had blood sampled within 72 hours of hospital admission. Prestroke peak VO2 was estimated using a nonexercise prediction equation. IGF-1 and IGFBP-3 levels were quantified using enzyme-linked immunoassay. Results Estimated prestroke peak VO2 was significantly related to circulating IGF-1 levels (r = .60; P = .02) but not IGFBP-3. Individuals with higher than median IGF-1 (117.9 ng/mL) had significantly better estimated aerobic fitness (32.4 ± 6.9 mL·kg⁻¹·min⁻¹) than those with lower than median IGF-1 (20.7 ± 7.8 mL·kg⁻¹·min⁻¹; P = .03). Conclusions Improving aerobic fitness prior to stroke may be beneficial by increasing baseline IGF-1 levels. These results set the groundwork for future clinical trials to determine whether high IGF-1 and aerobic fitness are beneficial to stroke recovery by providing neuroprotection and improving function. © The Author(s) 2016.
Antarctic Circumpolar Current Transport Variability during 2003-05 from GRACE
NASA Technical Reports Server (NTRS)
Zlotnicki, Victor; Wahr, John; Fukumori, Ichiro; Song, Yuhe T.
2006-01-01
Gravity Recovery and Climate Experiment (GRACE) gravity data spanning January 2003 - November 2005 are used as proxies for ocean bottom pressure (BP) averaged over 1 month, spherical Gaussian caps 500 km in radius, and along paths bracketing the Antarctic Circumpolar Current's various fronts. The GRACE BP signals are compared with those derived from the Estimating the Circulation and Climate of the Ocean (ECCO) ocean modeling-assimilation system, and to a non-Boussinesq version of the Regional Ocean Model System (ROMS). The discrepancy found between GRACE and the models is 1.7 cm H2O (1 cm H2O is approximately 1 hPa), slightly lower than the 1.9 cm H2O estimated by the authors independently from propagation of GRACE errors. The northern signals are weak and uncorrelated among basins. The southern signals are strong, with a common seasonality. The seasonal cycles observed by GRACE in the Pacific and Indian Ocean sectors of the ACC are consistent, with annual and semiannual amplitudes of 3.6 and 0.6 cm H2O (1.1 and 0.6 cm H2O with ECCO); the average over the full southern path peaks (stronger ACC) in the southern winter, on days of year 197 and 97 for the annual and semiannual components, respectively; the Atlantic Ocean annual peak is 20 days earlier. An approximate conversion factor of 3.1 Sv (1 Sv is equivalent to 10^6 m^3 s^-1) of barotropic transport variability per cm H2O of BP change is estimated. Wind stress data time series from the Quick Scatterometer (QuikSCAT), averaged monthly, zonally, and over the latitude band 40 deg to 65 deg S, are also constructed and subsampled at the same months as the GRACE data. The annual and semiannual harmonics of the wind stress peak on days 198 and 82, respectively. A decreasing trend over the 3 yr is observed in the three data types.
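As a quick consistency check on the quoted conversion factor, the annual bottom-pressure amplitude observed by GRACE implies a barotropic transport amplitude of roughly

$$\Delta T \approx 3.1\ \mathrm{Sv\,cm^{-1}} \times 3.6\ \mathrm{cm} \approx 11\ \mathrm{Sv},$$

where 1 Sv = 10^6 m^3 s^-1 and the bottom pressure is expressed in centimeters of water.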
Antarctic Circumpolar Current Transport Variability during 2003-05 from GRACE
NASA Technical Reports Server (NTRS)
Zlotnicki, Victor; Wahr, John; Fukumori, Ichiro; Song, Yuhe T.
2007-01-01
Gravity Recovery and Climate Experiment (GRACE) gravity data spanning January 2003-November 2005 are used as proxies for ocean bottom pressure (BP) averaged over 1 month, spherical Gaussian caps 500 km in radius, and along paths bracketing the Antarctic Circumpolar Current's various fronts. The GRACE BP signals are compared with those derived from the Estimating the Circulation and Climate of the Ocean (ECCO) ocean modeling-assimilation system, and to a non-Boussinesq version of the Regional Ocean Model System (ROMS). The discrepancy found between GRACE and the models is 1.7 cm
Return stroke velocities and currents using a solid state silicon detector system
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Rust, W. David
1988-01-01
A small, portable device has been developed to measure return stroke velocities. With the device, velocities from 135 strokes, consisting of 92 natural return strokes and 43 triggered return strokes, have been analyzed. The average return stroke velocity for longer channels (greater than 500 m) is 1.2 ± 0.3 × 10^8 m/s for both natural and triggered return strokes. For shorter channel lengths (less than 500 m), natural lightning has a statistically higher average return stroke velocity of 1.9 ± 0.7 × 10^8 m/s than triggered lightning, which has an average return stroke velocity of 1.4 ± 0.4 × 10^8 m/s. Using the transmission line model of the return stroke, natural lightning has a peak current distribution that is log-normal with a median value of 19 kA. Return stroke velocities and currents were determined for two distant single-stroke natural positive cloud-to-ground flashes. The velocities were 1.0 and 1.7 × 10^8 m/s, while the estimated peak current for each positive flash was over 125 kA.
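The abstract does not restate the transmission line model; in its usual far-field form, the peak current follows from the measured peak radiation field E_peak, the range D, and the return stroke speed v as

$$I_{\mathrm{peak}} = \frac{2\pi\varepsilon_0 c^{2} D}{v}\,E_{\mathrm{peak}},$$

which is why an independently measured return stroke velocity is needed to turn remotely sensed fields into the peak current estimates quoted above.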
Williams-Sether, Tara
2015-08-06
Annual peak-flow frequency data from 231 U.S. Geological Survey streamflow-gaging stations in North Dakota and parts of Montana, South Dakota, and Minnesota, with 10 or more years of unregulated peak-flow record, were used to develop regional regression equations for exceedance probabilities of 0.5, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 using generalized least-squares techniques. Updated peak-flow frequency estimates for 262 streamflow-gaging stations were developed using data through 2009 and log-Pearson Type III procedures outlined by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data. An average generalized skew coefficient was determined for three hydrologic zones in North Dakota. A StreamStats web application was developed to estimate basin characteristics for the regional regression equation analysis. Methods for estimating a weighted peak-flow frequency for gaged sites and ungaged sites are presented.
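As context for the log-Pearson Type III fitting mentioned above, here is a minimal at-site sketch using SciPy. It omits the regional skew weighting, confidence limits, and historical/low-outlier adjustments of the full interagency procedure, and the sample data are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak flows (ft^3/s) for one streamflow-gaging station.
peaks = np.array([1200., 860., 2500., 430., 1900., 3100., 760., 1500.,
                  980., 2200., 640., 1750., 4100., 1320., 880., 2750.])

logq = np.log10(peaks)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)          # station skew of the log10 peaks

def lp3_quantile(aep, skew, mean, std):
    """Peak-flow quantile for a given annual exceedance probability (AEP)."""
    # Non-exceedance probability -> Pearson Type III quantile of the log10 flow.
    return 10 ** stats.pearson3.ppf(1.0 - aep, skew, loc=mean, scale=std)

for aep in (0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.002):
    print(f"AEP {aep:>5}: {lp3_quantile(aep, skew, mean, std):,.0f} ft^3/s")
```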
Van der Kloot, W
1988-01-01
1. Following motor nerve stimulation there is a period of greatly enhanced quantal release, called the early release period or ERP (Barrett & Stevens, 1972b). Until now, measurements of the probability of quantal releases at different points in the ERP have come from experiments in which quantal output was greatly reduced, so that the time of release of individual quanta could be detected or so that the latency to the release of the first quantum could be measured. 2. A method has been developed to estimate the timing of quantal release during the ERP that can be used at much higher levels of quantal output. The assumption is made that each quantal release generates an end-plate current (EPC) that rises instantaneously and then decays exponentially. The peak amplitude of the quantal currents and the time constant for their decay are measured from miniature end-plate currents (MEPCs). Then a number of EPCs are averaged, and the times of release of the individual quanta during the ERP estimated by a simple mathematical method for deconvolution derived by Cohen, Van der Kloot & Attwell (1981). 3. The deconvolution method was tested using data from preparations in high-Mg2+ low-Ca2+ solution. One test was to reconstitute the averaged EPCs from the estimated times of quantal release and the quantal currents, by using Fourier convolution. The reconstructions fit well to the originals. 4. Reconstructions were also made from averaged MEPCs which do not rise instantaneously and the estimated times of quantal release.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2466987
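A minimal sketch of the deconvolution idea (not the Cohen, Van der Kloot & Attwell implementation): model each quantal current as an instantaneous-rise exponential and recover the release-time profile by dividing spectra, with a small water-level term to keep the division stable. The kernel amplitude, decay constant, and release times below are assumed values for illustration.

```python
import numpy as np

dt = 0.05e-3                        # 50-microsecond sampling interval (s)
t = np.arange(0, 20e-3, dt)         # 20 ms analysis window (400 samples)

# Quantal current kernel: instantaneous rise, exponential decay (assumed values).
q_peak, tau = 1.0, 1.0e-3
kernel = q_peak * np.exp(-t / tau)

def release_rate(epc, kernel, eps=1e-3):
    """Estimate the quantal release rate by Fourier deconvolution of the averaged EPC."""
    Q = np.fft.rfft(kernel)
    E = np.fft.rfft(epc)
    # Water-level regularization guards against division by near-zero spectral values.
    R = E * np.conj(Q) / (np.abs(Q) ** 2 + eps * np.max(np.abs(Q)) ** 2)
    return np.fft.irfft(R, n=len(epc)) / dt

# Demo: three quanta released at samples 40, 150 and 300.
true_rate = np.zeros_like(t)
true_rate[[40, 150, 300]] = 1.0 / dt
epc = np.convolve(true_rate, kernel)[:t.size] * dt     # forward model of the averaged EPC
rate = release_rate(epc, kernel)
print(sorted(np.argsort(rate)[-3:]))                    # expected near [40, 150, 300]
```

Reconvolving the estimated release times with the kernel reconstitutes the averaged EPC, which is the consistency test described in point 3 of the abstract.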
Naidoo, Rajen N; Robins, Thomas G; Becklake, Margaret; Seixas, Noah; Thompson, Mary Lou
2007-12-01
The objectives of this study were to determine whether cross-shift changes in peak expiratory flow rate (PEFR) were related to respirable dust exposure in South African coalminers. Fifty workers were randomly selected from a cohort of 684 miners from 3 bituminous coalmines in Mpumalanga, South Africa. Peak expiratory efforts were measured prior to the commencement of the shift, and at the end of the shift on at least two occasions separated by at least 2 weeks, with full shift personal dust sampling being conducted on each occasion for each participant. Interviews were conducted, work histories were obtained and cumulative exposure estimates were constructed. Regression models examined the associations of cross-shift changes in PEFR with current and cumulative exposure, controlling for shift, smoking and past history of tuberculosis. There were marginal differences in cross-shift PEFR (ranging from 0.1 to 2 L/min). Linear regression analyses showed no association between cross-shift change in PEFR and current or cumulative exposure. The specific shift worked by participants in the study showed no effect. Our study showed no association between current respirable dust exposure and cross-shift changes in PEFR. There was a non-significant protective effect of cumulative dust exposure on the outcome, suggesting the presence of a "healthy worker survivor effect" in this data.
Dyverfeldt, Petter; Hope, Michael D.; Tseng, Elaine E.; Saloner, David
2013-01-01
OBJECTIVES The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. BACKGROUND Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimation of irreversible pressure loss is catheterization, but this method is rarely used due to its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. METHODS The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. RESULTS The total TKE did not appear to be related to global flow patterns visualized based on magnetic resonance–measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated to index pressure loss (R2 = 0.91). CONCLUSIONS Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss in aortic stenosis. PMID:23328563
Dyverfeldt, Petter; Hope, Michael D; Tseng, Elaine E; Saloner, David
2013-01-01
The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimation of irreversible pressure loss is catheterization, but this method is rarely used due to its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. The total TKE did not appear to be related to global flow patterns visualized based on magnetic resonance-measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated to index pressure loss (R(2) = 0.91). Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss in aortic stenosis. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
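For reference, the TKE quantity referred to in both records above is conventionally computed from the MRI-measured velocity fluctuations as

$$\mathrm{TKE} = \tfrac{1}{2}\,\rho\,(\sigma_u^{2} + \sigma_v^{2} + \sigma_w^{2}),$$

with ρ the blood density and σ_i the standard deviation of the velocity fluctuation along each direction; on this reading, the "peak total TKE" is the maximum over the cardiac cycle of this quantity integrated over the ascending aortic volume.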
Over, Thomas M.; Saito, Riki J.; Veilleux, Andrea G.; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey L.
2016-06-28
This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, regional skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at ungaged sites and to improve flood-quantile estimates at and near a gaged site; (2) the urbanization-adjusted annual maximum peak discharges and peak discharge quantile estimates at streamgages from 181 watersheds including the 117 study watersheds and 64 additional watersheds in the study region that were originally considered for use in the study but later deemed to be redundant. The urbanization-adjustment equations, spatial regression equations, and peak discharge quantile estimates developed in this study will be made available in the web application StreamStats, which provides automated regression-equation solutions for user-selected stream locations. Figures and tables comparing the observed and urbanization-adjusted annual maximum peak discharge records by streamgage are provided at https://doi.org/10.3133/sir20165050 for download.
Regional equations for estimation of peak-streamflow frequency for natural basins in Texas
Asquith, William H.; Slade, Raymond M.
1997-01-01
Peak-streamflow frequency for 559 Texas stations with natural (unregulated and rural or nonurbanized) basins was estimated with annual peak-streamflow data through 1993. The peak-streamflow frequency and drainage-basin characteristics for the Texas stations were used to develop 16 sets of equations to estimate peak-streamflow frequency for ungaged natural stream sites in each of 11 regions in Texas. The relation between peak-streamflow frequency and contributing drainage area for 5 of the 11 regions is curvilinear, requiring that one set of equations be developed for drainage areas less than 32 square miles and another set be developed for drainage areas greater than 32 square miles. These equations, developed through multiple-regression analysis using weighted least squares, are based on the relation between peak-streamflow frequency and basin characteristics for streamflow-gaging stations. The regions represent areas with similar flood characteristics. The use and limitations of the regression equations also are discussed. Additionally, procedures are presented to compute the 50-, 67-, and 90-percent confidence limits for any estimation from the equations. Also, supplemental peak-streamflow frequency and basin characteristics for 105 selected stations bordering Texas are included in the report. This supplemental information will aid in interpretation of flood characteristics for sites near the state borders of Texas.
NASA Astrophysics Data System (ADS)
Sakanoi, T.; Fukunishi, H.; Mukai, T.
1995-10-01
The inverted-V field-aligned acceleration region existing in the altitude range of several thousand kilometers plays an essential role in the magnetosphere-ionosphere coupling system. The adiabatic plasma theory predicts a linear relationship between field-aligned current density (J∥) and parallel potential drop (Φ∥), that is, J∥=KΦ∥, where K is the field-aligned conductance. We examined this relationship using the charged particle and magnetic field data obtained from the Akebono (Exos D) satellite. The potential drop above the satellite was derived from the peak energy of downward electrons, while the potential drop below the satellite was derived from two different methods: the peak energy of upward ions and the energy-dependent widening of the electron loss cone. On the other hand, field-aligned current densities in the inverted-V region were estimated from the Akebono magnetometer data. Using these potential drops and field-aligned current densities, we estimated the linear field-aligned conductance KJΦ. Further, we obtained the corrected field-aligned conductance KCJΦ by applying the full Knight's formula to the current-voltage relationship. We also independently estimated the field-aligned conductance KTN from the number density and the thermal temperature of magnetospheric source electrons, which were obtained by fitting accelerated Maxwellian functions to the precipitating electrons. The results are summarized as follows: (1) The latitudinal dependence of parallel potential drops is characterized by a narrow V-shaped structure with a width of 0.4°-1.0°. (2) Although the inverted-V potential region exactly corresponds to the upward field-aligned current region, the latitudinal dependence of upward current intensity is an inverted-U shape rather than an inverted-V shape. Thus it is suggested that the field-aligned conductance KCJΦ changes with a V-shaped latitudinal dependence. In many cases, KCJΦ values at the edge of the inverted-V region are about 5-10 times larger than those at the center. (3) By comparing KCJΦ with KTN, KCJΦ is found to be about 2-20 times larger than KTN. These results suggest that low-energy electrons such as trapped electrons, secondary and back-scattered electrons, and ionospheric electrons contribute significantly to upward field-aligned currents in the inverted-V region. It is therefore inferred that nonadiabatic pitch angle scattering processes play an important role in the inverted-V region.
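For orientation, in the limit of a small potential drop and a large magnetic mirror ratio the Knight relation reduces to the linear form used above; this is the standard approximation, not a result quoted in the abstract:

$$J_{\parallel} = K\,\Phi_{\parallel}, \qquad K \approx \frac{e^{2} n_e}{\sqrt{2\pi m_e k_B T_e}},$$

so a conductance computed from the source-electron number density and temperature (the KTN above) can be compared directly with the conductance inferred from the measured current density and potential drop (KJΦ and KCJΦ).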
Cuff-Free Blood Pressure Estimation Using Pulse Transit Time and Heart Rate.
Wang, Ruiping; Jia, Wenyan; Mao, Zhi-Hong; Sclabassi, Robert J; Sun, Mingui
2014-10-01
It has been reported that the pulse transit time (PTT), the interval between the peak of the R-wave in the electrocardiogram (ECG) and the fingertip photoplethysmogram (PPG), is related to arterial stiffness and can be used to estimate the systolic blood pressure (SBP) and diastolic blood pressure (DBP). This phenomenon has been used as the basis to design portable systems for continuous, cuff-less blood pressure measurement, benefiting numerous people with heart conditions. However, PTT-based blood pressure estimation may not be sufficiently accurate because the regulation of blood pressure within the human body is a complex, multivariate physiological process. Considering the negative feedback mechanism in blood pressure control, we introduce the heart rate (HR) and the blood pressure estimate from the previous step to obtain the current estimate. We validate this method using a clinical database. Our results show that the PTT, HR and previous estimate reduce the estimation error significantly when compared to the conventional PTT estimation approach (p<0.05).
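A minimal sketch of the kind of estimator described, not the authors' calibrated model: a linear regression on PTT and HR augmented with the previous estimate as an autoregressive term. The coefficients are fit per subject against reference (cuff or arterial-line) readings; all variable names and the fitting scheme are assumptions for illustration.

```python
import numpy as np

def fit_bp_model(ptt, hr, sbp):
    """Fit SBP_k ~ a*PTT_k + b*HR_k + c*SBP_(k-1) + d by least squares.
    ptt (s), hr (beats/min) and reference sbp (mmHg) are aligned beat-by-beat series."""
    X = np.column_stack([ptt[1:], hr[1:], sbp[:-1], np.ones(len(sbp) - 1)])
    coef, *_ = np.linalg.lstsq(X, sbp[1:], rcond=None)
    return coef                      # a, b, c, d

def estimate_bp(coef, ptt, hr, sbp0):
    """Run the recursive estimator forward from an initial reference reading."""
    a, b, c, d = coef
    est = [sbp0]
    for p, h in zip(ptt[1:], hr[1:]):
        est.append(a * p + b * h + c * est[-1] + d)
    return np.array(est)
```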
Liu, Ya L; Liu, Kui; Yuan, Li Y; Chai, Zhi F; Shi, Wei Q
2016-08-15
In this work, the compositions of Ce-Al, Er-Al and La-Bi intermetallic compounds were estimated by the cyclic voltammetry (CV) technique. At first, CV measurements were carried out at different reverse potentials to study the co-reduction processes of the Ce-Al, Er-Al and La-Bi systems. The CV curves obtained were then re-plotted with the current as a function of time, and the coulomb number of each peak was calculated. By comparing the coulomb numbers of the related peaks, the compositions of the Ce-Al, Er-Al and La-Bi intermetallic compounds formed in the co-reduction process could be estimated. The results showed that Al11Ce3, Al3Ce, Al2Ce and AlCe could be formed by the co-reduction of Ce(III) and Al(III). For the co-reduction of Er(III) and Al(III), Al3Er2, Al2Er and AlEr were formed. In a La(III) and Bi(III) co-existing system in LiCl-KCl melts, LaBi2, LaBi and Li3Bi were the major products as a result of co-reduction.
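The coulomb-number comparison can be sketched as follows; the scan rate, peak windows, and the example ratio are placeholders, not values from the paper.

```python
import numpy as np

def peak_charge(potential, current, scan_rate, window):
    """Charge (C) under one voltammetric peak: Q = integral of i dt = integral of i dE / scan_rate.
    potential (V) and current (A) are one monotonic CV sweep; scan_rate in V/s."""
    lo, hi = window                                   # potential window bounding the peak
    mask = (potential >= lo) & (potential <= hi)
    return abs(np.trapz(current[mask], potential[mask])) / scan_rate

# With three-electron reductions of both Al(III) and Ce(III), the ratio of peak
# charges tracks the Al:Ce mole ratio in the deposit; for example, a ratio near
# 11:3 would be consistent with Al11Ce3.
```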
Role of peak current in conversion of patients with ventricular fibrillation.
Anantharaman, Venkataraman; Wan, Paul Weng; Tay, Seow Yian; Manning, Peter George; Lim, Swee Han; Chua, Siang Jin Terrance; Mohan, Tiru; Rabind, Antony Charles; Vidya, Sudarshan; Hao, Ying
2017-07-01
Peak currents are the final arbiter of defibrillation in patients with ventricular fibrillation (VF). However, biphasic defibrillators continue to use energy in joules for electrical conversion in hopes that their impedance compensation properties will address transthoracic impedance (TTI), which must be overcome when a fixed amount of energy is delivered. However, optimal peak currents for conversion of VF remain unclear. We aimed to determine the role of peak current and optimal peak levels for conversion in collapsed VF patients. Adult, non-pregnant patients presenting with non-traumatic VF were included in the study. All defibrillations that occurred were included. Impedance values during defibrillation were used to calculate peak current values. The endpoint was return of spontaneous circulation (ROSC). Of the 197 patients analysed, 105 had ROSC. Characteristics of patients with and without ROSC were comparable. Short duration of collapse < 10 minutes correlated positively with ROSC. Generally, patients with average or high TTI converted at lower peak currents. 25% of patients with high TTI converted at 13.3 ± 2.3 A, 22.7% with average TTI at 18.2 ± 2.5 A and 18.6% with low TTI at 27.0 ± 4.7 A (p = 0.729). Highest peak current conversions were at < 15 A and 15-20 A. Of the 44 patients who achieved first-shock ROSC, 33 (75.0%) received < 20 A peak current vs. > 20 A for the remaining 11 (25%) patients (p = 0.002). For best effect, priming biphasic defibrillators to deliver specific peak currents should be considered. Copyright: © Singapore Medical Association
Role of peak current in conversion of patients with ventricular fibrillation
Anantharaman, Venkataraman; Wan, Paul Weng; Tay, Seow Yian; Manning, Peter George; Lim, Swee Han; Chua, Siang Jin Terrance; Mohan, Tiru; Rabind, Antony Charles; Vidya, Sudarshan; Hao, Ying
2017-01-01
INTRODUCTION Peak currents are the final arbiter of defibrillation in patients with ventricular fibrillation (VF). However, biphasic defibrillators continue to use energy in joules for electrical conversion in hopes that their impedance compensation properties will address transthoracic impedance (TTI), which must be overcome when a fixed amount of energy is delivered. However, optimal peak currents for conversion of VF remain unclear. We aimed to determine the role of peak current and optimal peak levels for conversion in collapsed VF patients. METHODS Adult, non-pregnant patients presenting with non-traumatic VF were included in the study. All defibrillations that occurred were included. Impedance values during defibrillation were used to calculate peak current values. The endpoint was return of spontaneous circulation (ROSC). RESULTS Of the 197 patients analysed, 105 had ROSC. Characteristics of patients with and without ROSC were comparable. Short duration of collapse < 10 minutes correlated positively with ROSC. Generally, patients with average or high TTI converted at lower peak currents. 25% of patients with high TTI converted at 13.3 ± 2.3 A, 22.7% with average TTI at 18.2 ± 2.5 A and 18.6% with low TTI at 27.0 ± 4.7 A (p = 0.729). Highest peak current conversions were at < 15 A and 15–20 A. Of the 44 patients who achieved first-shock ROSC, 33 (75.0%) received < 20 A peak current vs. > 20 A for the remaining 11 (25%) patients (p = 0.002). CONCLUSION For best effect, priming biphasic defibrillators to deliver specific peak currents should be considered. PMID:28741007
Water use demand in the Crans-Montana-Sierre region (Switzerland)
NASA Astrophysics Data System (ADS)
Bonriposi, M.; Reynard, E.
2012-04-01
Crans-Montana-Sierre is an Alpine touristic region located in the driest area of Switzerland (Rhone River Valley, Canton of Valais), with both winter (ski) and summer (e.g. golf) tourist activities. Climate change as well as societal and economic development will in future significantly modify the supply and consumption of water and, consequently, may fuel conflicts of interest. Within the framework of the MontanAqua project (www.montanaqua.ch), we are researching more sustainable water management options based on the co-ordination and adaptation of water demand to water availability under changing biophysical and socioeconomic conditions. This work intends to quantify current water uses in the area and consider future scenarios (around 2050). We have focused upon the temporal and spatial characteristics of resource demand, in order to estimate the spatial footprint of water use (drinking water, hydropower production, irrigation and artificial snowmaking), in terms of system, infrastructure, and organisation of supply. We have then quantified these as precisely as possible (at the monthly temporal scale and at the municipality spatial scale). When the quantity of water was not measurable for practical reasons or for lack of data, as for the case for irrigation or snowmaking, an alternative approach was applied. Instead of quantifying how much water was used, the stress was put on the water needs for irrigating agricultural land or on the optimal meteorological conditions necessary to produce artificial snow. A huge summer peak and a smaller winter peak characterize the current regional water consumption estimation. The summer peak is mainly caused by irrigation and secondly by drinking water demand. The winter peak is essentially due to drinking water and snowmaking. Other consumption peaks exist at the municipality scale but they cannot be observed at the regional scale. The results show a major variation in water demand between the 11 concerned municipalities and between the various uses. All this confirms the necessity of modelling the future demand of water, which would allow prediction of possible future use conflicts. In a second phase of the project, the collected data will be introduced into WEAP (the Water Evaluation And Planning system) model, in order to estimate the future water demand of the Crans-Montana-Sierre region. This hydrologic model is distinct from most similar models because of its ability to integrate climate and socio-economic scenarios (Hansen, 1994). Reference Hansen, E. 1994. WEAP - A system for tackling water resource problems. In Water Management Europe 1993/94: An Annual Review of the European Water and Wastewater Industry. Stockholm Environment Institute: Stockholm.
Efstathiou, Christos; Isukapalli, Sastry
2011-01-01
Allergic airway diseases represent a complex health problem which can be exacerbated by the synergistic action of pollen particles and air pollutants such as ozone. Understanding human exposures to aeroallergens requires accurate estimates of the spatial distribution of airborne pollen levels as well as of various air pollutants at different times. However, currently there are no established methods for estimating allergenic pollen emissions and concentrations over large geographic areas such as the United States. A mechanistic modeling system for describing pollen emissions and transport over extensive domains has been developed by adapting components of existing regional scale air quality models and vegetation databases. First, components of the Biogenic Emissions Inventory System (BEIS) were adapted to predict pollen emission patterns. Subsequently, the transport module of the Community Multiscale Air Quality (CMAQ) modeling system was modified to incorporate description of pollen transport. The combined model, CMAQ-pollen, allows for simultaneous prediction of multiple air pollutants and pollen levels in a single model simulation, and uses consistent assumptions related to the transport of multiple chemicals and pollen species. Application case studies for evaluating the combined modeling system included the simulation of birch and ragweed pollen levels for the year 2002, during their corresponding peak pollination periods (April for birch and September for ragweed). The model simulations were driven by previously evaluated meteorological model outputs and emissions inventories for the eastern United States for the simulation period. A semi-quantitative evaluation of CMAQ-pollen was performed using tree and ragweed pollen counts in Newark, NJ for the same time periods. The peak birch pollen concentrations were predicted to occur within two days of the peak measurements, while the temporal patterns closely followed the measured profiles of overall tree pollen. For the case of ragweed pollen, the model was able to capture the patterns observed during September 2002, but did not predict an early peak; this can be associated with a wider species pollination window and inadequate spatial information in current land cover databases. An additional sensitivity simulation was performed to comparatively evaluate the dispersion patterns predicted by CMAQ-pollen with those predicted by the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model, which is used extensively in aerobiological studies. The CMAQ estimated concentration plumes matched the equivalent pollen scenario modeled with HYSPLIT. The novel pollen modeling approach presented here allows simultaneous estimation of multiple airborne allergens and other air pollutants, and is being developed as a central component of an integrated population exposure modeling system, the Modeling Environment for Total Risk studies (MENTOR) for multiple, co-occurring contaminants that include aeroallergens and irritants. PMID:21516207
NASA Astrophysics Data System (ADS)
Efstathiou, Christos; Isukapalli, Sastry; Georgopoulos, Panos
2011-04-01
Allergic airway diseases represent a complex health problem which can be exacerbated by the synergistic action of pollen particles and air pollutants such as ozone. Understanding human exposures to aeroallergens requires accurate estimates of the spatial distribution of airborne pollen levels as well as of various air pollutants at different times. However, currently there are no established methods for estimating allergenic pollen emissions and concentrations over large geographic areas such as the United States. A mechanistic modeling system for describing pollen emissions and transport over extensive domains has been developed by adapting components of existing regional scale air quality models and vegetation databases. First, components of the Biogenic Emissions Inventory System (BEIS) were adapted to predict pollen emission patterns. Subsequently, the transport module of the Community Multiscale Air Quality (CMAQ) modeling system was modified to incorporate description of pollen transport. The combined model, CMAQ-pollen, allows for simultaneous prediction of multiple air pollutants and pollen levels in a single model simulation, and uses consistent assumptions related to the transport of multiple chemicals and pollen species. Application case studies for evaluating the combined modeling system included the simulation of birch and ragweed pollen levels for the year 2002, during their corresponding peak pollination periods (April for birch and September for ragweed). The model simulations were driven by previously evaluated meteorological model outputs and emissions inventories for the eastern United States for the simulation period. A semi-quantitative evaluation of CMAQ-pollen was performed using tree and ragweed pollen counts in Newark, NJ for the same time periods. The peak birch pollen concentrations were predicted to occur within two days of the peak measurements, while the temporal patterns closely followed the measured profiles of overall tree pollen. For the case of ragweed pollen, the model was able to capture the patterns observed during September 2002, but did not predict an early peak; this can be associated with a wider species pollination window and inadequate spatial information in current land cover databases. An additional sensitivity simulation was performed to comparatively evaluate the dispersion patterns predicted by CMAQ-pollen with those predicted by the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model, which is used extensively in aerobiological studies. The CMAQ estimated concentration plumes matched the equivalent pollen scenario modeled with HYSPLIT. The novel pollen modeling approach presented here allows simultaneous estimation of multiple airborne allergens and other air pollutants, and is being developed as a central component of an integrated population exposure modeling system, the Modeling Environment for Total Risk studies (MENTOR) for multiple, co-occurring contaminants that include aeroallergens and irritants.
Peak-flow frequency estimates through 1994 for gaged streams in South Dakota
Burr, M.J.; Korkow, K.L.
1996-01-01
Annual peak-flow data are listed for 250 continuous-record and crest-stage gaging stations in South Dakota. Peak-flow frequency estimates for selected recurrence intervals ranging from 2 to 500 years are given for 234 of these 250 stations. The log-Pearson Type III procedure was used to compute the frequency relations for the 234 stations, which in 1994 included 105 active and 129 inactive stations. The log-Pearson Type III procedure is recommended by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data, 1982, "Guidelines for Determining Flood Flow Frequency."No peak-flow frequency estimates are given for 16 of the 250 stations because: (1) of extreme variability in data set; (2) more than 20 percent of years had no flow; (3) annual peak flows represent large outflow from a spring; (4) of insufficient peak-flow record subsequent to reservoir regulation; and (5) peak-flow records were combined with records from nearby stations.
Flooding in the Northeastern United States, 2011
Suro, Thomas P.; Roland, Mark A.; Kiah, Richard G.
2015-12-31
The annual exceedance probabilities (AEPs) for 327 streamgages in the Northeastern United States were computed using annual peak streamflow data through 2011 and are included in this report. The 2011 peak streamflow for 129 of those streamgages was estimated to have an AEP of less than or equal to 1 percent. Almost 100 of these peak streamflows were a result of the flooding associated with Hurricane Irene in late August 2011. More extreme than the 1-percent AEP is the 0.2-percent AEP. The USGS recorded peak streamflows at 31 streamgages that equaled or exceeded the estimated 0.2-percent AEP during 2011. Collectively, the USGS recorded peak streamflows having estimated AEPs of less than 1 percent in Connecticut, Delaware, Maine, Maryland, Massachusetts, Ohio, Pennsylvania, New Hampshire, New Jersey, New York, and Vermont, and new period-of-record peak streamflows were recorded at more than 180 streamgages as a result of the floods of 2011.
Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal
NASA Astrophysics Data System (ADS)
Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling
2018-05-01
When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited because of the notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and power-line harmonic noise cancellation. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, which is performed after the traditional de-spiking and power-line harmonic removal method. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.
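A minimal sketch of time-frequency peak filtering under simple assumptions: a short-time Fourier transform stands in for whatever time-frequency representation the authors use, and the modulation index mu, window length, and the decaying-envelope test signal are illustrative choices, not parameters from the paper.

```python
import numpy as np
from scipy.signal import stft

def tfpf(x, fs, mu=0.2, nperseg=128):
    """Time-frequency peak filtering: encode x as the instantaneous frequency of a
    unit-amplitude FM signal, then read that frequency back from the peak of the
    time-frequency representation. Scale x to roughly [-1, 1] so that the encoded
    frequency mu*x (cycles/sample) stays well inside the Nyquist band (0.5)."""
    phase = 2.0 * np.pi * mu * np.cumsum(x)            # FM encoding
    z = np.exp(1j * phase)
    f, t, Z = stft(z, fs=1.0, nperseg=nperseg, noverlap=nperseg - 1,
                   return_onesided=False)
    # Instantaneous-frequency estimate: frequency of the spectral peak per time slice.
    f_inst = f[np.abs(Z).argmax(axis=0)]
    return f_inst / mu, t / fs                          # decode back to signal units

# Example: a decaying-exponential MRS envelope buried in random noise.
fs = 1000.0
tt = np.arange(0, 1.0, 1 / fs)
clean = 0.8 * np.exp(-tt / 0.3)
noisy = clean + 0.3 * np.random.randn(tt.size)
est, t_est = tfpf(noisy, fs)
```

Because each window averages the encoded signal over many samples, the recovered instantaneous frequency is much smoother than the raw noisy envelope, which is the single-recording noise suppression the abstract describes.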
Mastin, M.C.; Kresch, D.L.
2005-01-01
The 1921 peak discharge at Skagit River near Concrete, Washington (U.S. Geological Survey streamflow-gaging station 12194000), was verified using peak-discharge data from the flood of October 21, 2003, the largest flood since 1921. This peak discharge is critical to determining other high discharges at the gaging station and to reliably estimating the 100-year flood, the primary design flood being used in a current flood study of the Skagit River basin. The four largest annual peak discharges of record (1897, 1909, 1917, and 1921) were used to determine the 100-year flood discharge at Skagit River near Concrete. The peak discharge on December 13, 1921, was determined by James E. Stewart of the U.S. Geological Survey using a slope-area measurement and a contracted-opening measurement. An extended stage-discharge rating curve based on the 1921 peak discharge was used to determine the peak discharges of the three other large floods. Any inaccuracy in the 1921 peak discharge therefore also would affect the accuracies of the three other largest peak discharges. The peak discharge of the 1921 flood was recalculated using the cross sections and high-water marks surveyed after the 1921 flood in conjunction with a new estimate of the channel roughness coefficient (n value) based on an n-verification analysis of the peak discharge of the October 21, 2003, flood. The n value used by Stewart for his slope-area measurement of the 1921 flood was 0.033, and the corresponding calculated peak discharge was 240,000 cubic feet per second (ft3/s). Determination of a single definitive water-surface profile for use in the n-verification analysis was precluded because of considerable variation in elevations of surveyed high-water marks from the flood on October 21, 2003. Therefore, n values were determined for two separate water-surface profiles thought to bracket a plausible range of water-surface slopes defined by high-water marks. The n value determined using the flattest plausible slope was 0.024, and the corresponding recalculated discharge of the 1921 slope-area measurement was 266,000 ft3/s. The n value determined using the steepest plausible slope was 0.032, and the corresponding recalculated discharge of the 1921 slope-area measurement was 215,000 ft3/s. The two recalculated discharges were 10.8 percent greater than (flattest slope) and 10.4 percent less than (steepest slope) the 1921 peak discharge of 240,000 ft3/s. The 1921 peak discharge was not revised because the average of the two recalculated discharges (240,500 ft3/s) is only 0.2 percent greater than the 1921 peak discharge.
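For context, the n-verification rests on Manning's equation (here in U.S. customary units); with the 2003 peak discharge and surveyed geometry known, the roughness coefficient can be back-calculated and then carried over to the 1921 computation:

$$Q = \frac{1.49}{n}\,A\,R^{2/3} S^{1/2} \quad\Longrightarrow\quad n = \frac{1.49\,A\,R^{2/3} S^{1/2}}{Q},$$

where A is the cross-sectional area, R the hydraulic radius, and S the energy (water-surface) slope. The full slope-area method balances energy between several cross sections rather than using a single section, so this is only the core relation, not the complete procedure.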
Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer
NASA Astrophysics Data System (ADS)
Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.
2015-04-01
The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
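A toy version of the empirically constrained Monte Carlo described above; the peak width, bin spacing, and the size of the m/z calibration jitter are invented for illustration. Each ion peak is simulated with Poisson counting noise and a small random shift of the fitted center, and the fitted height is the linear projection of the noisy signal onto the (mis-positioned) peak template.

```python
import numpy as np

rng = np.random.default_rng(1)
mz = np.linspace(-0.05, 0.05, 201)          # m/z axis around a nominal ion (Th)
sigma = 0.01                                # assumed peak width (Th)

def gauss(center):
    return np.exp(-0.5 * ((mz - center) / sigma) ** 2)

def height_imprecision(n_ions, mz_jitter_sd=0.002, trials=2000):
    """Relative imprecision of the fitted peak height for a given true ion count."""
    expected = n_ions * gauss(0.0) / (sigma * np.sqrt(2 * np.pi)) * (mz[1] - mz[0])
    heights = []
    for _ in range(trials):
        signal = rng.poisson(expected)                    # counting noise
        template = gauss(rng.normal(0.0, mz_jitter_sd))   # m/z calibration error
        heights.append(signal @ template / (template @ template))
    h = np.array(heights)
    return h.std() / h.mean()

for n in (1e2, 1e4, 1e6):
    print(f"N = {n:>9.0f} ions: relative height imprecision = {height_imprecision(n):.3f}")
# Poisson noise alone would fall off as n^-0.5; the m/z-jitter term keeps the relative
# imprecision roughly constant at high signal, qualitatively matching the result above.
```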
Technique for simulating peak-flow hydrographs in Maryland
Dillow, Jonathan J.A.
1998-01-01
The efficient design and management of many bridges, culverts, embankments, and flood-protection structures may require the estimation of time-of-inundation and (or) storage of floodwater relating to such structures. These estimates can be made on the basis of information derived from the peak-flow hydrograph. Average peak-flow hydrographs corresponding to a peak discharge of specific recurrence interval can be simulated for drainage basins having drainage areas less than 500 square miles in Maryland, using a direct technique of known accuracy. The technique uses dimensionless hydrographs in conjunction with estimates of basin lagtime and instantaneous peak flow. Ordinary least-squares regression analysis was used to develop an equation for estimating basin lagtime in Maryland. Drainage area, main channel slope, forest cover, and impervious area were determined to be the significant explanatory variables necessary to estimate average basin lagtime at the 95-percent confidence interval. Qualitative variables included in the equation adequately correct for geographic bias across the State. The average standard error of prediction associated with the equation is approximated as plus or minus (+/-) 37.6 percent. Volume correction factors may be applied to the basin lagtime on the basis of a comparison between actual and estimated hydrograph volumes prior to hydrograph simulation. Three dimensionless hydrographs were developed and tested using data collected during 278 significant rainfall-runoff events at 81 stream-gaging stations distributed throughout Maryland and Delaware. The data represent a range of drainage area sizes and basin conditions. The technique was verified by applying it to the simulation of 20 peak-flow events and comparing actual and simulated hydrograph widths at 50 and 75 percent of the observed peak-flow levels. The events chosen are considered extreme in that the average recurrence interval of the selected peak flows is 130 years. The average standard errors of prediction were +/- 61 and +/- 56 percent at the 50 and 75 percent of peak-flow hydrograph widths, respectively.
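A minimal sketch of the simulation step; the dimensionless ordinates below are invented placeholders rather than one of the report's three dimensionless hydrographs. The dimensionless time axis is scaled by the estimated basin lagtime and the dimensionless discharge axis by the estimated instantaneous peak flow.

```python
import numpy as np

# Placeholder dimensionless hydrograph: time ratio t/LT vs. discharge ratio Q/Qp.
t_ratio = np.array([0.0, 0.4, 0.7, 0.9, 1.0, 1.2, 1.5, 2.0, 2.6, 3.2])
q_ratio = np.array([0.0, 0.10, 0.35, 0.80, 1.00, 0.85, 0.55, 0.25, 0.08, 0.0])

def simulate_hydrograph(peak_cfs, lagtime_hr, dt_hr=0.1):
    """Scale the dimensionless hydrograph by the T-year peak flow and basin lagtime."""
    t = np.arange(0.0, t_ratio[-1] * lagtime_hr + dt_hr, dt_hr)
    q = peak_cfs * np.interp(t / lagtime_hr, t_ratio, q_ratio)
    return t, q

# Example: a hypothetical 100-year peak of 5,000 ft^3/s and a 6-hour basin lagtime.
t, q = simulate_hydrograph(peak_cfs=5000.0, lagtime_hr=6.0)
```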
Gutiérrez, Manuel; Monzó, Jorge
2012-01-01
The purpose of this investigation was to determine the association between prevalence of low back disorders in female workers and biomechanical demands of compressive and shear forces at the lumbar spine. A descriptive, cross-sectional and correlational study was carried out in 11 groups of female workers in the Province of Concepción. An interview was performed to investigate the prevalence of low back pain. To estimate biomechanical demands on the lumbar spine, it was used the 3DSSPP software. The Pearson correlation coefficient between the prevalence of low back disorders and peak compression force at the lumbar spine was r = (p<0.005). The Spearman correlation coefficient between the prevalence of low back disorders and peak shear force was r = 0.9 (p <0.005). To protect 90% of female workers studied, the limits of compression and shear forces should be at 2.8 kN and 0.3 kN, respectively. These values differ from the recommendations currently used, 3.4 kN for peak compression force and 0.5 kN for peak shear force.
Bradley, D. Nathan
2012-01-01
The slope-area method is a technique for estimating the peak discharge of a flood after the water has receded (Dalrymple and Benson, 1967). This type of discharge estimate is called an “indirect measurement” because it relies on evidence left behind by the flood, such as high-water marks (HWMs) on trees or buildings. These indicators of flood stage are combined with measurements of the cross-sectional geometry of the stream, estimates of channel roughness, and a mathematical model that balances the total energy of the flow between cross sections. This is in contrast to a “direct” measurement of discharge during the flood where cross-sectional area is measured and a current meter or acoustic equipment is used to measure the water velocity. When a direct discharge measurement cannot be made at a gage during high flows because of logistics or safety reasons, an indirect measurement of a peak discharge is useful for defining the high-flow section of the stage-discharge relation (rating curve) at the stream gage, resulting in more accurate computation of high flows. The Slope-Area Computation program (SAC; Fulford, 1994) is an implementation of the slope-area method that computes a peak-discharge estimate from inputs of water-surface slope (from surveyed HWMs), channel geometry, and estimated channel roughness. SAC is a command line program written in Fortran that reads input data from a formatted text file and prints results to another formatted text file. Preparing the input file can be time-consuming and prone to errors. This document describes the SAC graphical user interface (GUI), a cross-platform “wrapper” application that prepares the SAC input file, executes the program, and helps the user interpret the output. The SAC GUI is an update and enhancement of the slope-area method (SAM; Hortness, 2004; Berenbrock, 1996), an earlier spreadsheet tool used to aid field personnel in the completion of a slope-area measurement. The SAC GUI reads survey data, develops a plan-view plot, water-surface profile, and cross-section plots, and develops the SAC input file. The SAC GUI also develops HEC-2 files that can be imported into HEC–RAS.
1994-01-01
Dosimetry: analysis of dosimetry in two dewar/liquid nitrogen systems. Time estimate: one hour for setup, irradiation, and TLD reading/analysis. Remaining fragments of the source report refer to electron and hole trapping at the boundary, to the relationship between current and dose for irradiated devices, and to carriers being collected across the vertical junction within a diffusion length.
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.
2014-01-01
Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers, changing the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors the low outliers. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. EMA with MGB (EMA-MGB) test was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B17B-GB and EMA-MGB RPD-boxplot results showed that the median RPDs across all streamgaging stations for the 10-, 1-, and 0.2-percent AEPs, computed using station skew, were approximately zero. As the AEP flow estimates decreased (that is, from 10 to 0.2 percent AEP) the variability in the RPDs increased, indicating that the AEP flow estimate was greater for EMA-MGB when compared to B17B-GB. There was only one RPD greater than 100 percent for the 10- and 1-percent AEP estimates, whereas 19 RPDs exceeded 100 percent for the 0.2-percent AEP. At streamgaging stations with low-outlier data, historical peak-flow data, or both, RPDs ranged from −84 to 262 percent for the 0.2-percent AEP flow estimate. When streamgaging stations were separated by the presence of historical peak-flow data (that is, no low outliers or censored peaks) or by low outlier peak-flow data (no historical data), the results showed that RPD variability was greatest for the 0.2-AEP flow estimates, indicating that the treatment of historical and (or) low-outlier data was different between methods and that method differences were most influential when estimating the less probable AEP flows (1, 0.5, and 0.2 percent). When regional skew information was weighted with the station skew, B17B-GB estimates were generally higher than the EMA-MGB estimates for any given AEP. This was related to the different regional skews and mean square error used in the weighting procedure for each flood frequency analysis. 
The B17B-GB weighted skew analysis used a more positive regional skew determined in USGS Water Supply Paper 2433 (Thomas and others, 1997), while the EMA-MGB analysis used a more negative regional skew with a lower mean square error determined from a Bayesian generalized least squares analysis. Regional groupings of streamgaging stations reflected differences in physiographic and climatic characteristics. Potentially influential low flows (PILFs) were more prevalent in arid regions of the State, and generally AEP flows were larger with EMA-MGB than with B17B-GB for gaging stations with PILFs. In most cases EMA-MGB curves would fit the largest floods more accurately than B17B-GB. In areas of the State with more baseflow, such as along the Mogollon Rim and the White Mountains, streamgaging stations generally had fewer PILFs and more positive skews, causing estimated AEP flows to be larger with B17B-GB than with EMA-MGB. The effect of including regional skew was similar for all regions, and the observed pattern was increasingly greater B17B-GB flows (more negative RPDs) with each decreasing AEP quantile. A variation on a goodness-of-fit test statistic was used to describe each method’s ability to fit the largest floods. The mean absolute percent difference between the measured peak flows and the log-Pearson Type 3 (LP3)-estimated flows, for each method, was averaged over the 90th, 75th, and 50th percentiles of peak-flow data at each site. In most percentile subsets, EMA-MGB on average had smaller differences (1 to 3 percent) between the observed and fitted value, suggesting that the EMA-MGB-LP3 distribution is fitting the observed peak-flow data more precisely than B17B-GB. The smallest EMA-MGB percent differences occurred for the greatest 10 percent (90th percentile) of the peak-flow data. When stations were analyzed by USGS NWIS peak flow qualification code groups, the stations with historical peak flows and no low outliers had average percent differences as high as 11 percent greater for B17B-GB, indicating that EMA-MGB utilized the historical information to fit the largest observed floods more accurately. A resampling procedure was used in which 1,000 random subsamples were drawn, each comprising one-half of the observed data. An LP3 distribution was fit to each subsample using B17B-GB and EMA-MGB methods, and the predicted 1-percent AEP flows were compared to those generated from distributions fit to the entire dataset. With station skew, the two methods were similar in the median percent difference, but with weighted skew EMA-MGB estimates were generally better. At two gages where B17B-GB appeared to perform better, a large number of peak flows were deemed to be PILFs by the MGB test, although they did not appear to depart significantly from the trend of the data (step or dogleg appearance). At two gages where EMA-MGB performed better, the MGB identified several PILFs that were affecting the fitted distribution of the B17B-GB method. Monte Carlo simulations were run for the LP3 distribution using different skews and with different assumptions about the expected number of historical peaks. The primary benefit of running Monte Carlo simulations is that the underlying distribution statistics are known, meaning that the true 1-percent AEP is known. The results showed that EMA-MGB performed as well or better in situations where the LP3 distribution had a zero or positive skew and historical information. 
When the skew for the LP3 distribution was negative, EMA-MGB performed significantly better than B17B-GB, producing less biased estimates that more closely matched the true 1-percent AEP for the scenarios with 1, 2, and 10 historical floods.
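The study above compares the two methods using the relative percent difference (RPD) between AEP flow estimates. The minimal sketch below assumes the common definition of RPD (difference divided by the mean of the two estimates, expressed in percent); the function name and example numbers are illustrative and not taken from the USGS report.

```python
# Relative percent difference between two AEP flow estimates, assuming the
# usual definition: difference divided by the mean of the two estimates.
def relative_percent_difference(q_ema_mgb: float, q_b17b_gb: float) -> float:
    mean_q = (q_ema_mgb + q_b17b_gb) / 2.0
    return 100.0 * (q_ema_mgb - q_b17b_gb) / mean_q

# Example: hypothetical 0.2-percent AEP flow estimates (cubic feet per second)
print(relative_percent_difference(q_ema_mgb=52_000.0, q_b17b_gb=41_000.0))  # ~23.7 percent
```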
Chen, Bihua; Yu, Tao; Ristagno, Giuseppe; Quan, Weilun; Li, Yongqin
2014-10-01
Defibrillation current has been shown to be a clinically more relevant dosing unit than energy. However, the effects of average and peak current in determining shock outcome are still unclear. The aim of this study was to investigate the relationship between average current, peak current, and defibrillation success when different biphasic waveforms were employed. Ventricular fibrillation (VF) was electrically induced in 22 domestic male pigs. Animals were then randomized to receive defibrillation using one of two different biphasic waveforms. A grouped up-and-down defibrillation threshold-testing protocol was used to maintain the average success rate in the neighborhood of 50%. In 14 animals (Study A), defibrillations were accomplished with either biphasic truncated exponential (BTE) or rectilinear biphasic waveforms. In eight animals (Study B), shocks were delivered using two BTE waveforms that had identical peak current but different waveform durations. Both average and peak currents were associated with defibrillation success when BTE and rectilinear waveforms were investigated. However, when pathway impedance was less than 90 Ω for the BTE waveform, the bivariate correlation coefficient was 0.36 (p=0.001) for the average current, but only 0.21 (p=0.06) for the peak current in Study A. In Study B, a higher defibrillation success rate (67.9% vs. 38.8%, p<0.001) was observed when the waveform delivered more average current (14.9±2.1 A vs. 13.5±1.7 A, p<0.001) while the peak current was kept unchanged. In this porcine model of VF, average current was a more adequate parameter than peak current for describing the therapeutic dosage when biphasic defibrillation waveforms were used. The institutional protocol number: P0805. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A High Peak Current Source for the CEBAF Injector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yunn, Byung; Sinclair, Charles; Krafft, Geoffrey
1992-07-01
The CEBAF accelerator can drive high power IR and UV FELs, if a high peak current source is added to the existing front end. We present a design for a high peak current injector which is compatible with simultaneous operation of the accelerator for cw nuclear physics (NP) beam. The high peak current injector provides 60 A peak current in 2 psec long bunches carrying 120 pC charge at 7.485 MHz. At 10 MeV that beam is combined with the 5 MeV NP beam (0.13 pC, 2 psec long bunches at 1497 MHz) in an energy combination chicane for simultaneous acceleration in the injector linac. The modifications to the low-energy NP transport are described. Results of optical and beam dynamics calculations for both high peak current and NP beams in combined operation are presented.
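The quoted bunch parameters can be checked with simple arithmetic, assuming the peak current is approximated as bunch charge divided by bunch length (a rectangular-bunch simplification not stated in the abstract):

```python
# Back-of-the-envelope check of the injector parameters quoted above.
bunch_charge = 120e-12      # coulombs (120 pC)
bunch_length = 2e-12        # seconds (2 psec)
repetition_rate = 7.485e6   # Hz

peak_current = bunch_charge / bunch_length         # ~60 A for a rectangular bunch
average_current = bunch_charge * repetition_rate   # ~0.9 mA delivered on average

print(f"peak current ~ {peak_current:.0f} A, average current ~ {average_current*1e3:.2f} mA")
```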
Lunt, Heather; Roiz De Sa, Daniel; Roiz De Sa, Julia; Allsopp, Adrian
2013-07-01
To provide an accurate estimate of peak oxygen uptake (VO2 peak) for British Royal Navy Personnel aged between 18 and 39, comparing a gold standard treadmill based maximal exercise test with a submaximal one-mile walk test. Two hundred military personnel consented to perform a treadmill-based VO2 peak test and two one-mile walk tests round an athletics track. The estimated VO2 peak values from three different one-mile walk equations were compared to directly measured VO2 peak values from the treadmill-based test. One hundred participants formed a validation group from which a new equation was derived and the other 100 participants formed the cross-validation group. Existing equations underestimated the VO2 peak values of the fittest personnel and overestimated the VO2 peak of the least aerobically fit by between 2% and 18%. The new equation derived from the validation group has less bias, the highest correlation with the measured values (r = 0.83), and classified the most people correctly according to the Royal Navy's Fitness Test standards, producing the fewest false positives and false negatives combined (9%). The new equation will provide a more accurate estimate of VO2 peak for a British military population aged 18 to 39. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Influence of Sample Size of Polymer Materials on Aging Characteristics in the Salt Fog Test
NASA Astrophysics Data System (ADS)
Otsubo, Masahisa; Anami, Naoya; Yamashita, Seiji; Honda, Chikahisa; Takenouchi, Osamu; Hashimoto, Yousuke
Polymer insulators have been used worldwide because of superior properties such as light weight, high mechanical strength, and good hydrophobicity compared with porcelain insulators. In this paper, the effect of sample size on the aging characteristics in the salt fog test is examined. Leakage current was measured using a 100 MHz AD board or a 100 MHz digital oscilloscope and separated into three components (conductive current, corona discharge current, and dry-band arc discharge current) by using FFT and a newly proposed current differential method. The cumulative charge of each component was estimated automatically by a personal computer. As a result, when the sample size increased under the same average applied electric field, the peak values of the leakage current and of each component current increased. In particular, the cumulative charge and the arc length of the dry-band arc discharge increased remarkably with increasing gap length.
Flood frequency estimates and documented and potential extreme peak discharges in Oklahoma
Tortorelli, Robert L.; McCabe, Lan P.
2001-01-01
Knowledge of the magnitude and frequency of floods is required for the safe and economical design of highway bridges, culverts, dams, levees, and other structures on or near streams; and for flood plain management programs. Flood frequency estimates for gaged streamflow sites were updated, documented extreme peak discharges for gaged and miscellaneous measurement sites were tabulated, and potential extreme peak discharges for Oklahoma streamflow sites were estimated. Potential extreme peak discharges, derived from the relation between documented extreme peak discharges and contributing drainage areas, can provide valuable information concerning the maximum peak discharge that could be expected at a stream site. Potential extreme peak discharge is useful in conjunction with flood frequency analysis to give the best evaluation of flood risk at a site. Peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years were estimated for 352 gaged streamflow sites. Data through the 1999 water year were used from streamflow-gaging stations with at least 8 years of record within Oklahoma or about 25 kilometers into the bordering states of Arkansas, Kansas, Missouri, New Mexico, and Texas. These sites were in unregulated basins, and basins affected by regulation, urbanization, and irrigation. Documented extreme peak discharges and associated data were compiled for 514 sites in and near Oklahoma, 352 with streamflow-gaging stations and 162 at miscellaneous measurement sites or streamflow-gaging stations with short records, with a total of 671 measurements. The sites are fairly well distributed statewide; however, many streams, large and small, have never been monitored. Potential extreme peak-discharge curves were developed for streamflow sites in hydrologic regions of the state based on documented extreme peak discharges and the contributing drainage areas. Two hydrologic regions, east and west, were defined using 98 degrees 15 minutes longitude as the dividing line.
Prediction of Maximal Aerobic Capacity in Severely Burned Children
Porro, Laura; Rivero, Haidy G.; Gonzalez, Dante; Tan, Alai; Herndon, David N.; Suman, Oscar E.
2011-01-01
Introduction Maximal oxygen uptake (VO2 peak) is an indicator of cardiorespiratory fitness, but requires expensive equipment and a relatively high technical skill level. Purpose The aim of this study is to provide a formula for estimating VO2 peak in burned children, using information obtained without expensive equipment. Methods Children with ≥40% total body surface area (TBSA) burned underwent a modified Bruce treadmill test to assess VO2 peak at 6 months after injury. We recorded gender, age, %TBSA, %3rd degree burn, height, weight, treadmill time, maximal speed, maximal grade, and peak heart rate, and applied McHenry’s select algorithm to extract important independent variables and robust multiple regression to establish prediction equations. Results Forty-two children, 7 to 17 years old, were tested. The robust multiple regression model provided the equation: VO2 peak = 10.33 − 0.62 × Age (years) + 1.88 × Treadmill Time (min) + 2.3 × Gender (females = 0, males = 1). The correlation between measured and estimated VO2 peak was R=0.80. We then validated the equation with a group of 33 burned children, which yielded a correlation between measured and estimated VO2 peak of R=0.79. Conclusions Using only a treadmill and easily gathered information, VO2 peak can be estimated in children with burns. PMID:21316155
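The prediction equation reported in the abstract can be applied directly; note that the abstract does not state the units of the estimate, and inputs outside the study population (burned children 7-17 years old tested on the modified Bruce protocol) are not covered by it.

```python
# The robust-regression prediction equation quoted above, implemented as given.
def predict_vo2_peak(age_years: float, treadmill_time_min: float, is_male: bool) -> float:
    gender = 1.0 if is_male else 0.0
    return 10.33 - 0.62 * age_years + 1.88 * treadmill_time_min + 2.3 * gender

# Example: a 12-year-old boy with 10 minutes of treadmill time (hypothetical inputs)
print(predict_vo2_peak(age_years=12, treadmill_time_min=10, is_male=True))  # ~24.0
```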
NASA Astrophysics Data System (ADS)
Gillies, D. M.; Knudsen, D. J.; Donovan, E.; Jackel, B. J.; Gillies, R.; Spanswick, E.
2017-12-01
We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) to derive a characteristic emission height for the optical emissions. In our 10 events we find that an altitude of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of ASIs, and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canada (RISR-C), both of which are consistent with a characteristic emission height of 200 km. We also present the spatial error associated with georeferencing REdline Geospace Observatory (REGO) and THEMIS ASIs and how it applies to altitude projections of the mapped image. Utilizing this error we validate the estimated altitude of red-line aurora using two methods: triangulation between ASIs and field-aligned current profiles derived from magnetometers on board the Swarm satellites.
Estimation of traveltime and longitudinal dispersion in streams in West Virginia
Wiley, Jeffrey B.; Messinger, Terence
2013-01-01
Traveltime and dispersion data are important for understanding and responding to spills of contaminants in waterways. The U.S. Geological Survey (USGS), in cooperation with West Virginia Bureau for Public Health, Office of Environmental Health Services, compiled and evaluated traveltime and longitudinal dispersion data representative of many West Virginia waterways. Traveltime and dispersion data were not available for streams in the northwestern part of the State. Compiled data were compared with estimates determined from national equations previously published by the USGS. The evaluation summarized procedures and examples for estimating traveltime and dispersion on streams in West Virginia. National equations developed by the USGS can be used to predict traveltime and dispersion for streams located in West Virginia, but the predictions will be less accurate than those made with graphical interpolation between measurements. National equations for peak concentration, velocity of the peak concentration, and traveltime of the leading edge had root mean square errors (RMSE) of 0.426 log units (127 percent), 0.505 feet per second (ft/s), and 3.78 hours (h). West Virginia data fit the national equations for peak concentration, velocity of the peak concentration, and traveltime of the leading edge with RMSE of 0.139 log units (38 percent), 0.630 ft/s, and 3.38 h, respectively. The national equation for maximum possible velocity of the peak concentration exceeded 99 percent and 100 percent of observed values from the national data set and West Virginia-only data set, respectively. No RMSE was reported for time of passage of a dye cloud, as estimated using the national equation; however, the estimates made using the national equations had a root mean square error of 3.82 h when compared to data gathered for this study. Traveltime and dispersion estimates can be made from the plots of traveltime as a function of streamflow and location for streams with plots available, but estimates can be made using the national equations for streams without plots. The estimating procedures are not valid for regulated stream reaches that were not individually studied or streamflows outside the limits studied. Rapidly changing streamflow and inadequate mixing across the stream channel affect traveltime and dispersion, and reduce the accuracy of estimates. Increases in streamflow typically result in decreases in the peak concentration and traveltime of the peak concentration. Decreases in streamflow typically result in increases in the peak concentration and traveltime of the peak concentration. Traveltimes will likely be less than those determined using the estimating equations and procedures if the spill is in the center of the stream, and traveltimes will likely be greater than those determined using the estimating equations and procedures if the spill is near the streambank.
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
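The weighting described above can be illustrated as inverse-variance averaging of the two independent estimates in log space, which is consistent with the description but simplified here; the variable names and example numbers below are illustrative only.

```python
# Minimal sketch: combine an at-site flow estimate with a regional-regression
# estimate by inverse-variance weighting of the log-transformed estimates.
import math

def weighted_log_estimate(q_at_site, var_at_site, q_regression, var_regression):
    x1, x2 = math.log10(q_at_site), math.log10(q_regression)
    w1, w2 = 1.0 / var_at_site, 1.0 / var_regression   # inverse-variance weights
    x_w = (w1 * x1 + w2 * x2) / (w1 + w2)
    var_w = 1.0 / (w1 + w2)                            # variance of the weighted estimate
    return 10 ** x_w, var_w

# Example: hypothetical 1-percent AEP flows and variances (log10 units squared)
q_w, var_w = weighted_log_estimate(q_at_site=12_000, var_at_site=0.040,
                                   q_regression=9_500, var_regression=0.060)
print(f"weighted estimate ~ {q_w:,.0f}, variance ~ {var_w:.3f} (log10 units)")
```

The weighted estimate has a smaller variance than either input, which is the motivation for combining the two independent estimates.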
Mandic, Sandra; Walker, Robert; Stevens, Emily; Nye, Edwin R; Body, Dianne; Barclay, Leanne; Williams, Michael J A
2013-01-01
Compared with symptom-limited cardiopulmonary exercise testing (CPET), timed walking tests are a cheaper, well-tolerated, and simpler alternative for assessing exercise capacity in coronary artery disease (CAD) patients. We developed multivariate models for predicting peak oxygen consumption (VO2peak) from 6-minute walk test (6MWT) distance and peak shuttle walk speed for elderly stable CAD patients. Fifty-eight CAD patients (age 72 [SD 6] years, 66% men) completed: (1) CPET with expired gas analysis on a cycle ergometer, (2) an incremental 10-meter shuttle walk test, (3) two 6MWTs, (4) anthropometric assessment, and (5) 30-second chair stands. Linear regression models were developed for estimating VO2peak from 6MWT distance and peak shuttle walk speed as well as demographic, anthropometric, and functional variables. Measured VO2peak was significantly related to 6MWT distance (r = 0.719, p < 0.001) and peak shuttle walk speed (r = 0.717, p < 0.001). The addition of demographic (age, gender), anthropometric (height, weight, body mass index, body composition), and functional characteristics (30-second chair stands) increased the accuracy of predicting VO2peak from both 6MWT distance and peak shuttle walk speed (from 51% to 73% of VO2peak variance explained). Addition of demographic, anthropometric, and functional characteristics improves the accuracy of the VO2peak estimate based on walking tests in elderly individuals with stable CAD. Implications for Rehabilitation: Timed walking tests are a cheaper, well-tolerated, and simpler alternative for assessing exercise capacity in cardiac patients. Walking tests could be used to assess an individual's functional capacity and response to therapeutic interventions when symptom-limited cardiopulmonary exercise testing is not practical or not necessary for clinical reasons. Addition of demographic, anthropometric, and functional characteristics improves the accuracy of the peak oxygen consumption estimate based on 6-minute walk test distance and peak shuttle walk speed in elderly patients with coronary artery disease.
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of (1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and (2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set was collected during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
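The three-step procedure described above (rate the index velocity, compute instantaneous discharge, low-pass filter out the tides) can be sketched as follows. A simple linear rating, a constant channel area, and a Butterworth filter are used here as stand-ins for the study's calibration and tidal-filtering choices, so this is only an outline of the idea, not the study's method.

```python
# Sketch of net (residual) discharge from an ultrasonic index velocity.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import linregress

def net_discharge(index_velocity, adcp_index_v, adcp_mean_v, area_m2,
                  dt_hours=0.25, cutoff_hours=30.0):
    # (1) index-velocity rating from concurrent ADCP measurements
    rating = linregress(adcp_index_v, adcp_mean_v)
    mean_velocity = rating.intercept + rating.slope * np.asarray(index_velocity)
    # (2) instantaneous discharge (m^3/s); a constant area is a simplification,
    #     since in a tidal channel the area varies with stage
    discharge = mean_velocity * area_m2
    # (3) low-pass filter with a cutoff longer than the tidal periods to leave
    #     only the residual (net) discharge
    nyquist = 0.5 / dt_hours                       # cycles per hour
    b, a = butter(4, (1.0 / cutoff_hours) / nyquist)
    return filtfilt(b, a, discharge)
```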
Peak-picking fundamental period estimation for hearing prostheses.
Howard, D M
1989-09-01
A real-time peak-picking fundamental period estimation device is described which is used in advanced hearing prostheses for the totally and profoundly deafened. The operation of the peak picker is compared with three well-established fundamental frequency estimation techniques: the electrolaryngograph, which is used as a "standard"; hardware implementations of the cepstral technique; and the Gold/Rabiner parallel processing algorithm. These comparisons illustrate and highlight some of the important advantages and disadvantages that characterize the operation of these techniques. The special requirements of the hearing prostheses are discussed with respect to the operation of each device, and the choice of the peak picker is found to be felicitous in this application.
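The idea of peak-picking period estimation can be illustrated with a short software sketch: pick successive positive peaks in a smoothed speech waveform and take their spacing as the fundamental period. This is only an illustration of the general technique; the real-time hardware device described above is not reproduced here, and the filter settings below are assumptions.

```python
# Illustrative peak-picking fundamental frequency (F0) estimator.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_f0(signal, fs, lowpass_hz=900.0, f0_min=50.0, f0_max=500.0):
    b, a = butter(4, lowpass_hz / (fs / 2.0))
    smoothed = filtfilt(b, a, signal)
    # Peaks must be at least one maximum-F0 period apart and reasonably large
    peaks, _ = find_peaks(smoothed, distance=int(fs / f0_max),
                          height=0.1 * np.max(np.abs(smoothed)))
    if len(peaks) < 2:
        return None                       # unvoiced or too little signal
    periods = np.diff(peaks) / fs         # seconds between successive peaks
    f0 = 1.0 / np.median(periods)
    return f0 if f0_min <= f0 <= f0_max else None

# Example: a 100 Hz synthetic "voiced" signal sampled at 16 kHz
fs = 16_000
t = np.arange(0, 0.5, 1.0 / fs)
print(estimate_f0(np.sin(2 * np.pi * 100 * t), fs))   # ~100.0
```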
Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E
2004-05-01
The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (P(v)CO(2)) and venous carbon dioxide content (C(v)CO(2)) for estimation of cardiac output (Q(c)), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Q(c) measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO(2) and CCO(2) and estimating the venous-arterial carbon dioxide content difference (C(v-a)CO(2)). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO(2)), volume of expired carbon dioxide (VCO(2)), end-tidal carbon dioxide (P(ET)CO(2)), arterial carbon dioxide partial pressure (P(a)CO(2)), venous carbon dioxide partial pressure (P(v)CO(2)), and C(v-a)CO(2)] and cardiorespiratory variables derived from the measured variables [Q(c), stroke volume (V(s)), and arteriovenous oxygen difference (C(a-v)O(2))]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding P(a)CO(2) and C(v-a)CO(2), also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in the measurement of mixed venous PCO(2) and CCO(2) for estimation of Q(c) during incremental treadmill exercise testing, especially at high-intensity and peak exercise.
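The indirect Fick principle underlying this measurement is a simple ratio: cardiac output equals CO2 output divided by the venous-arterial CO2 content difference. The sketch below shows that calculation with illustrative numbers; the CO2 contents would in practice come from the rebreathing procedure and end-tidal/arterial estimates, as described above.

```python
# Indirect Fick calculation of cardiac output from CO2 exchange (illustrative).
def cardiac_output_indirect_fick(vco2_l_min, cv_co2_ml_per_100ml, ca_co2_ml_per_100ml):
    """Cardiac output (L/min) = VCO2 / venous-arterial CO2 content difference."""
    cv_a_diff = (cv_co2_ml_per_100ml - ca_co2_ml_per_100ml) / 100.0  # L CO2 per L blood
    return vco2_l_min / cv_a_diff

# Example: VCO2 = 2.4 L/min, CvCO2 = 56 mL/100 mL, CaCO2 = 44 mL/100 mL
print(cardiac_output_indirect_fick(2.4, 56.0, 44.0))  # = 20.0 L/min
```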
NASA Technical Reports Server (NTRS)
Vallejo, J.J.; Hejduk, M.D.; Stamey, J. D.
2015-01-01
Satellite conjunction risk is typically evaluated through the probability of collision (Pc), which considers both the conjunction geometry and the uncertainties in the two state estimates. Conjunction events are initially discovered through Joint Space Operations Center (JSpOC) screenings, usually seven days before the time of closest approach (TCA); however, JSpOC continues to track the objects and issue conjunction updates. Changes in the state estimates and the reduced propagation time cause the Pc to change as the event develops. These changes are a combination of potentially predictable development and unpredictable changes in the state estimate covariance. An operationally useful datum is the peak Pc: if it can reasonably be inferred that the peak Pc value has passed, then risk assessment can be conducted against this peak value, and if that value is below the remediation level, the event intensity can be relaxed. Can the peak Pc location be reasonably predicted?
Parameters of triggered-lightning flashes in Florida and Alabama
NASA Astrophysics Data System (ADS)
Fisher, R. J.; Schnetzer, G. H.; Thottappillil, R.; Rakov, V. A.; Uman, M. A.; Goldberg, J. D.
1993-12-01
Channel base currents from triggered lightning were measured at the NASA Kennedy Space Center, Florida, during summer 1990 and at Fort McClellan, Alabama, during summer 1991. Additionally, 16-mm cinematic records with 3- or 5-ms resolution were obtained for all flashes, and streak camera records were obtained for three of the Florida flashes. The 17 flashes analyzed here contained 69 strokes, all lowering negative charge from cloud to ground. Statistics on interstroke interval, no-current interstroke interval, total stroke duration, total stroke charge, total stroke action integral (∫i²dt), return stroke current wave front characteristics, time to half peak value, and return stroke peak current are presented. Return stroke current pulses, characterized by rise times of the order of a few microseconds or less and peak values in the range of 4 to 38 kA, were found not to occur until after any preceding current at the bottom of the lightning channel fell below the noise level of less than 2 A. Current pulses associated with M components, characterized by slower rise times (typically tens to hundreds of microseconds) and peak values generally smaller than those of the return stroke pulses, occurred during established channel current flow of some tens to some hundreds of amperes. A relatively strong positive correlation was found between return stroke current average rate of rise and current peak. There was essentially no correlation between return stroke current peak and 10-90% rise time or between return stroke peak and the width of the current waveform at half of its peak value. Parameters of the lightning flashes triggered in Florida and Alabama are similar to each other but are different from those of triggered lightning recorded in New Mexico during the 1981 Thunderstorm Research International Program. Continuing currents that follow return stroke current peaks and last for more than 10 ms exhibit a variety of wave shapes that we have subdivided into four categories. All such continuing currents appear to start with a current pulse presumably associated with an M component. A brief summary of lightning parameters important for lightning protection, in a form convenient for practical use, is presented in an appendix.
Methods for estimating magnitude and frequency of peak flows for natural streams in Utah
Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.
2007-01-01
Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
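Regional regression equations of this kind are typically fit in log space, yielding equations of the form Q_T = a · A^b · P^c for each recurrence interval and region. The sketch below shows the general technique with ordinary least squares and synthetic data; the study itself used more elaborate regression procedures and 23 candidate basin characteristics, so this is only an outline of the idea.

```python
# Generic log-log regional regression for a flood quantile (synthetic data).
import numpy as np

# Drainage area (mi^2), mean annual precipitation (in), and at-site 100-year
# peak flow (ft^3/s) for a handful of hypothetical gaging stations
area = np.array([12.0, 55.0, 140.0, 390.0, 820.0])
precip = np.array([14.0, 18.0, 22.0, 16.0, 25.0])
q100 = np.array([900.0, 2_800.0, 6_500.0, 9_800.0, 21_000.0])

X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(precip)])
coef, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)
log_a, b, c = coef
print(f"Q100 ~ {10**log_a:.1f} * A^{b:.2f} * P^{c:.2f}")
```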
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.; Roberts, J.W.
1990-01-01
Multiple-regression equations are presented for estimating flood-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years at ungaged sites on rural, unregulated streams in Ohio. The average standard errors of prediction for the equations range from 33.4% to 41.4%. Peak discharge estimates determined by log-Pearson Type III analysis using data collected through the 1987 water year are reported for 275 streamflow-gaging stations. Ordinary least-squares multiple-regression techniques were used to divide the State into three regions and to identify a set of basin characteristics that help explain station-to-station variation in the log-Pearson estimates. Contributing drainage area, main-channel slope, and storage area were identified as suitable explanatory variables. Generalized least-squares procedures, which include historical flow data and account for differences in the variance of flows at different gaging stations, spatial correlation among gaging station records, and variable lengths of station record, were used to estimate the regression parameters. Weighted peak-discharge estimates computed as a function of the log-Pearson Type III and regression estimates are reported for each station. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site located on the same stream. Limitations and shortcomings cited in an earlier report on the magnitude and frequency of floods in Ohio are addressed in this study. Geographic bias is no longer evident for the Maumee River basin of northwestern Ohio. No bias is found to be associated with the forested-area characteristic for the range used in the regression analysis (0.0 to 99.0%), nor is this characteristic significant in explaining peak discharges. Surface-mined area likewise is not significant in explaining peak discharges, and the regression equations are not biased when applied to basins having approximately 30% or less surface-mined area. Analyses of residuals indicate that the equations tend to overestimate flood-peak discharges for basins having approximately 30% or more surface-mined area. (USGS)
Current trends in Natural Gas Flaring Observed from Space with VIIRS
NASA Astrophysics Data System (ADS)
Zhizhin, M. N.; Elvidge, C.; Baugh, K.
2017-12-01
The five-year survey of natural gas flaring in 2012-2016 has been completed with nighttime Visible Infrared Imaging Radiometer Suite (VIIRS) data. The survey identifies flaring site locations, annual duty cycle, and provides an estimate of the flared gas volumes in methane equivalents. VIIRS is particularly well suited for detecting and measuring the radiant emissions from gas flares through the collection of shortwave and near-infrared data at night, recording the peak radiant emissions from flares. The total flared gas volume is estimated at 140 ± 30 billion cubic meters (BCM) per year, corresponding to 3.5% of global natural gas production. While Russia leads in terms of flared gas volume (>20 BCM), the U.S. has the largest number of flares (8,199 of 19,057 worldwide). The two countries have opposite trends in flaring: the U.S. reached its peak in 2015, whereas Russia reached its minimum. On the regional scale in the U.S., Texas has the maximum number of flares (3,749), with North Dakota, the second highest, having one half of this number (2,003). The number of flares in most of the states has decreased in the last 3 years, following the trend in oil prices. The presentation will compare the global estimates and regional trends observed in the U.S. regions. Preliminary estimates for global gas flaring in 2017 will be presented.
Jennings, M.E.; Thomas, W.O.; Riggs, H.C.
1994-01-01
For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan area regression equations into a micro-computer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.
User's Manual for Program PeakFQ, Annual Flood-Frequency Analysis Using Bulletin 17B Guidelines
Flynn, Kathleen M.; Kirby, William H.; Hummel, Paul R.
2006-01-01
Estimates of flood flows having given recurrence intervals or probabilities of exceedance are needed for design of hydraulic structures and floodplain management. Program PeakFQ provides estimates of instantaneous annual-maximum peak flows having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (annual-exceedance probabilities of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002, respectively). As implemented in program PeakFQ, the Pearson Type III frequency distribution is fit to the logarithms of instantaneous annual peak flows following Bulletin 17B guidelines of the Interagency Advisory Committee on Water Data. The parameters of the Pearson Type III frequency curve are estimated by the logarithmic sample moments (mean, standard deviation, and coefficient of skewness), with adjustments for low outliers, high outliers, historic peaks, and generalized skew. This documentation provides an overview of the computational procedures in program PeakFQ, provides a description of the program menus, and provides an example of the output from the program.
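The core PeakFQ computation described above can be sketched in a few lines: fit a Pearson Type III distribution to the base-10 logarithms of the annual peaks by sample moments, then read off flow quantiles at selected annual exceedance probabilities. The Bulletin 17B adjustments for low and high outliers, historic peaks, and generalized skew weighting are omitted from this sketch, and the example record is synthetic.

```python
# Minimal log-Pearson Type III fit by sample moments (no Bulletin 17B adjustments).
import numpy as np
from scipy.stats import pearson3, skew

def lp3_quantiles(annual_peaks, aeps=(0.50, 0.10, 0.02, 0.01, 0.002)):
    logq = np.log10(np.asarray(annual_peaks, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    g = skew(logq, bias=False)                 # station skew of the log flows
    return {aep: 10 ** pearson3.ppf(1.0 - aep, g, loc=mean, scale=std) for aep in aeps}

# Example with a short synthetic record of annual peak flows (ft^3/s)
peaks = [1200, 3400, 800, 5600, 2100, 950, 7800, 1500, 4300, 2600]
for aep, q in lp3_quantiles(peaks).items():
    print(f"{aep*100:>5.1f}-percent AEP flow ~ {q:,.0f}")
```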
Bisese, James A.
1995-01-01
Methods are presented for estimating the peak discharges of rural, unregulated streams in Virginia. A Pearson Type III distribution is fitted to the logarithms of the unregulated annual peak-discharge records from 363 stream-gaging stations in Virginia to estimate the peak discharge at these stations for recurrence intervals of 2 to 500 years. Peak-discharge characteristics for 284 unregulated stations are divided into eight regions based on physiographic province, and regressed on basin characteristics, including drainage area, main channel length, main channel slope, mean basin elevation, percentage of forest cover, mean annual precipitation, and maximum rainfall intensity. Regression equations for each region are computed by use of the generalized least-squares method, which accounts for spatial and temporal correlation between nearby gaging stations. This regression technique weights the significance of each station to the regional equation based on the length of records collected at each station, the correlation between annual peak discharges among the stations, and the standard deviation of the annual peak discharge for each station. Drainage area proved to be the only significant explanatory variable in four regions, while other regions have as many as three significant variables. Standard errors of the regression equations range from 30 to 80 percent. Alternate equations using drainage area only are provided for the five regions with more than one significant explanatory variable. Methods and sample computations are provided to estimate peak discharges at gaged and ungaged sites in Virginia for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, and to adjust the regression estimates for sites on gaged streams where nearby gaging-station records are available.
Parrett, Charles; Omang, R.J.; Hull, J.A.
1983-01-01
Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West region. The study area was divided into six regions for the peak discharge analysis, and multiple regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50 and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
Testing and Analysis of NEXT Ion Engine Discharge Cathode Assembly Wear
NASA Technical Reports Server (NTRS)
Domonkos, Matthew T.; Foster, John E.; Soulas, George C.; Nakles, Michael
2003-01-01
Experimental and analytical investigations were conducted to predict the wear of the discharge cathode keeper in the NASA Evolutionary Xenon Thruster. The ion current to the keeper was found to be highly dependent upon the beam current, and the average beam current density was nearly identical to that of the NSTAR thruster for comparable beam current density. The ion current distribution was highly peaked toward the keeper orifice. A deterministic wear assessment predicted keeper orifice erosion to the same diameter as the cathode tube after processing 375 kg of xenon. A rough estimate of discharge cathode assembly life limit due to sputtering indicated that the current design exceeds the qualification goal of 405 kg. Probabilistic wear analysis showed that the plasma potential and the sputter yield contributed most to the uncertainty in the wear assessment. It was recommended that fundamental experimental and modeling efforts focus on accurately describing the plasma potential and the sputtering yield.
Lightning charge moment changes estimated by high speed photometric observations from ISS
NASA Astrophysics Data System (ADS)
Hobara, Y.; Kono, S.; Suzuki, K.; Sato, M.; Takahashi, Y.; Adachi, T.; Ushio, T.; Suzuki, M.
2017-12-01
Optical observations by CCD cameras on orbiting satellites are generally used to derive the spatio-temporal global distributions of CGs and ICs. However, electrical properties of the lightning, such as peak current and lightning charge, are difficult to obtain from space. In particular, CGs with considerably large lightning charge moment changes (CMC) and peak currents are the crucial parameters for generating red sprites and elves, respectively, so it would be useful to obtain these parameters from space. In this paper, we obtained lightning optical signatures by using high speed photometric observations from the International Space Station GLIMS (Global Lightning and Sprite MeasurementS, JEM-EF) mission. These optical signatures were compared quantitatively with radio signatures, recognized as truth values, derived from ELF electromagnetic wave observations on the ground to verify the accuracy of the optically derived values. High correlation (R > 0.9) was obtained between lightning optical irradiance and current moment, and a quantitative relational expression between these two parameters was derived. Rather high correlation (R > 0.7) was also obtained between the integrated irradiance and the lightning CMC. Our results indicate the possibility of deriving lightning electrical properties (current moment and CMC) from optical measurements from space. Moreover, we hope that these results will also contribute to the forthcoming French microsatellite mission TARANIS.
1988-10-26
...concentrated into this off-axis peak is then considered. Estimates of the source brightness (extraction ion diode source current density divided by the square...) ...radioactive contamination of the accelerator. One possible scheme for avoiding this problem is to use extraction geometry ion diodes to focus the ion beams... annular region. These results will be coupled to two simple models of extraction ion diodes to determine the ion source brightness requirements.
Tompuri, Tuomo; Lintu, Niina; Laitinen, Tomi; Lakka, Timo A
2017-08-09
Exercise testing by cycle ergometer allows observation of the interaction between oxygen uptake (VO2) and workload (W), and the VO2/W-slope can be used as a diagnostic tool. Correspondingly, peak oxygen uptake (VO2peak) can be estimated from the maximal workload. We aim to determine reference values for the VO2/W-slope among prepubertal children and define the agreement between estimated and measured VO2peak. A total of 38 prepubertal children (20 girls) performed a maximal cycle ergometer test with respiratory gas analysis. VO2/W-slopes were computed using linear regression. Agreement analysis by Bland and Altman for estimated and measured VO2peak was carried out, including limits of agreement (LA). Determinants for the VO2/W-slopes and the estimation bias were defined. The VO2/W-slope was ≥9.4 in both girls and boys and did not change with exercise level, but the oxygen cost of exercise was higher among physically more active children. Estimated VO2peak had a 6.4% coefficient of variation, and LA varied from 13% underestimation to 13% overestimation. Bias had a trend towards underestimation along lean-mass-proportional VO2peak. The primary determinant of estimation bias was the VO2/W-slope (β = −0.65; P < 0.001). The reference values for the VO2/W-slope among healthy prepubertal children were similar to those published for adults and among adolescents. Estimated and measured VO2peak should not be considered interchangeable because of the variation in the relationship between VO2 and W. On the other hand, variation in the relationship between VO2 and W enables the VO2/W-slope to be used as a diagnostic tool. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
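The two analyses described above are standard and can be sketched briefly: (1) the VO2/W-slope from a linear regression of oxygen uptake on workload during the incremental test, and (2) Bland-Altman limits of agreement between estimated and measured VO2peak. The data below are synthetic placeholders, not the study's measurements.

```python
# (1) VO2/W-slope and (2) Bland-Altman agreement, with illustrative data.
import numpy as np
from scipy.stats import linregress

# (1) VO2 (mL/min) versus workload (W) during the ramp
workload = np.array([25, 50, 75, 100, 125])
vo2 = np.array([600, 880, 1150, 1400, 1660])
slope = linregress(workload, vo2).slope        # mL/min per W
print(f"VO2/W-slope ~ {slope:.1f} mL/min per W")

# (2) Bland-Altman bias and limits of agreement (L/min)
measured = np.array([1.9, 2.4, 2.1, 2.8, 2.2])
estimated = np.array([2.0, 2.3, 2.2, 2.6, 2.3])
diff = estimated - measured
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias ~ {bias:.2f} L/min, limits of agreement ~ {loa[0]:.2f} to {loa[1]:.2f}")
```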
Estimating the Impacts of Direct Load Control Programs Using GridPIQ, a Web-Based Screening Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Seemita; Thayer, Brandon L.; Barrett, Emily L.
In direct load control (DLC) programs, utilities can curtail the demand of participating loads to contractually agreed-upon levels during periods of critical peak load, thereby reducing stress on the system, generation cost, and required transmission and generation capacity. Participating customers receive financial incentives. The impacts of implementing DLC programs extend well beyond peak shaving. There may be a shift of load proportional to the interrupted load to the times before or after a DLC event, and different load shifts have different consequences. Tools that can quantify the impacts of such programs on load curves, peak demand, emissions, and fossil fuel costs are currently lacking. The Grid Project Impact Quantification (GridPIQ) screening tool includes a Direct Load Control module, which takes into account project-specific inputs as well as the larger system context in order to quantify the impacts of a given DLC program. This allows users (utilities, researchers, etc.) to test and compare different program specifications and their impacts.
Modeling the Geologic History of Mt. Sharp
NASA Technical Reports Server (NTRS)
Pascuzzo, A.; Allen, C.
2015-01-01
Gale is an approximately 155 km diameter crater located on the martian dichotomy boundary (5 deg S 138 deg E). Gale is estimated to have formed 3.8 - 3.5 Gya, in the late Noachian or early Hesperian. Mt. Sharp, at the center of Gale Crater, is a crescent shaped sedimentary mound that rises 5.2 km above the crater floor. Gale is one of the few craters that has a peak reaching higher than the rim of the crater wall. The Curiosity rover is currently fighting to find its way across a dune field at the northwest base of the mound searching for evidence of habitability. This study used orbital images and topographic data to refine models for the geologic history of Mt. Sharp by analyzing its morphological features. In addition, it assessed the possibility of a peak ring in Gale. The presence of a peak ring can offer important information to how Mt. Sharp was formed and eroded early in Gale's history.
A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model
NASA Astrophysics Data System (ADS)
Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.
2017-12-01
Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical to projecting global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, the soil respiration simulated by ESMs still has large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improved simulations of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, simulations of soil water potential must be carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons and to overestimate soil respiration during non-peak growing seasons. Simulations involving increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation of soil respiration during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide an adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, D. Y.; Sun, Y.
We have investigated carrier transport in SiO2/nc-Si/SiO2 multilayers by room-temperature current-voltage measurements. Resonant tunneling signatures accompanied by current peaks are observed. Carrier transport in the multilayers was analyzed by plots of ln(I/V²) as a function of 1/V and of ln(I) as a function of V^(1/2). The results suggest that, besides film quality, the nc-Si and barrier sub-layer thicknesses are important parameters that restrict carrier transport. When both thicknesses are small, direct tunneling dominates carrier transport, resonant tunneling occurs only at certain voltages, and current peaks related to multi-resonant tunneling can be observed, but with peak-to-valley current ratio (PVCR) values smaller than 1.5. When the barrier thickness is increased, trap-related and even high-field-related tunneling is excited, so that multiple current peaks cannot be observed clearly; only one current peak, with a higher PVCR value of 7.7, can be observed. If the thickness of the nc-Si is large enough, quantum confinement is not so strong, and a broad current peak with a PVCR value as high as 60 can be measured, which may be due to the small energy difference between the split energy levels in the nc-Si quantum dots. A wide size distribution may cause uncontrollability of the peak voltages.
Climate Change Impacts on Peak Electricity Consumption: US vs. Europe.
NASA Astrophysics Data System (ADS)
Auffhammer, M.
2016-12-01
It has been suggested that climate change impacts on the electric sector will account for the majority of global economic damages by the end of the current century and beyond. This finding is at odds with the relatively modest increase in climate driven impacts on consumption. Comprehensive high frequency load balancing authority level data have not been used previously to parameterize the relationship between electric demand and temperature for any major economy. Using statistical models we analyze multi-year data from load balancing authorities in the United States of America and the European Union, which are responsible for more than 90% of the electricity delivered to residential, industrial, commercial and agricultural customers. We couple the estimated response functions between total daily consumption and daily peak load with an ensemble of downscaled GCMs from the CMIP5 archive to simulate climate change driven impacts on both outcomes. We show moderate and highly spatially heterogeneous changes in consumption. The results of our peak load simulations, however, suggest significant changes in the intensity and frequency of peak events throughout the United States and Europe. As the electricity grid is built to endure maximum load, which usually occurs on the hottest day of the year, our findings have significant implications for the construction of costly peak generating and transmission capacity.
Gao, Yuqin; Yuan, Yu; Wang, Huaizhi; Schmidt, Arthur R; Wang, Kexuan; Ye, Liu
2017-05-01
The urban agglomeration polder type of flood control is a common flood control pattern in the eastern plain area and in some of the secondary river basins in China. A HEC-HMS model of the Qinhuai River basin based on this flood control pattern was established for simulating basin runoff, examining the impact of urban agglomeration polders on flood events, and estimating the effects of urbanization on the hydrological processes of the urban agglomeration polders in the Qinhuai River basin. The results indicate that the urban agglomeration polders could increase the peak flow and flood volume. The smaller the scale of the flood, the more significant the influence of the polders on the flood volume. The distribution of the city circle polders has no obvious impact on the flood volume but does affect the peak flow. The closer a polder is to the basin outlet, the smaller its influence on peak flows. As the level of urbanization of the city circle polders gradually increases, flood volumes and peak flows gradually increase compared to those at the current level of urbanization (impervious rate of 20%). The potential change in flood volume and peak flow with increasing impervious rate shows a linear relationship.
Oota, Shinichi; Hatae, Yuta; Amada, Kei; Koya, Hidekazu; Kawakami, Mitsuyasu
2010-09-15
Although microbial biochemical oxygen demand (BOD) sensors utilizing redox mediators have attracted much attention as a rapid BOD measurement method, few attempts have been made to apply mediated BOD biosensors to flow injection analysis systems. In this work, a mediated BOD sensor system operating in flow injection mode, constructed by combining an immobilized microbial reactor with an electrochemical flow cell in a three-electrode configuration, has been developed to estimate the BOD of shochu distillery wastewater (SDW). It was demonstrated that mediated sensing was realized by employing phosphate buffer containing potassium hexacyanoferrate as the carrier. The output current was found to yield a peak with each sample injection and to result from reoxidation of the reduced mediator at the electrode. Employing the peak area as the sensor response, the effects of the flow rate and the pH of the carrier on the sensitivity were investigated. The sensor system, using a microorganism with high SDW-assimilation capacity, showed good performance and proved to be suitable for estimation of the BOD of SDW. Copyright 2010 Elsevier B.V. All rights reserved.
Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.
Pai, Manjunath P; Hong, Joseph; Krop, Lynne
2017-04-01
Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R² = 0.774, 3.55%) and covariate-independent (R² = 0.804, 3.28%) models when peaks and troughs are measured but not when measurements are restricted to troughs only (R² = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.
Sando, Steven K.; Sando, Roy; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The climatic conditions of the specific time period during which peak-flow data were collected at a given streamflow-gaging station (hereinafter referred to as gaging station) can substantially affect how well the peak-flow frequency (hereinafter referred to as frequency) results represent long-term hydrologic conditions. Differences in the timing of the periods of record can result in substantial inconsistencies in frequency estimates for hydrologically similar gaging stations. Potential for inconsistency increases with decreasing peak-flow record length. The representativeness of the frequency estimates for a short-term gaging station can be adjusted by various methods including weighting the at-site results in association with frequency estimates from regional regression equations (RREs) by using the Weighted Independent Estimates (WIE) program. Also, for gaging stations that cannot be adjusted by using the WIE program because of regulation or drainage areas too large for application of RREs, frequency estimates might be improved by using record extension procedures, including a mixed-station analysis using the maintenance of variance type I (MOVE.1) procedure. The U.S. Geological Survey, in cooperation with the Montana Department of Transportation and the Montana Department of Natural Resources and Conservation, completed a study to provide adjusted frequency estimates for selected gaging stations through water year 2011. The purpose of Chapter D of this Scientific Investigations Report is to present adjusted frequency estimates for 504 selected streamflow-gaging stations in or near Montana based on data through water year 2011. Estimates of peak-flow magnitudes for the 66.7-, 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities are reported. These annual exceedance probabilities correspond to the 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. The at-site frequency estimates were adjusted by weighting with frequency estimates from RREs using the WIE program for 438 selected gaging stations in Montana. These 438 selected gaging stations (1) had periods of record less than or equal to 40 years, (2) represented unregulated or minor regulation conditions, and (3) had drainage areas less than about 2,750 square miles. The weighted-average frequency estimates obtained by weighting with RREs generally are considered to provide improved frequency estimates. In some cases, there are substantial differences among the at-site frequency estimates, the regression-equation frequency estimates, and the weighted-average frequency estimates. In these cases, thoughtful consideration should be applied when selecting the appropriate frequency estimate. Some factors that might be considered when selecting the appropriate frequency estimate include (1) whether the specific gaging station has peak-flow characteristics that distinguish it from most other gaging stations used in developing the RREs for the hydrologic region; and (2) the length of the peak-flow record and the general climatic characteristics during the period when the peak-flow data were collected.
For critical structure-design applications, a conservative approach would be to select the higher of the at-site frequency estimate and the weighted-average frequency estimate. The mixed-station MOVE.1 procedure generally was applied in cases where three or more gaging stations were located on the same large river and some of the gaging stations could not be adjusted using the weighted-average method because of regulation or drainage areas too large for application of RREs. The mixed-station MOVE.1 procedure was applied to 66 selected gaging stations on 19 large rivers. The general approach for using mixed-station record extension procedures to adjust at-site frequencies involved (1) determining appropriate base periods for the gaging stations on the large rivers, (2) synthesizing peak-flow data for the gaging stations with incomplete peak-flow records during the base periods by using the mixed-station MOVE.1 procedure, and (3) conducting frequency analysis on the combined recorded and synthesized peak-flow data for each gaging station. Frequency estimates for the combined recorded and synthesized datasets for 66 gaging stations with incomplete peak-flow records during the base periods are presented. The uncertainties in the mixed-station record extension results are difficult to directly quantify; thus, it is important to understand the intended use of the estimated frequencies based on analysis of the combined recorded and synthesized datasets. The estimated frequencies are considered general estimates of frequency relations among gaging stations on the same stream channel that might be expected if the gaging stations had been gaged during the same long-term base period. However, because the mixed-station record extension procedures involve secondary statistical analysis with accompanying errors, the uncertainty of the frequency estimates is larger than would be obtained by collecting systematic records for the same number of years in the base period.
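The MOVE.1 (maintenance of variance extension, type I) step described above synthesizes peaks at a short-record station from concurrent peaks at a long-record index station so that the mean and variance of the (log) record are preserved. The sketch below shows that step in its standard form; the variable names and data are illustrative only and no Bulletin 17B frequency analysis is included.

```python
# Minimal MOVE.1 record extension in log space (illustrative data).
import numpy as np

def move1_extend(y_short, x_concurrent, x_to_fill):
    """Synthesize peaks at the short-record station for years when only the
    index station (x) was observed, preserving the mean and variance of log y."""
    y, x = np.log10(y_short), np.log10(x_concurrent)
    slope = np.sign(np.corrcoef(x, y)[0, 1]) * y.std(ddof=1) / x.std(ddof=1)
    return 10 ** (y.mean() + slope * (np.log10(x_to_fill) - x.mean()))

# Concurrent annual peaks (ft^3/s) at the short-record and index stations,
# plus index-station peaks for years missing at the short-record station
y_short = np.array([2100, 5400, 1800, 3300, 7600])
x_concurrent = np.array([3000, 8100, 2500, 4600, 11000])
x_missing_years = np.array([1900, 6400, 15000])
print(move1_extend(y_short, x_concurrent, x_missing_years))
```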
Auffhammer, Maximilian; Baylis, Patrick; Hausman, Catherine H
2017-02-21
It has been suggested that climate change impacts on the electric sector will account for the majority of global economic damages by the end of the current century and beyond [Rose S, et al. (2014) Understanding the Social Cost of Carbon: A Technical Assessment]. The empirical literature has shown significant increases in climate-driven impacts on overall consumption, yet has not focused on the cost implications of the increased intensity and frequency of extreme events driving peak demand, which is the highest load observed in a period. We use comprehensive, high-frequency data at the level of load balancing authorities to parameterize the relationship between average or peak electricity demand and temperature for a major economy. Using statistical models, we analyze multiyear data from 166 load balancing authorities in the United States. We couple the estimated temperature response functions for total daily consumption and daily peak load with 18 downscaled global climate models (GCMs) to simulate climate change-driven impacts on both outcomes. We show moderate and heterogeneous changes in consumption, with an average increase of 2.8% by the end of the century. The results of our peak load simulations, however, suggest significant increases in the intensity and frequency of peak events throughout the United States, assuming today's technology and electricity market fundamentals. As the electricity grid is built to endure maximum load, our findings have significant implications for the construction of costly peak generating capacity, suggesting additional peak capacity costs of up to 180 billion dollars by the end of the century under business-as-usual.
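A minimal sketch of the kind of temperature response function described above: a piecewise-linear fit of daily peak load to daily maximum temperature with a single comfort breakpoint, reapplied to warmer temperatures to project the change in peak load. The breakpoint, the synthetic data, and the uniform +3 C shift are assumptions for illustration; the study's specification (per balancing authority, with fixed effects and downscaled GCM fields) is considerably richer.

```python
import numpy as np

def fit_peak_load_response(tmax_c, peak_load, breakpoint_c=18.0):
    """Piecewise-linear response of daily peak load to daily maximum temperature:
    one slope below and one above a comfort breakpoint (heating vs cooling)."""
    t = np.asarray(tmax_c, float)
    hdd = np.maximum(breakpoint_c - t, 0.0)   # degrees below the breakpoint
    cdd = np.maximum(t - breakpoint_c, 0.0)   # degrees above the breakpoint
    X = np.column_stack([np.ones_like(t), hdd, cdd])
    (b0, b_cold, b_hot), *_ = np.linalg.lstsq(X, np.asarray(peak_load, float), rcond=None)

    def predict(tnew):
        tnew = np.asarray(tnew, float)
        return (b0 + b_cold * np.maximum(breakpoint_c - tnew, 0.0)
                   + b_hot * np.maximum(tnew - breakpoint_c, 0.0))
    return predict

# Synthetic "observed" year of daily maximum temperatures (C) and peak loads (GW).
rng = np.random.default_rng(0)
tmax = rng.normal(20.0, 8.0, 365)
load = 50 + 0.4 * np.maximum(18 - tmax, 0) + 1.2 * np.maximum(tmax - 18, 0) + rng.normal(0, 1, 365)

predict = fit_peak_load_response(tmax, load)
delta = (predict(tmax + 3.0) - predict(tmax)).mean()   # crude stand-in for a GCM projection
print(f"average change in daily peak load under a uniform +3 C shift: {delta:.2f} GW")
```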
Electromagnetic pulse-induced current measurement device
NASA Astrophysics Data System (ADS)
Gandhi, Om P.; Chen, Jin Y.
1991-08-01
To develop safety guidelines for exposure to high fields associated with an electromagnetic pulse (EMP), it is necessary to devise techniques that would measure the peak current induced in the human body. The main focus of this project was to design, fabricate, and test a portable, self-contained stand-on device that would measure and hold the peak current and the integrated charge Q. The design specifications of the EMP-Induced Current Measurement Device are as follows: rise time of the current pulse, 5 ns; peak current, 20-600 A; charge Q, 0-20 microcoulombs. The device uses a stand-on parallel-plate bilayer sensor and fast high-frequency circuit that are well-shielded against spurious responses to high incident fields. Since the polarity of the incident peak electric field of the EMP may be either positive or negative, the induced peak current can also be positive or negative. Therefore, the device is designed to respond to either of these polarities and measure and hold both the peak current and the integrated charge, which are simultaneously displayed on two separate 3-1/2 digit displays. The prototype device has been preliminarily tested with the EMPs generated at the Air Force Weapons Laboratory (ALECS facility) at Kirtland AFB, New Mexico.
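The two quantities the device holds, peak current of either polarity and integrated charge, are easy to state digitally. The sketch below computes them from a sampled current waveform; the double-exponential test pulse and its time constants are made-up values chosen only to fall within the stated design range.

```python
import numpy as np

def peak_and_charge(i_amps, dt_s):
    """Signed peak current (the sample with the largest magnitude, so either
    polarity is captured) and integrated charge Q = integral of i dt."""
    i = np.asarray(i_amps, float)
    return i[np.argmax(np.abs(i))], np.trapz(i, dx=dt_s)

# Illustrative double-exponential EMP-induced current pulse with a few-ns rise time.
t = np.arange(0.0, 200e-9, 0.1e-9)
i = 400.0 * (np.exp(-t / 50e-9) - np.exp(-t / 2e-9))
peak, q = peak_and_charge(i, 0.1e-9)
print(f"peak current {peak:.0f} A, charge {q * 1e6:.1f} microcoulombs")
```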
NASA Astrophysics Data System (ADS)
De Niel, J.; Demarée, G.; Willems, P.
2017-10-01
Governments, policy makers, and water managers are pushed by recent socioeconomic developments such as population growth and increased urbanization inclusive of occupation of floodplains to impose very stringent regulations on the design of hydrological structures. These structures need to withstand storms with return periods typically ranging between 1,250 and 10,000 years. Such quantification involves extrapolations of systematically measured instrumental data, possibly complemented by quantitative and/or qualitative historical data and paleoflood data. The accuracy of the extrapolations is, however, highly unclear in practice. In order to evaluate extreme river peak flow extrapolation and accuracy, we studied historical and instrumental data of the past 500 years along the Meuse River. We moreover propose an alternative method for the estimation of the extreme value distribution of river peak flows, based on weather types derived by sea level pressure reconstructions. This approach results in a more accurate estimation of the tail of the distribution, where current methods are underestimating the design levels related to extreme high return periods. The design flood for a 1,250 year return period is estimated at 4,800 m3 s-1 for the proposed method, compared with 3,450 and 3,900 m3 s-1 for a traditional method and a previous study.
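For context, a conventional at-site estimate of a T-year design flood is the 1 - 1/T quantile of an extreme value distribution fitted to annual maxima. The sketch below does this with a GEV fit in SciPy; the synthetic peak series is illustrative, and the paper's weather-type-conditioned method is deliberately more elaborate than this single fit.

```python
import numpy as np
from scipy import stats

def design_flood(annual_peaks, return_period_years):
    """Fit a GEV distribution to annual peak flows and return the discharge with
    annual exceedance probability 1/T (the T-year design flood)."""
    shape, loc, scale = stats.genextreme.fit(np.asarray(annual_peaks, float))
    return stats.genextreme.ppf(1.0 - 1.0 / return_period_years, shape, loc=loc, scale=scale)

# Illustrative use with a synthetic 120-year series of annual maxima (m3 s-1).
peaks = stats.genextreme.rvs(-0.1, loc=1500.0, scale=400.0, size=120, random_state=42)
print(round(design_flood(peaks, 1250)))
```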
Are cooler surfaces a cost-effect mitigation of urban heat islands?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomerantz, Melvin
2017-04-20
Much research has gone into technologies to mitigate urban heat islands by making urban surfaces cooler by increasing their albedos. To be practical, the benefit of the technology must be greater than its cost. Here, this report provides simple methods for quantifying the maxima of some benefits that albedo increases may provide. The method used is an extension of an earlier paper that estimated the maximum possible electrical energy saving achievable in an entire city in a year by a change of albedo of its surfaces. The present report estimates the maximum amounts and monetary savings of avoided CO2 emissions and the decreases in peak power demands. As examples, for several warm cities in California, a 0.2 increase in albedo of pavements is found to reduce CO2 emissions by < 1 kg per m2 per year. At the current price of CO2 reduction in California, the monetary saving is < US$ 0.01 per year per m2 modified. The resulting maximum peak-power reductions are estimated to be < 7% of the base power of the city. In conclusion, the magnitudes of the savings are such that decision-makers should choose carefully which urban heat island mitigation techniques are cost effective.
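Scaling the per-square-metre maxima quoted above to a whole modified area is straightforward arithmetic; the sketch below does so for a hypothetical 10 km2 of pavement (the area is an assumption, the per-m2 bounds are the report's).

```python
# Upper-bound annual savings from a 0.2 pavement-albedo increase over a
# hypothetical 10 km2 of modified pavement, using the report's per-m2 maxima.
pavement_area_m2 = 10e6                           # assumed modified area (hypothetical)
co2_saved_kg_per_yr = 1.0 * pavement_area_m2      # < 1 kg CO2 per m2 per year
money_saved_usd_per_yr = 0.01 * pavement_area_m2  # < US$ 0.01 per m2 per year
print(f"<= {co2_saved_kg_per_yr / 1e6:.0f} kt CO2/yr, <= US$ {money_saved_usd_per_yr:,.0f}/yr")
```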
An early solar dynamo prediction: Cycle 23 is approximately cycle 22
NASA Technical Reports Server (NTRS)
Schatten, Kenneth H.; Pesnell, W. Dean
1993-01-01
In this paper, we briefly review the 'dynamo' and 'geomagnetic precursor' methods of long-term solar activity forecasting. These methods depend upon the most basic aspect of dynamo theory to predict future activity: future magnetic field arises directly from the magnification of pre-existing magnetic field. We then generalize the dynamo technique, allowing the method to be used at any phase of the solar cycle, through the development of the 'Solar Dynamo Amplitude' (SODA) index. This index is sensitive to the magnetic flux trapped within the Sun's convection zone but insensitive to the phase of the solar cycle. Since magnetic fields inside the Sun can become buoyant, one may think of the acronym SODA as describing the amount of buoyant flux. Using the present value of the SODA index, we estimate that the next cycle's smoothed peak activity will be about 210 +/- 30 solar flux units for the 10.7 cm radio flux and a sunspot number of 170 +/- 25. This suggests that solar cycle #23 will be large, comparable to cycle #22. The estimated peak is expected to occur near 1999.7 +/- 1 year. Since the current approach is novel (using data prior to solar minimum), these estimates may improve when the upcoming solar minimum is reached.
Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression. All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application. Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis. This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described.
Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.
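A reduced sketch of the two ideas in this chapter, regional regression and envelope-curve checking, is shown below: a one-variable log-log regression of the 100-year peak flow on drainage area, and a test of whether an estimate exceeds an envelope curve. The station data, the single explanatory variable, the ordinary-least-squares fit, and the envelope coefficients are all placeholders; the published equations use several basin characteristics and GLS/WLS estimation.

```python
import numpy as np

def fit_regional_equation(drainage_area_mi2, q100_cfs):
    """One-variable sketch of a regional regression: log10(Q100) = a + b*log10(A).
    The published equations use several basin characteristics and GLS/WLS
    estimation; ordinary least squares on drainage area alone is illustrative."""
    x = np.log10(np.asarray(drainage_area_mi2, float))
    y = np.log10(np.asarray(q100_cfs, float))
    b, a = np.polyfit(x, y, 1)
    return a, b

def within_envelope(q_cfs, drainage_area_mi2, coef=10000.0, exponent=0.5):
    """Check an estimate against an envelope curve Qmax = coef * A**exponent
    (placeholder coefficients, not the report's regional curves)."""
    return q_cfs <= coef * drainage_area_mi2 ** exponent

a, b = fit_regional_equation([25, 120, 480, 900], [900, 2600, 7100, 11000])
q100 = 10.0 ** (a + b * np.log10(300.0))   # hypothetical ungaged 300 mi2 basin
print(round(q100), within_envelope(q100, 300.0))
```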
Optical monitoring of ion beam Y-Ba-Cu-O sputtering
NASA Astrophysics Data System (ADS)
Klein, J. D.; Yen, A.
1990-11-01
The emission spectra resulting from ion beam sputtering a Y-Ba-Cu-O target were observed as a function of beam voltage and beam current. The spectra were relatively clean with several peaks readily attributed to each of Y, Ba, and Ar. Monitoring of copper and oxygen was more difficult with a single CuO peak and one O peak evident. The intensities of the cation peaks were linear with respect to beam voltage above 400 V. Since target current was found not to be directly proportional to beam current, target power was defined as the product of beam voltage and target current. The response of cation peak height to changes in target power was linear and similar for variations of either beam voltage or target current.
Lawlor, Sean M.
2004-01-01
Stream-restoration projects using natural stream designs typically are based on channel configurations that can accommodate a wide range of streamflow and sediment-transport conditions without excessive erosion or deposition. Bankfull discharge is an index of streamflow considered to be closely related to channel shape, size, and slope (channel morphology). Because of the need for more information about the relation between channel morphology and bankfull discharge, the U.S. Geological Survey (USGS), in cooperation with the Montana Department of Transportation and the U.S. Department of Agriculture-Lolo National Forest, conducted a study to collect channel-morphology and bankfull-discharge data at gaged sites and use these data to improve current (2004) methods of estimation of bankfull discharge and various design-peak discharges at ungaged sites. This report presents channel-morphology characteristics, bankfull discharge, and various design-peak discharges for 41 sites in western Montana. Channel shape, size, and slope and bankfull discharge were determined at 41 active or discontinued USGS streamflow-gaging sites in western Montana. The recurrence interval for the bankfull discharge for this study ranged from 1.0 to 4.4 years with a median value of 1.5 years. The relations between channel-morphology characteristics and various design-peak discharges were examined using regression analysis. The analyses showed that the only characteristics that were significant for all peak discharges were either bankfull width or bankfull cross-sectional area. Bankfull discharge at ungaged sites in most of the study area can be estimated by application of a multiplier after determining the 2-year peak discharge at the ungaged site. The multiplier, which is the ratio of bankfull discharge to the 2-year peak discharge determined at the 41 sites, ranged from 0.21 to 3.7 with a median value of 0.84. Regression relations between bankfull discharge and drainage area and between bankfull width and drainage area were examined for three ranges of mean annual precipitation. The results of the regression analyses indicated that both drainage area and mean annual precipitation were significantly related (p values less than 0.05) to bankfull discharge.
Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.
Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z
2017-03-01
A nonparametric model of smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant energy constraint. A linear model did not fit the data, but a second-order model fit it accurately, so higher-order models were not required. Results showed that smooth muscle force response is not linearly related to the stimulation power.
Wave transport in the South Australian Basin
NASA Astrophysics Data System (ADS)
Bye, John A. T.; James, Charles
2018-02-01
The specification of the dynamics of the air-sea boundary layer is of fundamental importance to oceanography. There is a voluminous literature on the subject; however, a strong link between the velocity profile due to waves and that due to turbulent processes in the wave boundary layer does not appear to have been established. Here we specify the velocity profile due to the wave field using the Toba spectrum, and the velocity profile due to turbulence at the sea surface by the net effect of slip and wave breaking in which slip is the dominant process. Under this specification, the inertial coupling of the two fluids for a constant viscosity Ekman layer yields two independent estimates for the frictional parameter (which is a function of the 10 m drag coefficient and the peak wave period) of the coupled system, one of which is due to the surface Ekman current and the other to the peak wave period. We show that the median values of these two estimates, evaluated from a ROMS simulation over the period 2011-2012 at a station on the Southern Shelf in the South Australian Basin, are similar, in strong support of the air-sea boundary layer model. On integrating over the planetary boundary layer we obtain the Ekman transport (w*^2/f) and the wave transport due to a truncated Toba spectrum (w*zB/κ) where w* is the friction velocity in water, f is the Coriolis parameter, κ is von Karman's constant and zB = gT^2/(8π^2) is the depth of wave influence in which g is the acceleration of gravity and T is the peak wave period. A comparison of daily estimates shows that the wave transports from the truncated Toba spectrum and from the SWAN spectral model are highly correlated (r = 0.82) and that on average the Toba estimates are about 86% of the SWAN estimates due to the omission of low frequency tails of the spectra, although for wave transports less than about 0.5 m2 s-1 the estimates are almost equal. In the South Australian Basin the Toba wave transport is on average about 42% of the Ekman transport.
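The two transports are defined explicitly in the abstract, so they can be evaluated directly. The sketch below does so for illustrative values of the friction velocity, Coriolis parameter, and peak wave period (none taken from the paper).

```python
import numpy as np

def transports(w_star, f, peak_period_s, kappa=0.4, g=9.81):
    """Ekman transport w*^2/f and truncated-Toba wave transport w*·zB/kappa,
    with zB = g·T^2/(8·pi^2), as defined in the abstract (units: m2 s-1)."""
    z_b = g * peak_period_s ** 2 / (8.0 * np.pi ** 2)
    return w_star ** 2 / f, w_star * z_b / kappa

# Illustrative values only (not from the paper): w* = 0.006 m/s in water,
# |f| = 8.3e-5 s-1 near 35 degrees south, and a 10 s peak wave period.
ekman, wave = transports(0.006, 8.3e-5, 10.0)
print(f"Ekman {ekman:.2f} m2/s, wave {wave:.2f} m2/s, ratio {wave / ekman:.2f}")
```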
NASA Technical Reports Server (NTRS)
Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.
1999-01-01
OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. In humans, it can be estimated during exercise echocardiography, and correlates with maximum oxygen uptake better than ejection fraction or rate-pressure product.
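A sketch of the simplified peak power product described in the conclusions, using the common sphygmomanometric approximation MAP = DBP + (SBP - DBP)/3; the study may define mean arterial pressure differently, and the rest and exercise numbers below are illustrative.

```python
def simplified_peak_power(sbp_mmhg, dbp_mmhg, peak_aortic_flow_l_min):
    """Simplified peak power (SPP): mean arterial pressure times peak aortic flow.
    MAP is approximated as DBP + (SBP - DBP)/3, a common sphygmomanometric
    estimate (the study's exact formulation may differ). Returns watts
    (1 mmHg = 133.322 Pa, 1 L/min = 1/60000 m3/s)."""
    map_pa = (dbp_mmhg + (sbp_mmhg - dbp_mmhg) / 3.0) * 133.322
    return map_pa * peak_aortic_flow_l_min / 60000.0

# Delta SPP between peak exercise and rest (all values illustrative).
rest = simplified_peak_power(120, 80, 20)
exercise = simplified_peak_power(180, 80, 45)
print(f"rest {rest:.1f} W, exercise {exercise:.1f} W, delta SPP {exercise - rest:.1f} W")
```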
SU-E-T-146: Beam Energy Spread Estimate Based On Bragg Peak Measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anferov, V; Derenchuk, V; Moore, R
2015-06-15
Purpose: ProNova is installing and commissioning a two room proton therapy system in Knoxville, TN. Beam energy out of the 230MeV cyclotron was measured on Jan 24, 2015. Cyclotron beam was delivered into a Zebra multi layered IC detector calibrated in terms of penetration range in water. The analysis of the measured Bragg peak determines penetration range in water which can be subsequently converted into proton beam energy. We extended this analysis to obtain an estimate of the beam energy spread out of the cyclotron. Methods: Using Monte Carlo simulations we established the correlation between Bragg peak shape parameters (width at 50% and 80% dose levels, distal falloff) and penetration range for a monoenergetic proton beam. For large uniform field impinging on a small area detector, we observed linear dependence of each Bragg peak parameter on beam penetration range as shown in Figure A. Then we studied how this correlation changes when the shape of Bragg peak is distorted by the beam focusing conditions. As shown in Figure B, small field size or diverging beam cause Bragg peak deformation predominantly in the proximal region. The distal shape of the renormalized Bragg peaks stays nearly constant. This excludes usage of Bragg peak width parameters for energy spread estimates. Results: The measured Bragg peaks had an average distal falloff of 4.86mm, which corresponds to an effective range of 35.5cm for a monoenergetic beam. The 32.7cm measured penetration range is 2.8cm less. Passage of a 230MeV proton beam through a 2.8cm thick slab of water results in a ±0.56MeV energy spread. As a final check, we confirmed agreement between shapes of the measured Bragg peak and one generated by Monte-Carlo code for proton beam with 0.56 MeV energy spread. Conclusion: Proton beam energy spread can be estimated using Bragg peak analysis.
Magnitude and Frequency of Floods on Nontidal Streams in Delaware
Ries, Kernell G.; Dillow, Jonathan J.A.
2006-01-01
Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for return frequencies ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented. The equations incorporate drainage area, forest cover, impervious area, basin storage, housing density, soil type A, and mean basin slope as explanatory variables, and have average standard errors of prediction ranging from 28 to 72 percent. Additional regression equations that incorporate drainage area and housing density as explanatory variables are presented for use in defining the effects of urbanization on peak-flow estimates throughout Delaware for the 2-year through 500-year recurrence intervals, along with suggestions for their appropriate use in predicting development-affected peak flows. Additional topics associated with the analyses performed during the study are also discussed, including: (1) the availability and description of more than 30 basin and climatic characteristics considered during the development of the regional regression equations; (2) the treatment of increasing trends in the annual peak-flow series identified at 18 gaged sites, with respect to their relations with maximum 24-hour precipitation and housing density, and their use in the regional analysis; (3) calculation of the 90-percent confidence interval associated with peak-flow estimates from the regional regression equations; and (4) a comparison of flood-frequency estimates at gages used in a previous study, highlighting the effects of various improved analytical techniques.
Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations
NASA Astrophysics Data System (ADS)
Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.
2015-08-01
This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. in the Bulletin of the Seismological Society of America 103 (2013), derived using events with moment magnitude (M) ≥ 5.0, 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates of PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using borehole recordings had the smallest standard deviation among the estimated magnitudes and produced more stable and robust magnitude estimates. This suggests that incorporating borehole strong ground-motion records immediately available after the occurrence of large earthquakes can provide robust and accurate magnitude estimation.
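The basic magnitude-estimation step, inverting a ground motion prediction equation for magnitude given observed PGV and distance and averaging over stations, can be sketched as below. The functional form and coefficients are placeholders, not those of Eshaghi et al. or the new GMPEs developed in the study.

```python
import numpy as np

def magnitude_from_pgv(pgv_cm_s, dist_km, c0=-4.0, c1=1.0, c2=-1.5):
    """Invert a generic GMPE of the form log10(PGV) = c0 + c1*M + c2*log10(R)
    for magnitude M and average over stations. The coefficients and functional
    form are placeholders, not those of the study's GMPEs."""
    pgv = np.asarray(pgv_cm_s, float)
    r = np.asarray(dist_km, float)
    m = (np.log10(pgv) - c0 - c2 * np.log10(r)) / c1
    return m.mean(), m.std(ddof=1)

# Three hypothetical stations (PGV in cm/s, hypocentral distance in km).
print(magnitude_from_pgv([30.0, 12.0, 6.0], [150.0, 250.0, 400.0]))
```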
Omang, R.J.; Parrett, Charles; Hull, J.A.
1983-01-01
Equations using channel-geometry measurements were developed for estimating mean runoff and peak flows of ungaged streams in southeastern Montana. Two separate sets of estimating equations were developed for determining mean annual runoff: one for perennial streams and one for ephemeral and intermittent streams. Data from 29 gaged sites on perennial streams and 21 gaged sites on ephemeral and intermittent streams were used in these analyses. Data from 78 gaged sites were used in the peak-flow analyses. Southeastern Montana was divided into three regions and separate multiple-regression equations for each region were developed that relate channel dimensions to peak discharge having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Channel-geometry relations were developed using measurements of the active-channel width and bankfull width. Active-channel width and bankfull width were the most significant channel features for estimating mean annual runoff for all types of streams. Use of this method requires that onsite measurements be made of channel width. The standard error of estimate for predicting mean annual runoff ranged from about 38 to 79 percent. The standard error of estimate relating active-channel width or bankfull width to peak flow ranged from about 37 to 115 percent. (USGS)
Baeza-Baeza, J J; Pous-Torres, S; Torres-Lapasió, J R; García-Alvarez-Coque, M C
2010-04-02
Peak broadening and skewness are fundamental parameters in chromatography, since they affect the resolution capability of a chromatographic column. A common practice to characterise chromatographic columns is to estimate the efficiency and asymmetry factor for the peaks of one or more solutes eluted at selected experimental conditions. This has the drawback that the extra-column contributions to the peak variance and skewness make the peak shape parameters depend on the retention time. We propose and discuss here the use of several approaches that allow the estimation of global parameters (non-dependent on the retention time) to describe the column performance. The global parameters arise from different linear relationships that can be established between the peak variance, standard deviation, or half-widths with the retention time. Some of them describe exclusively the column contribution to the peak broadening, whereas others consider the extra-column effects also. The estimation of peak skewness was also possible for the approaches based on the half-widths. The proposed approaches were applied to the characterisation of different columns (Spherisorb, Zorbax SB, Zorbax Eclipse, Kromasil, Chromolith, X-Terra and Inertsil), using the chromatographic data obtained for several diuretics and basic drugs (beta-blockers).
NASA Astrophysics Data System (ADS)
Li, Jun; Jin, Xing; Wei, Yongxiang; Zhang, Hongcai
2013-10-01
In this article, seismic records from Japan's KiK-net are used to measure the acceleration, displacement, and effective peak acceleration of each record within a set time window after the P-wave arrival, and a continuous estimate of the earthquake early warning magnitude is then obtained through statistical analysis; the Wenchuan earthquake records are used to check the method. The results show that the reliability of the early warning magnitude increases continuously as more seismic information becomes available. The largest residuals occur when acceleration is used to fit the magnitude, which may be caused by the rich high-frequency content and the large scatter of peak values in acceleration records. Using the effective peak acceleration or the peak displacement effectively reduces the influence of the high-frequency components and clearly reduces the scatter of the magnitude estimates, although peak displacement is easily affected by long-period drift. Among the components, residual amplification is least evident in the vertical direction, so the vertical effective peak acceleration is recommended for estimating the early warning magnitude. A check against the Wenchuan strong-motion records shows that the method quickly, stably, and accurately estimates the early warning magnitude of that earthquake, indicating that it is fully applicable to earthquake early warning.
Silicon-Based Quantum MOS Technology Development
2000-03-07
resonant interband tunnel diodes were demonstrated with peak current density greater than 10^4 A/cm^2; peak-to-valley current ratio exceeding 2 was...photon emission reduce the peak-to-valley current ratio and device performance. Therefore, interband tunnel devices should be more resilient to...Comparison of bipolar interband tunnel and optical devices: (a) Esaki diode biased into the valley current region and (b) optical light emitter. The Esaki
NASA Astrophysics Data System (ADS)
Puhan, Pratap Sekhar; Ray, Pravat Kumar; Panda, Gayadhar
2016-12-01
This paper presents the effectiveness of a 5/5 fuzzy rule implementation in a fuzzy logic controller used in conjunction with an indirect control technique to enhance the power quality of a single-phase system. An indirect current controller combined with the fuzzy logic controller is applied to the proposed shunt active power filter to estimate the peak reference current and the capacitor voltage. Current-controller-based pulse width modulation (CCPWM) is used to generate the switching signals of the voltage source inverter. Various simulation results are presented to verify the good behaviour of the shunt active power filter (SAPF) with the proposed two-level hysteresis current controller (HCC). For real-time verification of the shunt active power filter, the proposed control algorithm has been implemented on a laboratory setup on the dSPACE platform.
NASA Astrophysics Data System (ADS)
Ruiz-Bellet, Josep Lluís; Castelltort, Xavier; Balasch, J. Carles; Tuset, Jordi
2017-02-01
There is no clear, unified and accepted method to estimate the uncertainty of hydraulic modelling results. In historical flood reconstruction, due to the lower precision of input data, the magnitude of this uncertainty could reach a high value. With the objectives of giving an estimate of the peak flow error of a typical historical flood reconstruction with the model HEC-RAS and of providing a quick, simple uncertainty assessment that an end user could easily apply, the uncertainty of the reconstructed peak flow of a major flood in the Ebro River (NE Iberian Peninsula) was calculated with a set of local sensitivity analyses on six input variables. The peak flow total error was estimated at ±31% and water height was found to be the most influential variable on peak flow, followed by Manning's n. However, the latter, due to its large uncertainty, was the greatest contributor to peak flow total error. In addition, the peak flow resulting from HEC-RAS was compared with those obtained with the 2D model Iber and with Manning's equation; all three methods gave similar peak flows. Manning's equation gave almost the same result as HEC-RAS. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed.
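Two elements of the reconstruction lend themselves to a short sketch: Manning's equation for the peak flow and a first-order, one-at-a-time sensitivity propagation of the input errors. The channel geometry, roughness, and relative errors below are invented for illustration, and summing the perturbations in quadrature is only one reasonable convention, not necessarily the paper's.

```python
import numpy as np

def manning_peak_flow(area_m2, hydraulic_radius_m, slope, n):
    """Manning's equation, Q = (1/n) * A * R**(2/3) * S**0.5, in SI units."""
    return area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * np.sqrt(slope) / n

def local_sensitivity_error(func, params, rel_errors):
    """One-at-a-time (local) error propagation: perturb each input by its assumed
    relative error and combine the resulting relative changes in quadrature."""
    q0 = func(**params)
    total = 0.0
    for name, rel in rel_errors.items():
        perturbed = dict(params, **{name: params[name] * (1.0 + rel)})
        total += ((func(**perturbed) - q0) / q0) ** 2
    return q0, float(np.sqrt(total))

params = dict(area_m2=2500.0, hydraulic_radius_m=6.0, slope=0.0008, n=0.035)   # illustrative
rel_errors = dict(area_m2=0.15, hydraulic_radius_m=0.10, slope=0.20, n=0.25)   # illustrative
q0, rel_err = local_sensitivity_error(manning_peak_flow, params, rel_errors)
print(f"Q = {q0:.0f} m3/s, approximate relative error {rel_err:.0%}")
```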
Motl, Robert W; Fernhall, Bo
2012-03-01
To examine the accuracy of predicting peak oxygen consumption (VO(2peak)) primarily from peak work rate (WR(peak)) recorded during a maximal, incremental exercise test on a cycle ergometer among persons with relapsing-remitting multiple sclerosis (RRMS) who had minimal disability. Cross-sectional study. Clinical research laboratory. Women with RRMS (n=32) and sex-, age-, height-, and weight-matched healthy controls (n=16) completed an incremental exercise test on a cycle ergometer to volitional termination. Not applicable. Measured and predicted VO(2peak) and WR(peak). There were strong, statistically significant associations between measured and predicted VO(2peak) in the overall sample (R(2)=.89, standard error of the estimate=127.4 mL/min) and subsamples with (R(2)=.89, standard error of the estimate=131.3 mL/min) and without (R(2)=.85, standard error of the estimate=126.8 mL/min) multiple sclerosis (MS) based on the linear regression analyses. Based on the 95% confidence limits for worst-case errors, the equation predicted VO(2peak) within 10% of its true value in 95 of every 100 subjects with MS. Peak VO(2) can be accurately predicted in persons with RRMS who have minimal disability as it is in controls by using established equations and WR(peak) recorded from a maximal, incremental exercise test on a cycle ergometer.
NASA Astrophysics Data System (ADS)
Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena
2017-12-01
First and subsequent return strokes' striking distances (SDs) were determined for negative cloud-to-ground flashes from high-speed videos exhibiting the development of positive and negative leaders and the pre-return stroke phase of currents measured along a short tower. In order to improve the results, a new criterion was used for the initiation and propagation of the sustained upward connecting leader, consisting of a 4 A continuous current threshold. An advanced approach developed from the combined use of this criterion and a reverse propagation procedure, which considers the calculated propagation speeds of the leaders, was applied and revealed that SDs determined solely from the first video frame showing the upward leader can be significantly underestimated. An original approach was proposed for a rough estimate of first strokes' SD using solely records of current. This approach combines the 4 A criterion and a representative composite three-dimensional propagation speed of 0.34 × 10^6 m/s for the leaders in the last 300 m propagated distance. SDs determined under this approach were shown to be consistent with those of the advanced procedure. This approach was applied to determine the SD of 17 first return strokes of negative flashes measured at MCS, covering a wide peak-current range, from 18 to 153 kA. The estimated SDs exhibit very high dispersion and reveal great differences in relation to the SDs estimated for subsequent return strokes and strokes in triggered lightning.
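For comparison with striking distances measured this way, the classic electrogeometric-model relation SD = a·I^b (with a ≈ 10 and b ≈ 0.65 for SD in metres and I in kA) is often used in shielding analysis. The sketch below evaluates that textbook relation over the paper's peak-current range; it is literature background, not the paper's result.

```python
def electrogeometric_striking_distance(peak_current_ka, a=10.0, b=0.65):
    """Classic electrogeometric-model relation SD = a * I**b (SD in metres, I in kA),
    widely used in shielding analysis. Shown for comparison only; the paper derives
    SD from high-speed video and measured tower currents instead."""
    return a * peak_current_ka ** b

for i_ka in (18, 50, 100, 153):   # span of the paper's first-stroke peak currents
    print(i_ka, round(electrogeometric_striking_distance(i_ka)))
```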
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
Thermal power and heat energy of cloud-to-ground lightning process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xuejuan; Yuan, Ping; Xue, Simin
2016-07-15
A cloud-to-ground lightning flash with nine return strokes has been recorded using a high speed slitless spectrograph and a system composed of a fast antenna and a slow antenna. Based on the spectral data and the synchronous electric field changes that were caused by the lightning, the electrical conductivity, the channel radii, the resistance per unit length, the peak current, the thermal power at the instant of peak current, and the heat energy per unit length during the first 5 μs in the discharge channel have all been calculated. The results indicate that the channel radii have linear relationships with the peak current. The thermal power at the peak current time increases with increasing resistance, but decays exponentially with the square of the peak current.
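With a per-unit-length channel resistance in hand, the two reported quantities follow from the current record: thermal power at the peak-current instant, R'·Ipeak^2, and heat energy per unit length over the first 5 microseconds, R'·∫i^2 dt. The sketch below assumes a constant resistance and an invented 30 kA current waveform, whereas the paper derives time-resolved channel properties from the spectra.

```python
import numpy as np

def channel_heating(i_amps, dt_s, resistance_per_m_ohm):
    """Thermal power at the instant of peak current, P = R' * Ipeak**2 (W/m), and
    heat energy per unit length over the first 5 microseconds, W = R' * integral
    of i**2 dt (J/m). Assumes a constant per-unit-length resistance, whereas the
    paper derives time-resolved channel properties from the spectra."""
    i = np.asarray(i_amps, float)
    p_peak = resistance_per_m_ohm * np.max(np.abs(i)) ** 2
    n = int(round(5e-6 / dt_s)) + 1
    w_5us = resistance_per_m_ohm * np.trapz(i[:n] ** 2, dx=dt_s)
    return p_peak, w_5us

# Illustrative 30 kA double-exponential return-stroke current and 0.03 ohm/m.
t = np.arange(0.0, 20e-6, 1e-8)
i = 30e3 * (np.exp(-t / 2e-6) - np.exp(-t / 0.2e-6))
print(channel_heating(i, 1e-8, 0.03))
```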
NASA Technical Reports Server (NTRS)
Willett, J. C.; LeVine, D. M.
2002-01-01
Direct current measurements are available near the attachment point from both natural cloud-to-ground lightning and rocket-triggered lightning, but little is known about the rise time and peak amplitude of return-stroke currents aloft. We present, as functions of height, current amplitudes, rise times, and effective propagation velocities that have been estimated with a novel remote-sensing technique from data on 24 subsequent return strokes in six different lightning flashes that were triggered at the NASA Kennedy Space Center, FL, during 1987. The unique feature of this data set is the stereo pairs of still photographs, from which three-dimensional channel geometries were determined previously. This has permitted us to calculate the fine structure of the electric-field-change (E) waveforms produced by these strokes, using the current waveforms measured at the channel base together with physically reasonable assumptions about the current distributions aloft. The computed waveforms have been compared with observed E waveforms from the same strokes, and our assumptions have been adjusted to maximize agreement. In spite of the non-uniqueness of solutions derived by this technique, several conclusions seem inescapable: 1) The effective propagation speed of the current up the channel is usually significantly (but not unreasonably) faster than the two-dimensional velocity measured by a streak camera for 14 of these strokes. 2) Given the deduced propagation speed, the peak amplitude of the current waveform often must decrease dramatically with height to prevent the electric field from being over-predicted. 3) The rise time of the current wave front must always increase rapidly with height in order to keep the fine structure of the calculated field consistent with the observations.
Rosa, Sarah N.; Oki, Delwyn S.
2010-01-01
Reliable estimates of the magnitude and frequency of floods are necessary for the safe and efficient design of roads, bridges, water-conveyance structures, and flood-control projects and for the management of flood plains and flood-prone areas. StreamStats provides a simple, fast, and reproducible method to define drainage-basin characteristics and estimate the frequency and magnitude of peak discharges in Hawaii's streams using recently developed regional regression equations. StreamStats allows the user to estimate the magnitude of floods for streams where data from stream-gaging stations do not exist. Existing estimates of the magnitude and frequency of peak discharges in Hawaii can be improved with continued operation of existing stream-gaging stations and installation of additional gaging stations for areas where limited stream-gaging data are available.
Distribution of grizzly bears in the Greater Yellowstone Ecosystem, 2004
Schwartz, C.C.; Haroldson, M.A.; Gunther, K.; Moody, D.
2006-01-01
The US Fish and Wildlife Service (USFWS) proposed delisting the Yellowstone grizzly bear (Ursus arctos horribilis) in November 2005. Part of that process required knowledge of the most current distribution of the species. Here, we update an earlier estimate of occupied range (1990–2000) with data through 2004. We used kernel estimators to develop distribution maps of occupied habitats based on initial sightings of unduplicated females (n = 481) with cubs of the year, locations of radiomarked bears (n = 170), and spatially unique locations of conflicts, confrontations, and mortalities (n = 1,075). Although each data set was constrained by potential sampling bias, together they provided insight into areas in the Greater Yellowstone Ecosystem (GYE) currently occupied by grizzly bears. The current distribution of 37,258 km2 (1990–2004) extends beyond the distribution map generated with data from 1990–2000 (34,416 km2 ). Range expansion is particularly evident in parts of the Caribou–Targhee National Forest in Idaho and north of Spanish Peaks on the Gallatin National Forest in Montana.
McDonald, Scott A; van Boven, Michiel; Wallinga, Jacco
2017-07-01
Estimation of the national-level incidence of seasonal influenza is notoriously challenging. Surveillance of influenza-like illness is carried out in many countries using a variety of data sources, and several methods have been developed to estimate influenza incidence. Our aim was to obtain maximally informed estimates of the proportion of influenza-like illness that is true influenza using all available data. We combined data on weekly general practice sentinel surveillance consultation rates for influenza-like illness, virologic testing of sampled patients with influenza-like illness, and positive laboratory tests for influenza and other pathogens, applying Bayesian evidence synthesis to estimate the positive predictive value (PPV) of influenza-like illness as a test for influenza virus infection. We estimated the weekly number of influenza-like illness consultations attributable to influenza for nine influenza seasons, and for four age groups. The estimated PPV for influenza in influenza-like illness patients was highest in the weeks surrounding seasonal peaks in influenza-like illness rates, dropping to near zero in between-peak periods. Overall, 14.1% (95% credible interval [CrI]: 13.5%, 14.8%) of influenza-like illness consultations were attributed to influenza infection; the estimated PPV was 50% (95% CrI: 48%, 53%) for the peak weeks and 5.8% during the summer periods. The model quantifies the correspondence between influenza-like illness consultations and influenza at a weekly granularity. Even during peak periods, a substantial proportion of influenza-like illness (61%) was not attributed to influenza. The much lower proportion of influenza outside the peak periods reflects the greater circulation of other respiratory pathogens relative to influenza.
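A heavily reduced sketch of the attribution step: per week, estimate the probability that an influenza-like illness consultation is influenza from the virologic sampling alone (a Beta-binomial posterior), then scale the consultation counts. The real analysis is a multi-source Bayesian evidence synthesis; the weekly counts below are invented.

```python
import numpy as np
from scipy import stats

def weekly_influenza_attribution(ili_consults, n_sampled, n_flu_positive):
    """Per week, estimate the probability that an ILI consultation is influenza
    (here a simple Beta-Binomial posterior on the virologic sampling alone, not
    the full multi-source evidence synthesis), then scale the consultation counts."""
    pos = np.asarray(n_flu_positive, float)
    neg = np.asarray(n_sampled, float) - pos
    a, b = 1.0 + pos, 1.0 + neg                        # Beta(1, 1) prior
    ppv_mean = a / (a + b)                             # posterior mean weekly PPV
    ppv_95 = stats.beta.ppf([[0.025], [0.975]], a, b)  # 95% credible bounds per week
    return np.asarray(ili_consults, float) * ppv_mean, ppv_95

# Three illustrative weeks: pre-peak, peak, post-peak.
attributed, bounds = weekly_influenza_attribution([120, 800, 300], [20, 60, 30], [1, 31, 6])
print(attributed.round(1))
print(bounds.round(2))
```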
Oki, Delwyn S.; Rosa, Sarah N.; Yeung, Chiu W.
2010-01-01
This study provides an updated analysis of the magnitude and frequency of peak stream discharges in Hawai`i. Annual peak-discharge data collected by the U.S. Geological Survey during and before water year 2008 (ending September 30, 2008) at stream-gaging stations were analyzed. The existing generalized-skew value for the State of Hawai`i was retained, although three methods were used to evaluate whether an update was needed. Regional regression equations were developed for peak discharges with 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for unregulated streams (those for which peak discharges are not affected to a large extent by upstream reservoirs, dams, diversions, or other structures) in areas with less than 20 percent combined medium- and high-intensity development on Kaua`i, O`ahu, Moloka`i, Maui, and Hawai`i. The generalized-least-squares (GLS) regression equations relate peak stream discharge to quantified basin characteristics (for example, drainage-basin area and mean annual rainfall) that were determined using geographic information system (GIS) methods. Each of the islands of Kaua`i,O`ahu, Moloka`i, Maui, and Hawai`i was divided into two regions, generally corresponding to a wet region and a dry region. Unique peak-discharge regression equations were developed for each region. The regression equations developed for this study have standard errors of prediction ranging from 16 to 620 percent. Standard errors of prediction are greatest for regression equations developed for leeward Moloka`i and southern Hawai`i. In general, estimated 100-year peak discharges from this study are lower than those from previous studies, which may reflect the longer periods of record used in this study. Each regression equation is valid within the range of values of the explanatory variables used to develop the equation. The regression equations were developed using peak-discharge data from streams that are mainly unregulated, and they should not be used to estimate peak discharges in regulated streams. Use of a regression equation beyond its limits will produce peak-discharge estimates with unknown error and should therefore be avoided. Improved estimates of the magnitude and frequency of peak discharges in Hawai`i will require continued operation of existing stream-gaging stations and operation of additional gaging stations for areas such as Moloka`i and Hawai`i, where limited stream-gaging data are available.
Agricultural ammonia emissions in China: reconciling bottom-up and top-down estimates
NASA Astrophysics Data System (ADS)
Zhang, Lin; Chen, Youfan; Zhao, Yuanhong; Henze, Daven K.; Zhu, Liye; Song, Yu; Paulot, Fabien; Liu, Xuejun; Pan, Yuepeng; Lin, Yi; Huang, Binxiang
2018-01-01
Current estimates of agricultural ammonia (NH3) emissions in China differ by more than a factor of 2, hindering our understanding of their environmental consequences. Here we apply both bottom-up statistical and top-down inversion methods to quantify NH3 emissions from agriculture in China for the year 2008. We first assimilate satellite observations of NH3 column concentration from the Tropospheric Emission Spectrometer (TES) using the GEOS-Chem adjoint model to optimize Chinese anthropogenic NH3 emissions at the 1/2° × 2/3° horizontal resolution for March-October 2008. Optimized emissions show a strong summer peak, with emissions about 50 % higher in summer than spring and fall, which is underestimated in current bottom-up NH3 emission estimates. To reconcile the latter with the top-down results, we revisit the processes of agricultural NH3 emissions and develop an improved bottom-up inventory of Chinese NH3 emissions from fertilizer application and livestock waste at the 1/2° × 2/3° resolution. Our bottom-up emission inventory includes more detailed information on crop-specific fertilizer application practices and better accounts for meteorological modulation of NH3 emission factors in China. We find that annual anthropogenic NH3 emissions are 11.7 Tg for 2008, with 5.05 Tg from fertilizer application and 5.31 Tg from livestock waste. The two sources together account for 88 % of total anthropogenic NH3 emissions in China. Our bottom-up emission estimates also show a distinct seasonality peaking in summer, consistent with top-down results from the satellite-based inversion. Further evaluations using surface network measurements show that the model driven by our bottom-up emissions reproduces the observed spatial and seasonal variations of NH3 gas concentrations and ammonium (NH4+) wet deposition fluxes over China well, providing additional credibility to the improvements we have made to our agricultural NH3 emission inventory.
FORTE Compact Intra-cloud Discharge Detection parameterized by Peak Current
NASA Astrophysics Data System (ADS)
Heavner, M. J.; Suszcynsky, D. M.; Jacobson, A. R.; Heavner, B. D.; Smith, D. A.
2002-12-01
The Los Alamos Sferic Array (EDOT) has recorded over 3.7 million lightning-related fast electric field change data records during April 1 - August 31, 2001 and 2002. The events were detected by three or more stations, allowing for differential-time-of-arrival location determination. The waveforms are characterized with estimated peak currents as well as by event type. Narrow Bipolar Events (NBEs), the VLF/LF signature of Compact Intra-cloud Discharges (CIDs), are generally isolated pulses with identifiable ionospheric reflections, permitting determination of event source altitudes. We briefly review the EDOT characterization of events. The FORTE satellite observes Trans-Ionospheric Pulse Pairs (TIPPs, the VHF satellite signature of CIDs). The subset of coincident EDOT and FORTE CID observations are compared with the total EDOT CID database to characterize the VHF detection efficiency of CIDs. The NBE polarity and altitude are also examined in the context of FORTE TIPP detection. The parameter-dependent detection efficiencies are extrapolated from FORTE orbit to GPS orbit in support of the V-GLASS effort (GPS based global detection of lightning).
Ogunjimi, Benson; Willem, Lander; Beutels, Philippe; Hens, Niel
2015-01-01
Varicella-zoster virus (VZV) causes chickenpox and reactivation of latent VZV causes herpes zoster (HZ). VZV reactivation is subject to the opposing mechanisms of declining and boosted VZV-specific cellular mediated immunity (CMI). A reduction in exogenous re-exposure ‘opportunities’ through universal chickenpox vaccination could therefore lead to an increase in HZ incidence. We present the first individual-based model that integrates within-host data on VZV-CMI and between-host transmission data to simulate HZ incidence. This model allows estimating currently unknown pivotal biomedical parameters, including the duration of exogenous boosting at 2 years, with a peak threefold to fourfold increase of VZV-CMI; the VZV weekly reactivation probability at 5% and VZV subclinical reactivation having no effect on VZV-CMI. A 100% effective chickenpox vaccine given to 1 year olds would cause a 1.75 times peak increase in HZ 31 years after implementation. This increase is predicted to occur mainly in younger age groups than is currently assumed. DOI: http://dx.doi.org/10.7554/eLife.07116.001 PMID:26259874
Hodgkins, Glenn A.; Stewart, Gregory J.; Cohn, Timothy A.; Dudley, Robert W.
2007-01-01
Large amounts of rain fell on southern Maine from the afternoon of April 15, 2007, to the afternoon of April 16, 2007, causing substantial damage to houses, roads, and culverts. This report provides an estimate of the peak flows on two rivers in southern Maine--the Mousam River and the Little Ossipee River--because of their severe flooding. The April 2007 estimated peak flow of 9,230 ft3/s at the Mousam River near West Kennebunk had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 25 years to greater than 500 years. The April 2007 estimated peak flow of 8,220 ft3/s at the Little Ossipee River near South Limington had a recurrence interval between 100 and 500 years; 95-percent confidence limits for this flow ranged from 50 years to greater than 500 years.
Gilad, O; Horesh, L; Holder, D S
2007-07-01
For the novel application of recording of resistivity changes related to neuronal depolarization in the brain with electrical impedance tomography, optimal recording is with applied currents below 100 Hz, which might cause neural stimulation of skin or underlying brain. The purpose of this work was to develop a method for application of low frequency currents to the scalp, which delivered the maximum current without significant stimulation of skin or underlying brain. We propose a recessed electrode design which enabled current injection with an acceptable skin sensation to be increased from 100 μA using EEG electrodes, to 1 mA in 16 normal volunteers. The effect of current delivered to the brain was assessed with an anatomically realistic finite element model of the adult head. The modelled peak cerebral current density was 0.3 A/m(2), which was 5- to 25-fold less than the threshold for stimulation of the brain estimated from literature review.
Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation
NASA Astrophysics Data System (ADS)
Zhuang, Wei
Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important for calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation, and Ice Sensor (LVIS). Compared to existing ground peak identification algorithms, FICA was tested in plots of different land cover types and showed improved accuracy in ground detection for vegetated plots and similar accuracy in developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, which was generated by subtracting a Gaussian fitting function from the raw waveform. After the subtraction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporated both waveform intensity, represented by the area under a Gaussian function, and its associated height, the centroid of the Gaussian function. By considering signal reflection of different vegetation layers, the developed metrics obtained better estimation accuracy in aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
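A minimal sketch of the general idea behind multiscale second-derivative ground-peak detection in a large-footprint waveform is given below. It is not the published FICA algorithm: the scales, noise floor, and voting rule are illustrative assumptions, and it simply picks the last peak (latest return, lowest elevation) that persists across scales.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

def ground_peak_bin(waveform, scales=(2, 4, 8), vote_threshold=2, window=2):
    """Index of the assumed ground peak (last peak that persists across scales)."""
    w = np.asarray(waveform, dtype=float)
    med = np.median(w)
    noise_floor = med + 3.0 * np.median(np.abs(w - med))   # crude MAD-based floor (assumption)
    votes = np.zeros(w.size)
    for sigma in scales:
        d2 = gaussian_filter1d(w, sigma, order=2)          # second-derivative filter
        cands = argrelextrema(d2, np.less)[0]              # minima of d2 ~ peaks of w
        for c in cands[w[cands] > noise_floor]:
            lo, hi = max(0, c - window), min(w.size, c + window + 1)
            votes[lo:hi] += 1                               # tolerate small bin shifts across scales
    persistent = np.where(votes >= vote_threshold)[0]
    if persistent.size == 0:
        return int(np.argmax(w))                            # fall back to strongest return
    return int(persistent.max())                            # latest persistent peak = ground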
Association between Infancy BMI Peak and Body Composition and Blood Pressure at Age 5–6 Years
Hof, Michel H. P.; Vrijkotte, Tanja G. M.; de Hoog, Marieke L. A.; van Eijsden, Manon; Zwinderman, Aeilko H.
2013-01-01
Introduction The development of overweight is often measured with the body mass index (BMI). During childhood the BMI curve has two characteristic points: the adiposity rebound at 6 years and the BMI peak at 9 months of age. In this study, the associations between the BMI peak and body composition measures and blood pressure at age 5–6 years were investigated. Methods Measurements from the Amsterdam Born Children and their Development (ABCD) study were available for this study. Blood pressure (systolic and diastolic) and body composition measures (BMI, waist-to-height ratio, fat percentage) were gathered during a health check at about 6 years of age (n = 2822). All children had multiple BMI measurements between 0 and 4 years of age. For boys and girls separately, child-specific BMI peaks were extracted from mixed-effects models. Associations between the estimated BMI peak and the health check measurements were analysed with linear models. In addition, we investigated the potential use of the BMI at 9 months as a surrogate measure for the magnitude of the BMI peak. Results After correction for the confounding effect of fetal growth, both timing and magnitude of the BMI peak were significantly and positively associated (p<0.001) with all body composition measures at the age of 5–6 years. The BMI peak showed no direct association with blood pressure at age 5–6 years; its association was mediated by the current BMI. The correlation between the magnitude of the BMI peak and BMI at 9 months was approximately 0.93, and similar associations with the measures at 5–6 years were found. Conclusion The magnitude of the BMI peak was associated with body composition measures at 5–6 years of age. Moreover, the BMI at 9 months could be used as a surrogate measure for the magnitude of the BMI peak. PMID:24324605
NASA Astrophysics Data System (ADS)
Gu, Xin; Jiang, Bailing; Li, Hongtao; Liu, Cancan; Shao, Lianlian
2018-05-01
Micro-arc oxidation coatings were fabricated on 6061 aluminum alloy in bipolar pulse mode at different negative peak current densities. The phase composition, microstructure, and wear properties were studied using x-ray diffraction, scanning electron microscopy, and a ball-on-disk wear tester, respectively. The results indicate that, with a negative peak current density, oxygen can be expelled through the open discharge channels by hydrogen produced at the anode during the negative pulse width. The x-ray diffraction results and the surface and cross-sectional morphology indicated that the coating formed at a negative peak current density of 75 A dm⁻² was compact, with fewer small-diameter micro-pores and defects. Additionally, the wear tracks and weight loss show that, at an appropriate negative peak current density, the coatings resisted abrasive wear and exhibited excellent wear resistance.
A model-based method for estimating Ca2+ release fluxes from linescan images in Xenopus oocytes.
Baran, Irina; Popescu, Anca
2009-09-01
We propose a model-based method of interpreting linescan images observed in Xenopus oocytes with the use of Oregon Green-1 as a fluorescent dye. We use a detailed modeling formalism based on numerical simulations that incorporate physical barriers for local diffusion, and, by assuming a Gaussian distribution of release durations, we derive the distributions of release Ca²⁺ amounts and currents, fluorescence amplitudes, and puff widths. We analyze a wide set of available data collected from 857 and 281 events observed in the animal and the vegetal hemispheres of the oocyte, respectively. A relatively small fraction of events appear to involve coupling of two or three adjacent clusters of Ca²⁺-releasing channels. In the animal hemisphere, the distribution of release currents with a mean of 1.4 pA presents a maximum at 1.0 pA and a rather long tail extending up to 5 pA. The overall distribution of liberated Ca²⁺ amounts exhibits a dominant peak at 120 fC, a smaller peak at 375 fC, and an average of 166 fC. Ca²⁺ amounts and release fluxes in the vegetal hemisphere appear to be 3.6 and 1.6 times smaller than in the animal hemisphere, respectively. Predicted diameters of elemental release sites are approximately 1.0 μm in the animal and approximately 0.5 μm in the vegetal hemisphere, but the side-to-side separation between adjacent sites appears to be identical (approximately 0.4 μm). By fitting the model to individual puffs we can estimate the quantity of liberated calcium, the release current, the orientation of the scan line, and the dimension of the corresponding release site.
Magnetic MIMO Signal Processing and Optimization for Wireless Power Transfer
NASA Astrophysics Data System (ADS)
Yang, Gang; Moghadam, Mohammad R. Vedady; Zhang, Rui
2017-06-01
In magnetic resonant coupling (MRC) enabled multiple-input multiple-output (MIMO) wireless power transfer (WPT) systems, multiple transmitters (TXs) each with one single coil are used to enhance the efficiency of simultaneous power transfer to multiple single-coil receivers (RXs) by constructively combining their induced magnetic fields at the RXs, a technique termed "magnetic beamforming". In this paper, we study the optimal magnetic beamforming design in a multi-user MIMO MRC-WPT system. We introduce the multi-user power region that constitutes all the achievable power tuples for all RXs, subject to the given total power constraint over all TXs as well as their individual peak voltage and current constraints. We characterize each boundary point of the power region by maximizing the sum-power deliverable to all RXs subject to their minimum harvested power constraints. For the special case without the TX peak voltage and current constraints, we derive the optimal TX current allocation for the single-RX setup in closed-form as well as that for the multi-RX setup. In general, the problem is a non-convex quadratically constrained quadratic programming (QCQP), which is difficult to solve. For the case of one single RX, we show that the semidefinite relaxation (SDR) of the problem is tight. For the general case with multiple RXs, based on SDR we obtain two approximate solutions by applying time-sharing and randomization, respectively. Moreover, for practical implementation of magnetic beamforming, we propose a novel signal processing method to estimate the magnetic MIMO channel due to the mutual inductances between TXs and RXs. Numerical results show that our proposed magnetic channel estimation and adaptive beamforming schemes are practically effective, and can significantly improve the power transfer efficiency and multi-user performance trade-off in MIMO MRC-WPT systems.
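A toy illustration of the single-receiver magnetic beamforming idea follows. It assumes, for simplicity, that the power delivered to the RX scales as (Σ_i M_i I_i)² while the TX power spent is Σ_i R_i I_i²; under that assumption the Cauchy-Schwarz inequality gives an optimal TX current allocation I_i proportional to M_i/R_i. This is not the paper's closed-form solution (which also handles peak voltage/current constraints and multiple receivers); the mutual inductances, resistances, and power budget below are placeholders.

import numpy as np

def single_rx_current_allocation(M, R, total_tx_power):
    """M: mutual inductances TX_i -> RX (H), R: TX coil resistances (ohm)."""
    M, R = np.asarray(M, float), np.asarray(R, float)
    direction = M / R                                        # unnormalized optimal direction
    scale = np.sqrt(total_tx_power / np.sum(R * direction**2))  # enforce sum_i R_i I_i^2 = P
    return scale * direction

I = single_rx_current_allocation(M=[1.0e-6, 2.0e-6, 0.5e-6],  # hypothetical values
                                 R=[0.5, 0.5, 0.5],
                                 total_tx_power=10.0)
print(I)   # TX coil currents (A), largest current on the most strongly coupled coil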
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking. When estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
Lombard, Pamela J.; Hodgkins, Glenn A.
2015-01-01
Regression equations to estimate peak streamflows with 1- to 500-year recurrence intervals (annual exceedance probabilities from 99 to 0.2 percent, respectively) were developed for small, ungaged streams in Maine. Equations presented here are the best available equations for estimating peak flows at ungaged basins in Maine with drainage areas from 0.3 to 12 square miles (mi2). Previously developed equations continue to be the best available equations for estimating peak flows for basin areas greater than 12 mi2. New equations presented here are based on streamflow records at 40 U.S. Geological Survey streamgages with a minimum of 10 years of recorded peak flows between 1963 and 2012. Ordinary least-squares regression techniques were used to determine the best explanatory variables for the regression equations. Traditional map-based explanatory variables were compared to variables requiring field measurements. Two field-based variables—culvert rust lines and bankfull channel widths—either were not commonly found or did not explain enough of the variability in the peak flows to warrant inclusion in the equations. The best explanatory variables were drainage area and percent basin wetlands; values for these variables were determined with a geographic information system. Generalized least-squares regression was used with these two variables to determine the equation coefficients and estimates of accuracy for the final equations.
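A minimal sketch of fitting a peak-flow regression of the general form used in such studies, log10(Qp) = b0 + b1·log10(drainage area) + b2·log10(percent wetlands + 1), is shown below. The explanatory variables follow the abstract, but the data, coefficients, and wetlands transform are invented placeholders, and plain ordinary least squares stands in for the generalized least-squares procedure actually used.

import numpy as np

area_mi2 = np.array([0.5, 1.2, 3.4, 6.0, 11.5])        # drainage area, mi^2 (hypothetical)
wetlands_pct = np.array([2.0, 0.5, 8.0, 4.0, 12.0])    # percent basin wetlands (hypothetical)
q100_cfs = np.array([120., 260., 540., 830., 1200.])   # observed 100-yr peaks, ft^3/s (hypothetical)

X = np.column_stack([np.ones_like(area_mi2),
                     np.log10(area_mi2),
                     np.log10(wetlands_pct + 1.0)])
b, *_ = np.linalg.lstsq(X, np.log10(q100_cfs), rcond=None)

def q100_estimate(area, wetlands):
    """Apply the fitted regression to an ungaged basin."""
    return 10 ** (b[0] + b[1] * np.log10(area) + b[2] * np.log10(wetlands + 1.0))

print(q100_estimate(2.0, 5.0))   # estimated 100-year peak for a hypothetical basin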
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
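The selection rule described in the abstract can be sketched as follows: candidate PPG peak times within each RR interval are converted to RR-normalized PTTs, each candidate is scored with a Gaussian probability density, the highest-scoring candidate is kept, and the Gaussian parameters are then updated so the model adapts to changing cardiac cycles. The exponential-averaging update rule and the initial parameters below are assumptions, not the paper's exact procedure.

import numpy as np
from scipy.stats import norm

def track_ptt(r_times, ppg_peak_times, mu0=0.3, sigma0=0.05, alpha=0.1):
    """Return one PTT (seconds) per RR interval, or NaN where no candidate exists."""
    mu, sigma = mu0, sigma0
    ptts = []
    for r0, r1 in zip(r_times[:-1], r_times[1:]):
        rr = r1 - r0
        cands = [t for t in ppg_peak_times if r0 < t <= r1]   # pulse-peak candidates in this RR interval
        if not cands:
            ptts.append(np.nan)
            continue
        norm_ptt = np.array([(t - r0) / rr for t in cands])    # RR-normalized PTTs
        best = norm_ptt[np.argmax(norm.pdf(norm_ptt, mu, sigma))]
        mu = (1 - alpha) * mu + alpha * best                   # adaptive parameter update (assumed rule)
        sigma = max((1 - alpha) * sigma + alpha * abs(best - mu), 0.01)
        ptts.append(best * rr)                                 # back to seconds
    return ptts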
The capacity credit of grid-connected photovoltaic systems
NASA Astrophysics Data System (ADS)
Alsema, E. A.; van Wijk, A. J. M.; Turkenburg, W. C.
The capacity credit of photovoltaic (PV) power plants integrated into the Netherlands grid was investigated, together with an estimate of the total allowable penetration. An hourly simulation was performed based on meteorological data from five stations, considering tilted surfaces, the current grid load pattern, and the load pattern after PV-power augmentation. The reliability of the grid was assessed in terms of a loss-of-load probability analysis, assuming power drops were limited to 1 GW. A projected tolerance for 2.5 GW of PV power was calculated. Peak demand was found to be highest in winter, when insolation levels are lowest; however, daily insolation peaks coincided with daily peak demand. Combining the PV input with an equal amount of wind turbine power production was found to augment the capacity credit for both at aggregate outputs of 2-4 GW.
Method and apparatus for clockless analog-to-digital conversion and peak detection
DeGeronimo, Gianluigi
2007-03-06
An apparatus and method for analog-to-digital conversion and peak detection includes at least one stage, which includes a first switch, second switch, current source or capacitor, and discriminator. The discriminator changes state in response to a current or charge associated with the input signal exceeding a threshold, thereby indicating whether the current or charge associated with the input signal is greater than the threshold. The input signal includes a peak or a charge, and the converter includes a peak or charge detect mode in which a state of the switch is retained in response to a decrease in the current or charge associated with the input signal. The state of the switch represents at least a portion of a value of the peak or of the charge.
Kolva, J.R.
1985-01-01
A previous study of flood magnitudes and frequencies in Ohio concluded that existing regionalized flood equations may not be adequate for estimating peak flows in small basins that are heavily forested, surface mined, or located in northwestern Ohio. In order to provide a larger data base for improving estimation of flood peaks in these basins, 30 crest-stage gages were installed in 1977, in cooperation with the Ohio Department of Transportation, to provide a 10-year record of flood data. The study area consists of two distinct parts: northwestern Ohio, which contains 8 sites, and southern and eastern Ohio, which contains 22 sites in small forested or surface-mined drainage basins. Basin characteristics were determined for all 30 sites for 1978 conditions. Annual peaks were recorded or estimated for all 30 sites for water years 1978-82; an additional year of peak discharges was available at four sites. The 2-year (Q2) and 5-year (Q5) flood peaks were determined from these annual peaks. Q2 and Q5 values also were calculated using published regionalized regression equations for Ohio. The ratios of the observed to predicted 2-year (R2) and 5-year (R5) values were then calculated. This study found that observed flood peaks are lower than estimated peaks by a significant amount in surface-mined basins. The average R2 value is 0.51 for basins with more than 40 percent surface-mined land, and 0.68 for sites with any surface-mined land. The average R5 value is 0.55 for sites with more than 40 percent surface-mined land, and 0.61 for sites with any surface-mined land. Estimated flood peaks for forested basins agree fairly well with the observed values: R2 values average 0.87 for sites with 20 percent or more forested land but no surface-mined land, and R5 values average 0.96. If all sites with more than 20 percent forested land and some surface-mined land are considered, the R2 values average 0.86, and the R5 values average 0.82.
Estimation of ground motion parameters
Boore, David M.; Joyner, W.B.; Oliver, A.A.; Page, R.A.
1978-01-01
Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. A subset of the data (from the San Fernando earthquake) is used to assess the effects of structural size and of geologic site conditions on peak motions recorded at the base of structures. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. The peak acceleration tends to be less, and the peak velocity and displacement tend to be greater, on the average, at the base of large structures than at the base of small structures. In the distance range used in the regression analysis (15-100 km) the values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Some consideration is given to the prediction of ground motions at close distances where there are insufficient recorded data points. As might be expected from the lack of data, published relations for predicting peak horizontal acceleration give widely divergent estimates at close distances (three well-known relations predict accelerations from 0.33 g to slightly over 1 g at a distance of 5 km from a magnitude 6.5 earthquake). After considering the physics of the faulting process, the few available data close to faults, and the modifying effects of surface topography, at the present time it would be difficult to accept estimates less than about 0.8 g, 110 cm/s, and 40 cm, respectively, for the mean values of peak acceleration, velocity, and displacement at rock sites within 5 km of fault rupture in a magnitude 6.5 earthquake. These estimates can be expected to change as more data become available.
Predicting Peak Flows following Forest Fires
NASA Astrophysics Data System (ADS)
Elliot, William J.; Miller, Mary Ellen; Dobre, Mariana
2016-04-01
Following forest fires, peak flows in perennial and ephemeral streams often increase by a factor of 10 or more. This increase in peak flow rate may overwhelm existing downstream structures, such as road culverts, causing serious damage to road fills at stream crossings. In order to predict peak flow rates following wildfires, we have applied two different tools. One is based on the USDA Natural Resources Conservation Service Curve Number (CN) method, and the other applies the Water Erosion Prediction Project (WEPP) model to the watershed. In our presentation, we will describe the science behind the two methods and present the main variables for each model. We will then provide an example comparing the two methods for a fire-prone watershed upstream of the City of Flagstaff, Arizona, USA, where a fire spread model was applied for current fuel loads and for likely fuel loads following a fuel reduction treatment. When applying the curve number method, determining the time to peak flow can be problematic for low-severity fires because the runoff flow paths are both over the surface and through shallow lateral flow. The WEPP watershed version incorporates shallow lateral flow into stream channels. However, the version of the WEPP model that was used for this study did not have channel routing capabilities, but rather relied on regression relationships to estimate peak flows from individual hillslope polygon peak runoff rates. We found that the two methods gave similar results if applied correctly, with the WEPP predictions somewhat greater than the CN predictions. Later releases of the WEPP model have incorporated alternative methods for routing peak flows that need to be evaluated.
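The runoff-depth step shared by CN-based methods is the standard NRCS relation S = 1000/CN - 10 and Q = (P - 0.2S)² / (P + 0.8S) for P > 0.2S (all in inches). The sketch below reproduces only that step; turning the runoff depth into a peak flow rate additionally requires a time of concentration and a unit-peak-discharge relation, which are omitted here, and the post-fire curve number in the example is hypothetical.

def cn_runoff_depth(rainfall_in, curve_number):
    """NRCS Curve Number runoff depth (inches) for a storm rainfall depth (inches)."""
    s = 1000.0 / curve_number - 10.0       # potential maximum retention, inches
    ia = 0.2 * s                           # initial abstraction
    if rainfall_in <= ia:
        return 0.0
    return (rainfall_in - ia) ** 2 / (rainfall_in + 0.8 * s)

# Example: a hypothetical post-fire curve number of 90 with 2.5 inches of rain
print(cn_runoff_depth(2.5, 90))            # runoff depth in inches (~1.5)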
Auffhammer, Maximilian; Baylis, Patrick; Hausman, Catherine H.
2017-01-01
It has been suggested that climate change impacts on the electric sector will account for the majority of global economic damages by the end of the current century and beyond [Rose S, et al. (2014) Understanding the Social Cost of Carbon: A Technical Assessment]. The empirical literature has shown significant increases in climate-driven impacts on overall consumption, yet has not focused on the cost implications of the increased intensity and frequency of extreme events driving peak demand, which is the highest load observed in a period. We use comprehensive, high-frequency data at the level of load balancing authorities to parameterize the relationship between average or peak electricity demand and temperature for a major economy. Using statistical models, we analyze multiyear data from 166 load balancing authorities in the United States. We couple the estimated temperature response functions for total daily consumption and daily peak load with 18 downscaled global climate models (GCMs) to simulate climate change-driven impacts on both outcomes. We show moderate and heterogeneous changes in consumption, with an average increase of 2.8% by end of century. The results of our peak load simulations, however, suggest significant increases in the intensity and frequency of peak events throughout the United States, assuming today’s technology and electricity market fundamentals. As the electricity grid is built to endure maximum load, our findings have significant implications for the construction of costly peak generating capacity, suggesting additional peak capacity costs of up to 180 billion dollars by the end of the century under business-as-usual. PMID:28167756
Moody, John A.
2016-03-21
Extreme rainfall in September 2013 caused destructive floods in part of the Front Range in Boulder County, Colorado. Erosion from these floods cut roads and isolated mountain communities for several weeks, and large volumes of eroded sediment were deposited downstream, which caused further damage to property and infrastructure. Estimates of peak discharge for these floods and the associated rainfall characteristics will aid land and emergency managers in the future. Several methods (an ensemble) were used to estimate peak discharge at 21 measurement sites, and the ensemble average and standard deviation provided a final estimate of peak discharge and its uncertainty. Because of the substantial erosion and deposition of sediment, an additional estimate of peak discharge was made based on the flow resistance caused by sediment transport effects. Although the synoptic-scale rainfall was extreme for these mountains (recurrence interval greater than 1,000 years; about 450 millimeters in 7 days), the resulting peak discharges were not. Ensemble average peak discharges per unit drainage area (unit peak discharge, [Qu]) for the floods were 1–2 orders of magnitude less than those for the maximum worldwide floods with similar drainage areas and had a wide range of values (0.21–16.2 cubic meters per second per square kilometer [m3 s-1 km-2]). One possible explanation for these differences was that the band of high-accumulation, high-intensity rainfall was narrow (about 50 kilometers wide) and oriented nearly perpendicular to the predominant drainage pattern of the mountains, and therefore entire drainage areas were not subjected to the same range of extreme rainfall. A linear relation (coefficient of determination [R2]=0.69) between Qu and the rainfall intensity (ITc, computed for a time interval equal to the time of concentration for the drainage area upstream from each site) had the form Qu=0.26(ITc-8.6), where the coefficient 0.26 can be considered an area-averaged peak runoff coefficient for the September 2013 rainstorms in Boulder County, and 8.6 millimeters per hour is the rainfall intensity corresponding to a soil moisture threshold that controls the soil infiltration rate. Peak discharge estimates based on the sediment transport effects were generally less than the ensemble average and indicated that sediment transport may be a mechanism that limits velocities in these types of mountain streams, such that the Froude number fluctuates about 1, suggesting that this type of floodflow can be approximated as critical flow.
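A worked example of the reported relation Qu = 0.26(ITc - 8.6), where Qu is unit peak discharge in m³/s per km² and ITc is rainfall intensity in mm/h over the time of concentration, is shown below. The drainage area and rainfall intensity used in the example are hypothetical inputs chosen only for illustration.

def unit_peak_discharge(i_tc_mm_per_hr, runoff_coeff=0.26, threshold_mm_per_hr=8.6):
    """Unit peak discharge (m^3 s^-1 km^-2) from the reported linear relation."""
    return max(0.0, runoff_coeff * (i_tc_mm_per_hr - threshold_mm_per_hr))

area_km2 = 25.0                       # hypothetical drainage area
i_tc = 40.0                           # hypothetical rainfall intensity, mm/h
qu = unit_peak_discharge(i_tc)        # ~8.2 m^3/s per km^2
print(qu * area_km2)                  # peak-discharge estimate, ~204 m^3/s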
Traffic evacuation time under nonhomogeneous conditions.
Fazio, Joseph; Shetkar, Rohan; Mathew, Tom V
2017-06-01
During many manmade and natural crises such as terrorist threats, floods, hazardous chemical and gas leaks, emergency personnel need to estimate the time in which people can evacuate from the affected urban area. Knowing an estimated evacuation time for a given crisis, emergency personnel can plan and prepare accordingly with the understanding that the actual evacuation time will take longer. Given the urban area to be evacuated, street widths exiting the area's perimeter, the area's population density, average vehicle occupancy, transport mode share and crawl speed, an estimation of traffic evacuation time can be derived. Peak-hour traffic data collected at three, midblock, Mumbai sites of varying geometric features and traffic composition were used in calibrating a model that estimates peak-hour traffic flow rates. Model validation revealed a correlation coefficient of +0.98 between observed and predicted peak-hour flow rates. A methodology is developed that estimates traffic evacuation time using the model.
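A back-of-envelope sketch using the same kinds of inputs the abstract lists (area, population density, vehicle occupancy, transport mode share, and exit street widths) is given below. The saturation flow per metre of exit width and all numbers are placeholder assumptions; the paper instead uses a peak-hour flow-rate model calibrated on Mumbai field data, which is not reproduced here.

def evacuation_time_hours(area_km2, pop_density_per_km2, veh_occupancy,
                          private_vehicle_share, exit_widths_m,
                          flow_per_m_width_veh_per_hr=500.0):
    """Rough lower-bound evacuation time: vehicles to move divided by exit capacity."""
    population = area_km2 * pop_density_per_km2
    vehicles = population * private_vehicle_share / veh_occupancy
    exit_capacity = flow_per_m_width_veh_per_hr * sum(exit_widths_m)   # vehicles per hour
    return vehicles / exit_capacity

t = evacuation_time_hours(area_km2=4.0, pop_density_per_km2=25000,
                          veh_occupancy=2.5, private_vehicle_share=0.6,
                          exit_widths_m=[7.0, 7.0, 10.5, 14.0])
print(round(t, 1))   # hours; an idealized estimate, actual evacuations take longer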
Shifting Gravel and the Acoustic Detection Range of Killer Whale Calls
NASA Astrophysics Data System (ADS)
Bassett, C.; Thomson, J. M.; Polagye, B. L.; Wood, J.
2012-12-01
In environments suitable for tidal energy development, strong currents result in large bed stresses that mobilize sediments, producing sediment-generated noise. Sediment-generated noise caused by mobilization events can exceed noise levels attributed to other ambient noise sources at frequencies related to the diameters of the mobilized grains. At a site in Admiralty Inlet, Puget Sound, Washington, one year of ambient noise data (0.02 - 30 kHz) and current velocity data are combined. Peak currents at the site exceed 3.5 m/s. During slack currents, vessel traffic is the dominant noise source. When currents exceed 0.85 m/s noise level increases between 2 kHz and 30 kHz are correlated with near-bed currents and bed stress estimates. Acoustic spectrum levels during strong currents exceed quiescent slack tide conditions by 20 dB or more between 2 and 30 kHz. These frequencies are consistent with sound generated by the mobilization of gravel and pebbles. To investigate the implications of sediment-generated noise for post-installation passive acoustic monitoring of a planned tidal energy project, ambient noise conditions during slack currents and strong currents are combined with the characteristics of Southern Resident killer whale (Orcinus orca) vocalizations and sound propagation modeling. The reduction in detection range is estimated for common vocalizations under different ambient noise conditions. The importance of sediment-generated noise for passive acoustic monitoring at tidal energy sites for different marine mammal functional hearing groups and other sediment compositions are considered.
Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar
2015-06-01
Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
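For reference, the non-compartmental summary measures named above can be computed directly from a concentration-time profile: AUC by the linear trapezoidal rule, Cmax as the highest observed concentration, and time above a threshold (TAT) by interpolation between samples. The sketch below shows only that non-compartmental step with invented data; the model-based (nonlinear mixed effects) alternative discussed in the abstract is not shown.

import numpy as np
from scipy.integrate import trapezoid

def nca_summary(t, c, threshold):
    """Return (AUC, Cmax, time above threshold) from sampled concentrations."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    auc = trapezoid(c, t)                         # linear trapezoidal AUC
    cmax = float(c.max())
    tf = np.linspace(t[0], t[-1], 10000)          # fine grid for time above threshold
    tat = trapezoid((np.interp(tf, t, c) > threshold).astype(float), tf)
    return auc, cmax, tat

print(nca_summary(t=[0, 0.5, 1, 2, 4, 8, 24],     # hours (hypothetical sampling times)
                  c=[0, 3.2, 5.1, 4.0, 2.2, 0.9, 0.1],   # concentrations (hypothetical)
                  threshold=1.0))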
Estimating the magnitude of peak flows at selected recurrence intervals for streams in Idaho
Berenbrock, Charles
2002-01-01
The region-of-influence method is not recommended for use in determining flood-frequency estimates for ungaged sites in Idaho because the results, overall, are less accurate and the calculations are more complex than those of regional regression equations. The regional regression equations were considered to be the primary method of estimating the magnitude and frequency of peak flows for ungaged sites in Idaho.
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges, culverts, and open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value is 62, and median value is 59) for the 100-year flood. The 1996 investigation standard error of prediction for the flood regions ranged from 41 to 96 percent (mean value is 67, and median value is 68) for the 100-year flood that was analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site by using a drainage-area ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
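The drainage-area-ratio transfer described above scales the peak discharge for a given recurrence interval at a gaging station to a nearby ungaged site on the same stream, using the area ratio raised to the drainage-area exponent from the regional regression equation. The numbers in the example below are hypothetical.

def transfer_peak(q_gaged, area_gaged, area_ungaged, area_exponent):
    """Drainage-area-ratio adjustment: transfer a T-year peak to an ungaged site."""
    return q_gaged * (area_ungaged / area_gaged) ** area_exponent

# e.g., a 100-year peak of 450 m^3/s at a 300 km^2 gage, an ungaged site at 180 km^2,
# and a regional drainage-area exponent of 0.78 (all hypothetical)
print(transfer_peak(450.0, 300.0, 180.0, 0.78))   # ~302 m^3/s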
Real-Time PCR Quantification Using A Variable Reaction Efficiency Model
Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.
2008-01-01
Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
Curran, Janet H.; Meyer, David F.; Tasker, Gary D.
2003-01-01
Estimates of the magnitude and frequency of peak streamflow are needed across Alaska for floodplain management, cost-effective design of floodway structures such as bridges and culverts, and other water-resource management issues. Peak-streamflow magnitudes for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were computed for 301 streamflow-gaging and partial-record stations in Alaska and 60 stations in conterminous basins of Canada. Flows were analyzed from data through the 1999 water year using a log-Pearson Type III analysis. The State was divided into seven hydrologically distinct streamflow analysis regions for this analysis, in conjunction with a concurrent study of low and high flows. New generalized skew coefficients were developed for each region using station skew coefficients for stations with at least 25 years of systematic peak-streamflow data. Equations for estimating peak streamflows at ungaged locations were developed for Alaska and conterminous basins in Canada using a generalized least-squares regression model. A set of predictive equations for estimating the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak streamflows was developed for each streamflow analysis region from peak-streamflow magnitudes and physical and climatic basin characteristics. These equations may be used for unregulated streams without flow diversions, dams, periodically releasing glacial impoundments, or other streamflow conditions not correlated to basin characteristics. Basin characteristics should be obtained using methods similar to those used in this report to preserve the statistical integrity of the equations.
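A simplified sketch of the log-Pearson Type III analysis named above is given below: compute the mean, standard deviation, and skew of the log-transformed annual peaks, then obtain the T-year quantile from the standardized Pearson Type III distribution. The Bulletin 17B refinements used in practice (generalized skew weighting, low-outlier screening, historical-peak adjustments) are omitted, and the annual peaks are invented.

import numpy as np
from scipy import stats

peaks = np.array([212, 340, 155, 480, 610, 290, 198, 720, 355, 410,
                  265, 530, 380, 240, 460], dtype=float)   # annual peaks, m^3/s (hypothetical)

logq = np.log10(peaks)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)                        # station skew of the logs

def peak_quantile(return_period_years):
    """T-year peak from the fitted log-Pearson Type III distribution."""
    p = 1.0 - 1.0 / return_period_years
    k = stats.pearson3.ppf(p, skew)                         # standardized frequency factor
    return 10 ** (mean + k * std)

print(peak_quantile(100))                                   # estimated 100-year peak flow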
NASA Astrophysics Data System (ADS)
Ludeno, Giovanni; Soldovieri, Francesco; Serafino, Francesco; Lugni, Claudio; Fucile, Fabio; Bulian, Gabriele
2016-04-01
X-band radar systems are able to provide information about the direction and intensity of sea surface currents and dominant waves within a range of a few kilometers from the observation point (up to 3 nautical miles). This capability, together with their flexibility and low cost, makes these devices useful tools for monitoring both coastal and offshore areas. The data collected from a wave radar system can be analyzed using the inversion strategy presented in [1,2] to estimate the following sea parameters: peak wave direction; peak period; peak wavelength; significant wave height; sea surface current; and bathymetry. Estimation of the significant wave height is a limitation of wave radar systems because the radar backscatter is not directly related to the sea surface elevation. In recent years, substantial research has been carried out to estimate significant wave height from radar images, either with or without calibration against in-situ measurements. In this work, we present two alternative approaches for the reconstruction of the sea surface elevation from wave radar images. The first approach is based on an approximated version of the modulation transfer function (MTF), tuned from a series of numerical simulations, following the line of [3]. The second approach is based on the inversion of radar images using a direct regularised least squares technique. Assuming a linearised model for the tilt modulation, the sea surface elevation is reconstructed as a least squares fit to the radar imaging data [4]. References [1] F. Serafino, C. Lugni, and F. Soldovieri, "A novel strategy for the surface current determination from marine X-band radar data," IEEE Geosci. Remote Sens. Lett., vol. 7, no. 2, pp. 231-235, Apr. 2010. [2] Ludeno, G., Brandini, C., Lugni, C., Arturi, D., Natale, A., Soldovieri, F., Serafino, F. (2014). Remocean System for the Detection of the Reflected Waves from the Costa Concordia Ship Wreck. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(7). [3] Nieto Borge, J., Rodriguez, G.R., Hessner, K., González, P.I. (2004). Inversion of Marine Radar Images for Surface Wave Analysis. J. Atmos. Oceanic Technol. 21, 1291-1300. [4] Fucile, F., Ludeno, G., Serafino, F., Bulian, G., Soldovieri, F., Lugni, C. "Some challenges in recovering wave features from a wave radar system." Paper submitted to the International Ocean and Polar Engineering Conference, ISOPE, Rhodes 2016.
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
Hao, Jie; Astle, William; De Iorio, Maria; Ebbels, Timothy M D
2012-08-01
Nuclear Magnetic Resonance (NMR) spectra are widely used in metabolomics to obtain metabolite profiles in complex biological mixtures. Common methods used to assign and estimate concentrations of metabolites involve either expert manual peak fitting or extra pre-processing steps, such as peak alignment and binning. Peak fitting is very time consuming and is subject to human error. Conversely, alignment and binning can introduce artefacts and limit immediate biological interpretation of models. We present the Bayesian automated metabolite analyser for NMR spectra (BATMAN), an R package that deconvolutes peaks from one-dimensional NMR spectra, automatically assigns them to specific metabolites from a target list and obtains concentration estimates. The Bayesian model incorporates information on characteristic peak patterns of metabolites and is able to account for shifts in the position of peaks commonly seen in NMR spectra of biological samples. It applies a Markov chain Monte Carlo algorithm to sample from a joint posterior distribution of the model parameters and obtains concentration estimates with reduced error compared with conventional numerical integration and comparable to manual deconvolution by experienced spectroscopists. http://www1.imperial.ac.uk/medicine/people/t.ebbels/ t.ebbels@imperial.ac.uk.
Lopatka, Martin; Barcaru, Andrei; Sjerps, Marjan J; Vivó-Truyols, Gabriel
2016-01-29
Accurate analysis of chromatographic data often requires the removal of baseline drift. A frequently employed strategy strives to determine asymmetric weights in order to fit a baseline model by regression. Unfortunately, chromatograms characterized by a very high peak saturation pose a significant challenge to such algorithms. In addition, a low signal-to-noise ratio (i.e. s/n<40) also adversely affects accurate baseline correction by asymmetrically weighted regression. We present a baseline estimation method that leverages a probabilistic peak detection algorithm. A posterior probability of being affected by a peak is computed for each point in the chromatogram, leading to a set of weights that allow non-iterative calculation of a baseline estimate. For extremely saturated chromatograms, the peak weighted (PW) method demonstrates notable improvement compared to the other methods examined. However, in chromatograms characterized by low-noise and well-resolved peaks, the asymmetric least squares (ALS) and the more sophisticated Mixture Model (MM) approaches achieve superior results in significantly less time. We evaluate the performance of these three baseline correction methods over a range of chromatographic conditions to demonstrate the cases in which each method is most appropriate. Copyright © 2016 Elsevier B.V. All rights reserved.
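A hedged sketch of the peak-weighted idea follows: given per-point posterior peak probabilities from some peak detector, set weights w = 1 - P(peak) and solve a single weighted penalized least-squares (Whittaker-type) smooth for the baseline, (W + λD'D)z = Wy. The probabilistic peak detector itself and the paper's exact weighting scheme are not reproduced; the smoothing penalty λ is an assumed tuning parameter.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def peak_weighted_baseline(y, peak_prob, lam=1e6):
    """Non-iterative baseline estimate from a signal y and posterior peak probabilities."""
    y = np.asarray(y, float)
    n = y.size
    w = 1.0 - np.clip(np.asarray(peak_prob, float), 0.0, 1.0)   # down-weight likely peak points
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # second-difference operator
    W = sparse.diags(w)
    A = W + lam * (D.T @ D)
    return spsolve(A.tocsc(), w * y)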
van Schie, Carine H M; Slim, Frederik J; Keukenkamp, Renske; Faber, William R; Nollet, Frans
2013-03-01
Not only plantar pressure but also weight-bearing activity affects accumulated mechanical stress to the foot and may be related to foot ulceration. To date, activity has not been accounted for in leprosy. The purpose was to compare barefoot pressure, in-shoe pressure and daily cumulative stress between persons affected by leprosy with and without previous or current foot ulceration. Nine persons with current plantar ulceration were compared to 15 with previous and 15 without previous ulceration. Barefoot peak pressure (EMED-X), in-shoe peak pressure (Pedar-X) and daily cumulative stress (in-shoe forefoot pressure-time integral × mean daily strides (Stepwatch™ Activity Monitor)) were measured. Barefoot peak pressure was increased in persons with current and previous compared to no previous foot ulceration (mean±SD=888±222 and 763±335 vs 465±262 kPa, p<0.05). In-shoe peak pressure was only increased in persons with current compared to without previous ulceration (mean±SD=412±145 vs 269±70 kPa, p<0.05). Daily cumulative stress was not different between groups, although persons with current and previous foot ulceration were less active. Although barefoot peak pressure was increased in people with current and previous plantar ulceration, it did not discriminate between these groups. While in-shoe peak pressure was increased in persons with current ulceration, they were less active, resulting in no difference in daily cumulative stress. Increased in-shoe peak pressure suggests insufficient pressure-reducing footwear in persons with current ulceration, highlighting the importance of pressure-reducing qualities of footwear. Copyright © 2012 Elsevier B.V. All rights reserved.
Data preprocessing method for liquid chromatography-mass spectrometry based metabolomics.
Wei, Xiaoli; Shi, Xue; Kim, Seongho; Zhang, Li; Patrick, Jeffrey S; Binkley, Joe; McClain, Craig; Zhang, Xiang
2012-09-18
A set of data preprocessing algorithms for peak detection and peak list alignment are reported for analysis of liquid chromatography-mass spectrometry (LC-MS)-based metabolomics data. For spectrum deconvolution, peak picking is achieved at the selected ion chromatogram (XIC) level. To estimate and remove the noise in XICs, each XIC is first segmented into several peak groups based on the continuity of scan number, and the noise level is estimated by all the XIC signals, except the regions potentially with presence of metabolite ion peaks. After removing noise, the peaks of molecular ions are detected using both the first and the second derivatives, followed by an efficient exponentially modified Gaussian-based peak deconvolution method for peak fitting. A two-stage alignment algorithm is also developed, where the retention times of all peaks are first transferred into the z-score domain and the peaks are aligned based on the measure of their mixture scores after retention time correction using a partial linear regression. Analysis of a set of spike-in LC-MS data from three groups of samples containing 16 metabolite standards mixed with metabolite extract from mouse livers demonstrates that the developed data preprocessing method performs better than two of the existing popular data analysis packages, MZmine2.6 and XCMS(2), for peak picking, peak list alignment, and quantification.
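A hedged sketch of the derivative-based peak-picking step described above is shown below: after light smoothing, report points where the first derivative crosses zero from positive to negative and the second derivative is negative. The XIC segmentation, noise estimation, exponentially modified Gaussian deconvolution, and two-stage alignment of the full pipeline are not reproduced, and the smoothing width is an assumed parameter.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def pick_xic_peaks(intensity, sigma=2.0, min_intensity=0.0):
    """Indices of candidate peak apexes in an extracted ion chromatogram (XIC)."""
    s = gaussian_filter1d(np.asarray(intensity, float), sigma)   # light smoothing
    d1 = np.gradient(s)
    d2 = np.gradient(d1)
    idx = np.where((d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0))[0]   # + to - zero crossing
    return idx[s[idx] >= min_intensity]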
Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C
2011-12-01
Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustments for covariates. In a Monte Carlo simulation study, three estimators (one based on the geometrical model and two based on log-linear Poisson regression models) were evaluated with respect to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
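A minimal sketch of the Poisson-regression approach to seasonal variation follows: monthly counts are regressed on first-order sine and cosine terms, and the estimated peak-to-trough ratio is exp(2·sqrt(b_cos² + b_sin²)), since log μ = b0 + A·cos(θ - φ) with A = sqrt(b_cos² + b_sin²). The monthly counts below are invented, and the secular trend and covariate adjustments discussed in the abstract are omitted.

import numpy as np
import statsmodels.api as sm

counts = np.array([30, 28, 35, 40, 52, 61, 66, 63, 50, 41, 33, 29])   # Jan..Dec (hypothetical)
month = np.arange(12)
X = np.column_stack([np.ones(12),
                     np.cos(2 * np.pi * month / 12),
                     np.sin(2 * np.pi * month / 12)])

fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
b_cos, b_sin = fit.params[1], fit.params[2]
peak_to_trough = np.exp(2 * np.hypot(b_cos, b_sin))
print(round(peak_to_trough, 2))   # estimated peak-to-trough ratio of the seasonal pattern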
Thermoluminescence of quartz collected from Nojima Fault Trench excavated in 2015
NASA Astrophysics Data System (ADS)
Hasebe, N.; Miura, K.; Ganzawa, Y.; Tagami, T.; Lin, A.
2017-12-01
The Southern Hyogo Prefecture earthquake of 1995, known as the Kobe Earthquake or Great Hanshin-Awaji Earthquake, was caused by activity of the Nojima fault. A research project on the Nojima fault is currently under way, and a new trench was excavated in 2015. We investigate the effect of fault activity on surrounding rocks by the thermoluminescence (TL) dating method. First, quartz was extracted from samples collected from the trench wall at different distances from the fault. A block of nearby basement rock was also collected and analyzed. Next, the luminescence sites and their emission temperatures were determined by the T-Tmax method (McKeever, 1980), performed at 10 °C intervals for selected samples (the basement rock collected from the Rokko granite, a granite sample collected about 5 m from the fault in the trench, and the gouge sample adjacent to the fault). As a result, the peak emission temperatures were 200-220 °C, 270 °C, and 320-350 °C for the granite quartz. These values were concordant for UV-TL and blue TL. The activation energies and frequency factors were determined for signals emitted at different temperatures by the peak shift method (Aitken, 1985). In contrast, the TL emission curves for the sample adjacent to the fault do not show discrete luminescence sites, unlike the granite samples, and the natural TL emission shows a variety of TL profiles. The accumulated doses of each sample were estimated for the identified signal peaks after peak separation. Signals from different peak temperatures show different dose values in all the samples, and the dose estimated from the 200 °C signal showed the minimum value for all samples. The same sample shows different accumulated doses for blue TL and UV-TL. The variety of accumulated doses within a sample may reflect the complex thermal history of the samples and/or may be partly caused by ineffective peak separation. Even the host rock collected away from the fault shows a low accumulated dose in the 200 °C signal, far less than the expected saturated value. Further investigation is important to fully understand the meaning of the obtained data.
Mahmood, Iftekhar
2004-01-01
The objective of this study was to evaluate the performance of Wagner-Nelson, Loo-Reigelman, and statistical moments methods in determining the absorption rate constant(s) in the presence of a secondary peak. These methods were also evaluated when there were two absorption rates without a secondary peak. Different sets of plasma concentration versus time data for a hypothetical drug following one or two compartment models were generated by simulation. The true ka was compared with the ka estimated by Wagner-Nelson, Loo-Riegelman and statistical moments methods. The results of this study indicate that Wagner-Nelson, Loo-Riegelman and statistical moments methods may not be used for the estimation of absorption rate constants in the presence of a secondary peak or when absorption takes place with two absorption rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abd El-Kader, F.H.; Ibrahim, S.S.; Attia, G.
1993-11-15
The influence of neutron irradiation on ultraviolet/visible absorption and thermally stimulated depolarization current in nickel chloride-poly(vinyl alcohol) (PVA) cast films has been investigated. The spectral measurements indicate the responsibility of the Ni²⁺ ion in its octahedral symmetry. Dopant concentrations higher than 10 wt % NiCl₂ are found to make the samples more resistant to the degradation effect caused by neutron irradiation. The thermally stimulated depolarization currents (TSDC) of pure PVA revealed the existence of the glass transition (Tg) and space-charge relaxation peaks, whereas doped PVA samples show a new sub-Tg relaxation peak. A proposed mechanism is introduced to account for the neutron effects on both the glass transition and space-charge relaxation peaks. The peak positions, peak currents, and stored charges of the sub-Tg relaxation peak are strongly affected by both the concentration of the dopant and the neutron exposure doses.
NASA Technical Reports Server (NTRS)
Uhlhorn, Eric; Atlas, Robert; Black, Peter; Buckley, Courtney; Chen, Shuyi; El-Nimri, Salem; Hood, Robbie; Johnson, James; Jones, Linwood; Miller, Timothy;
2009-01-01
The Hurricane Imaging Radiometer (HIRAD) is a new airborne microwave remote sensor currently under development to enhance real-time hurricane ocean surface wind observations. HIRAD builds on the capabilities of the Stepped Frequency Microwave Radiometer (SFMR), which now operates on NOAA P-3, G-4, and AFRC C-130 aircraft. Unlike the SFMR, which measures wind speed and rain rate along the ground track directly beneath the aircraft, HIRAD will provide images of the surface wind and rain field over a wide swath (approximately 3 times the aircraft altitude). To demonstrate potential improvement in the measurement of peak hurricane winds, we present a set of Observing System Simulation Experiments (OSSEs) in which measurements from the new instrument, as well as those from existing platforms (air, surface, and space-based), are simulated from the output of a high-resolution (approximately 1.7 km) numerical model. Simulated retrieval errors due to both instrument noise and model function accuracy are considered over the expected range of incidence angles, wind speeds, and rain rates. Based on numerous simulated flight patterns and data source combinations, statistics are developed to describe relationships between the observed and true (from the model's perspective) peak wind speed. These results have implications for improving the estimation of hurricane intensity (as defined by the peak sustained wind anywhere in the storm), which may often go unobserved due to sampling limitations.
Techniques for estimating flood hydrographs for ungaged urban watersheds
Stricker, V.A.; Sauer, V.B.
1984-01-01
The Clark method, modified slightly, was used to develop a synthetic, dimensionless hydrograph that can be used to estimate flood hydrographs for ungaged urban watersheds. Application of the technique results in a typical (average) flood hydrograph for a given peak discharge. The inputs needed to apply the technique are an estimate of basin lagtime and the recurrence-interval peak discharge. Equations for this purpose were obtained from a recent nationwide study of flood frequency in urban watersheds. A regression equation was developed that relates flood volume to drainage-area size, basin lagtime, and peak discharge. This equation is useful where storage of floodwater may be a part of flood-prevention design. (USGS)
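The following sketch (not from the USGS report) shows mechanically how a dimensionless hydrograph of this kind is scaled by an estimated basin lagtime and peak discharge and then integrated for flood volume; the ordinates, lagtime, and peak discharge are placeholder values.

import numpy as np

# Scale a dimensionless hydrograph (t/lagtime, Q/Qpeak) to a design event and
# integrate it for flood volume. Ordinates and inputs are placeholders, not
# the values published in the report.
t_ratio = np.array([0.0, 0.3, 0.6, 0.8, 1.0, 1.3, 1.7, 2.2, 3.0])
q_ratio = np.array([0.0, 0.10, 0.40, 0.80, 1.00, 0.75, 0.45, 0.20, 0.0])

lagtime_hr = 2.4            # basin lagtime from a regional lagtime equation (assumed)
q_peak_cfs = 3500.0         # e.g. a 25-year peak discharge from a regional equation (assumed)

t_hr = t_ratio * lagtime_hr
q_cfs = q_ratio * q_peak_cfs

# Trapezoidal integration of the scaled hydrograph, converted to acre-feet.
volume_ft3 = np.sum(np.diff(t_hr) * (q_cfs[1:] + q_cfs[:-1]) / 2.0) * 3600.0
print(f"flood volume ~ {volume_ft3 / 43560.0:.0f} acre-feet")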
Contribution For Arc Temperature Affected By Current Increment Ratio At Peak Current In Pulsed Arc
NASA Astrophysics Data System (ADS)
Kano, Ryota; Mitubori, Hironori; Iwao, Toru
2015-11-01
Tungsten inert gas (TIG) welding is a high-quality welding process. However, the parameters of pulsed arc welding are numerous and complicated, and if they are not chosen appropriately the weld pool becomes wide and shallow. The convective driving forces contribute to the weld pool shape. When the current waveform changes rapidly, as in high-frequency pulsed TIG welding, the arc temperature does not follow the change of the current; in particular, calculations indicate that the arc temperature at the time the peak current is reached depends on these effects, so accurate measurement of the temperature at that time is required. The objective of this research is therefore to elucidate the contribution to arc temperature of the current increment ratio at peak current in a pulsed arc, in order to obtain more detailed knowledge of the welding model for pulsed arcs. The temperature during the increase from the base current to the peak current was measured by spectroscopy. As a result, when the arc current increased from 100 A to 150 A at 120 ms, no transient response of the temperature occurred during the current rise; this was verified by measurement during the rise. The contribution of the current increment ratio at peak current to the arc temperature in a pulsed arc was thus elucidated, providing further knowledge of the welding model for pulsed arcs.
Pulse charging of lead-acid traction cells
NASA Technical Reports Server (NTRS)
Smithrick, J. J.
1980-01-01
Pulse charging, as a method of rapidly and efficiently charging 300 amp-hour lead-acid traction cells for an electric vehicle application, was investigated. A wide range of charge-pulse square-wave current waveforms was investigated, and the results were compared to constant-current charging at the time-averaged pulse current values. Representative pulse current waveforms were: (1) positive waveform - peak charge pulse current of 300 amperes (amps), discharge pulse current of zero amps, and a duty cycle of about 50%; (2) Romanov waveform - peak charge pulse current of 300 amps, peak discharge pulse current of 15 amps, and a duty cycle of 50%; and (3) McCulloch waveform - peak charge pulse current of 193 amps, peak discharge pulse current of about 575 amps, and a duty cycle of 94%. Experimental results indicate that, on the basis of amp-hour efficiency, pulse charging offered no significant advantage as a method of rapidly charging 300 amp-hour lead-acid traction cells when compared to constant-current charging at the time-averaged pulse current value. There were, however, some disadvantages of pulse charging, in particular a decrease in charge amp-hour and energy efficiencies and an increase in cell electrolyte temperature. The constant-current charge method resulted in the best energy efficiency with no significant sacrifice of charge time or amp-hour output. Whether or not pulse charging offers an advantage over constant-current charging with regard to cell charge/discharge cycle life is unknown at this time.
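As a simple arithmetic companion to the abstract (with invented numbers, not the test data), the sketch below computes the time-averaged current of a square-wave charge/discharge pulse and the amp-hour and energy efficiencies used for such comparisons.

# Time-averaged current of a square-wave pulse waveform and the amp-hour and
# energy efficiencies of a charge/discharge cycle. All numbers are illustrative.
peak_charge_a, discharge_a, duty = 300.0, 15.0, 0.5      # Romanov-style waveform
avg_pulse_current = peak_charge_a * duty - discharge_a * (1.0 - duty)

ah_in, ah_out = 330.0, 300.0     # amp-hours into and out of the cell per cycle (assumed)
wh_in, wh_out = 790.0, 570.0     # watt-hours per cycle (assumed)
print(f"time-averaged pulse current {avg_pulse_current:.0f} A, "
      f"amp-hour efficiency {100.0 * ah_out / ah_in:.0f}%, "
      f"energy efficiency {100.0 * wh_out / wh_in:.0f}%")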
Computational Fluid Dynamics simulations of the Late Pleistocene Lake Bonneville Flood
NASA Astrophysics Data System (ADS)
Abril-Hernández, José M.; Periáñez, Raúl; O'Connor, Jim E.; Garcia-Castellanos, Daniel
2018-06-01
At approximately 18.0 ka, pluvial Lake Bonneville reached its maximum level. At its northeastern extent it was impounded by alluvium of the Marsh Creek Fan, which breached at some point north of Red Rock Pass (Idaho), leading to one of the largest floods on Earth. About 5320 km3 of water was discharged into the Snake River drainage and ultimately into the Columbia River. We use a 0D model and a 2D non-linear depth-averaged hydrodynamic model to aid understanding of outflow dynamics, specifically evaluating controls on the amount of water exiting the Lake Bonneville basin exerted by the Red Rock Pass outlet lithology and geometry as well as those imposed by the internal lake geometry of the Bonneville basin. These models are based on field evidence of prominent lake levels, hypsometry and terrain elevations corrected for post-flood isostatic deformation of the lake basin, as well as reconstructions of the topography at the outlet for both the initial and final stages of the flood. Internal flow dynamics in the northern Lake Bonneville basin during the flood were affected by the narrow passages separating the Cache Valley from the main body of Lake Bonneville. This constriction imposed a water-level drop of up to 2.7 m at the time of peak-flow conditions and likely reduced the peak discharge at the lake outlet by about 6%. The modeled peak outlet flow is 0.85 × 10^6 m3 s-1. Energy balance calculations give an estimate for the erodibility coefficient for the alluvial Marsh Creek divide of about 0.005 m y-1 Pa-1.5, at least two orders of magnitude greater than for the underlying bedrock at the outlet. Computing quasi steady-state water flows, water elevations, water currents and shear stresses as a function of the water-level drop in the lake and for the sequential stages of erosion in the outlet gives estimates of the incision rates and an estimate of the outflow hydrograph during the Bonneville Flood: About 18 days would have been required for the outflow to grow from 10% to 100% of its peak value. At the time of peak flow, about 10% of the lake volume would have already exited, eroding about 1 km3 of alluvium from the outlet, and the lake level would have dropped by about 10.6 m.
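A minimal 0D sketch of the kind of lumped outflow model described above is given below: a broad-crested-weir outflow law coupled to a shear-stress power-law incision rate, with incision stopping at an assumed bedrock elevation. Every coefficient and elevation here is an illustrative assumption, not one of the study's calibrated values.

# Minimal 0D sketch of a lake draining through an eroding alluvial outlet:
# a broad-crested-weir outflow law plus a shear-stress power-law incision rate.
# All coefficients and elevations are illustrative assumptions.
dt = 3600.0                      # time step (s)
lake_area = 2.0e10               # lake surface area near spill level (m^2), assumed constant
width = 400.0                    # outlet channel width (m), assumed
slope = 0.002                    # energy slope through the outlet reach, assumed
k_erosion = 1.0e-8               # erodibility (m s^-1 Pa^-1.5), assumed
lake_level, sill, bedrock = 1552.0, 1551.0, 1440.0   # elevations (m), assumed

q_peak = 0.0
for _ in range(24 * 200):        # simulate 200 days
    head = max(lake_level - sill, 0.0)
    q = 1.7 * width * head ** 1.5                        # broad-crested weir discharge (m^3/s)
    tau = 1000.0 * 9.81 * (2.0 / 3.0) * head * slope     # approximate bed shear stress (Pa)
    sill = max(sill - k_erosion * tau ** 1.5 * dt, bedrock)   # incision stops at bedrock
    lake_level -= q * dt / lake_area                     # lake drawdown
    q_peak = max(q_peak, q)
print(f"peak outflow ~ {q_peak:.2e} m^3/s, total lake-level drop {1552.0 - lake_level:.1f} m")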
NASA Astrophysics Data System (ADS)
Tomioka, N.; Tani, R.; Kayama, M.; Chang, Y.; Nishido, H.; Kaushik, D.; Rae, A.; Ferrière, L.; Gulick, S. P. S.; Morgan, J. V.
2017-12-01
The Chicxulub impact structure, located in the northern Yucatan Peninsula, Mexico, was drilled by the joint IODP-ICDP Expedition 364 in April-May 2016. This expedition was the first attempt to obtain materials from the topographic peak ring within the crater previously identified by seismic imaging. A continuous core was successfully recovered from the peak ring at depths between 505.7 and 1334.7 mbsf. Uplifted, fractured, and shocked granitic basement rocks forming the peak ring were found below the impact breccia and impact melt rock unit (747.0-1334.7 mbsf; Morgan et al. 2016). In order to constrain impact crater formation, we investigated the shock pressure distribution in the peak-ring basement rocks. Thin sections of the granitic rocks were prepared at intervals of 60 m. All the samples contain shocked minerals, with quartz grains frequently showing planar deformation features (PDFs). We determined shock pressures based on cathodoluminescence (CL) spectroscopy of quartz. The strong advantage of the CL method is its applicability to shock pressure estimation for individual grains of both quartz and diaplectic SiO2 glass with high spatial resolution (about 1 μm) (Chang et al. 2016). CL spectra of quartz show a blue emission band caused by shock-induced defect centers, whose intensity increases with shock pressure. A total of 108 quartz grains in ten thin sections were analyzed using a scanning electron microscope with an attached CL spectrometer (an acceleration voltage of 15 kV and a beam current of 2 nA were used). Natural quartz single crystals, experimentally shocked at 0-30 GPa, were used for pressure calibration. The CL spectra of all the quartz grains in the basement rocks showed a broad blue emission band in the wavelength range of 300-500 nm, and the estimated shock pressures were in the range of 15-20 GPa. The result is consistent with values obtained from PDF analysis in quartz using the universal stage (Ferrière et al. 2017; Rae et al. 2017). Although the shock pressure gradient in the drilled section is small, the pressure increases slightly at depths of 1113.7 and 1167.0 m. The shock pressure variation could be due to dynamic perturbation of the basement rock during peak ring formation.
Effect of Response Reduction Factor on Peak Floor Acceleration Demand in Mid-Rise RC Buildings
NASA Astrophysics Data System (ADS)
Surana, Mitesh; Singh, Yogendra; Lang, Dominik H.
2017-06-01
Estimation of the Peak Floor Acceleration (PFA) demand along the height of a building is crucial for the seismic safety of nonstructural components. The effect of the level of inelasticity, controlled by the response reduction factor (strength ratio), is studied using incremental dynamic analysis. A total of 1120 nonlinear dynamic analyses, using a suite of 30 recorded ground motion time histories, are performed on mid-rise reinforced-concrete (RC) moment-resisting frame buildings covering a wide range of periods of vibration. The obtained PFA demands are compared with some of the major national seismic design and retrofit codes (IS 1893 draft version, ASCE 41, EN 1998, and NZS 1170.4). It is observed that the PFA demand at the building's roof level decreases with increasing period of vibration as well as with the strength ratio. However, current seismic building codes do not account for these effects, thereby producing very conservative estimates of PFA demands. Based on the identified parameters affecting the PFA demand, a model to obtain the PFA distribution along the height of a building is proposed. The proposed model is validated with spectrum-compatible time history analyses of the considered buildings with different strength ratios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, D.; Winkler, J.
As energy-efficiency efforts focus increasingly on existing homes, we scratch our heads about construction decisions made 30, 40, 50 years ago and ask: 'What were they thinking?' A logical follow-on question is: 'What will folks think in 2050 about the homes we're building today?' This question can lead to a lively discussion, but the current practice that we find most alarming is placing ducts in the attic. In this paper, we explore through literature and analysis the impact duct location has on cooling load, peak demand, and energy cost in hot climates. For a typical new home in these climates, we estimate that locating ducts in attics rather than inside conditioned space increases the cooling load 0.5 to 1 ton, increases cooling costs 15%, and increases demand by 0.75 kW. The aggregate demand to service duct loss in homes built in Houston, Las Vegas, and Phoenix during the period 2000 through 2009 is estimated to be 700 MW. We present options for building homes with ducts in conditioned space and demonstrate that these options compare favorably with other common approaches to achieving electricity peak demand and consumption savings in homes.
Estimation of the optical errors on the luminescence imaging of water for proton beam
NASA Astrophysics Data System (ADS)
Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi
2018-04-01
Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference was attributable to optical phenomena: parallax errors of the optical system and reflection of the luminescence within the water phantom. We estimated the errors caused by these optical phenomena affecting the luminescence image of water. To estimate the parallax error in the luminescence images, we measured the luminescence images during proton-beam irradiation using a cooled charge-coupled device camera while changing the height of the optical axis of the camera relative to that of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were the highest. The reflection of the luminescence of water with a black-walled phantom was slightly smaller than that with a transparent phantom and changed the shapes of the depth profiles. We conclude that the parallax error significantly affects the heights of the Bragg peak and that reflection from the phantom affects the shapes of the depth profiles of the luminescence images of water.
Agreement Between VO2peak Predicted From PACER and One-Mile Run Time-Equated Laps.
Saint-Maurice, Pedro F; Anderson, Katelin; Bai, Yang; Welk, Gregory J
2016-12-01
This study examined the agreement between estimated peak oxygen consumption (VO2peak) obtained from the Progressive Aerobic Cardiovascular Endurance Run (PACER) fitness test and equated PACER laps derived from One-Mile Run time (MR). A sample of 680 participants (324 boys and 356 girls) in Grades 7 through 12 completed both the PACER and the MR assessments. MR time was converted to PACER laps (PACER-MEQ) using previously developed conversion algorithms. Agreement between PACER and PACER-MEQ VO2peak was examined using Pearson correlations, mean absolute percent error (MAPE), and equivalence testing procedures. Classification agreement based on health-related standards was examined using sensitivity, specificity, and Kappa statistics. Overall agreement between estimated VO2peak obtained from the PACER and PACER-MEQ was high in boys, r(324) = .79, R2 = .63, and moderate in girls, r(356) = .57, R2 = .33. The MAPE for estimates obtained from PACER-MEQ was 10.3% and estimates were deemed equivalent to the PACER (43.1 ± 6.9 mL/kg/min vs. 44.6 ± 0.3 mL/kg/min). Classification agreement as illustrated by sensitivity and specificity ranged from 20.4% to 90.2% and was higher for classifications in the Healthy Fitness Zone (HFZ). Kappa statistics ranged from .14 to .51 and were also higher for the HFZ. Equated PACER laps can be used to obtain equivalent estimates of PACER VO2peak in groups of adolescents, but some disparities can be found when students' scores are classified into the Needs Improvement Zone.
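The agreement statistics named above (MAPE, correlation, and Bland-Altman style limits) can be computed as in the sketch below; the paired VO2peak values are synthetic, generated only to show the calculation.

import numpy as np

# Agreement statistics of the kind reported above, computed on synthetic
# paired VO2peak estimates (mL/kg/min); the data are simulated, not the study's.
rng = np.random.default_rng(0)
vo2_pacer = rng.normal(44.6, 7.0, 680)
vo2_meq = vo2_pacer + rng.normal(-1.5, 4.5, 680)     # equated-laps estimate with bias and noise

mape = np.mean(np.abs(vo2_meq - vo2_pacer) / vo2_pacer) * 100.0
diff = vo2_meq - vo2_pacer
bias, sd = diff.mean(), diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd   # Bland-Altman 95% limits
r = np.corrcoef(vo2_pacer, vo2_meq)[0, 1]
print(f"MAPE {mape:.1f}%, bias {bias:.2f}, limits of agreement {loa_low:.2f} to {loa_high:.2f}, r {r:.2f}")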
Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.
2003-01-01
These maps present preliminary assessments of the probability of debris-flow activity and estimates of the peak discharges that could be generated by debris flows issuing from basins burned by the Piru, Simi, and Verdale Fires of October 2003 in southern California, in response to the 25-year, 10-year, and 2-year 1-hour rainstorms. The probability maps are based on the application of a logistic multiple-regression model that describes the percent chance of debris-flow production from an individual basin as a function of burned extent, soil properties, basin gradients, and storm rainfall. The peak discharge maps are based on application of a multiple-regression model that can be used to estimate debris-flow peak discharge at a basin outlet as a function of basin gradient, burned extent, and storm rainfall. Probabilities of debris-flow occurrence for the Piru Fire range between 2 and 94%, and estimates of debris-flow peak discharges range between 1,200 and 6,640 ft3/s (34 to 188 m3/s). Basins burned by the Simi Fire show probabilities of debris-flow occurrence between 1 and 98% and peak discharge estimates between 1,130 and 6,180 ft3/s (32 and 175 m3/s). The probabilities of debris-flow activity calculated for the Verdale Fire range from negligible values to 13%; peak discharges were not estimated for this fire because of these low probabilities. These maps are intended to identify the basins that are most prone to the largest debris-flow events and to provide information for the preliminary design of mitigation measures and for the planning of evacuation timing and routes.
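A hedged sketch of a logistic multiple-regression model of the general form described (percent chance of debris flow as a function of burned extent, soil properties, gradient, and storm rainfall) follows; the coefficients are placeholders, not the fitted values behind the USGS maps.

import math

def debris_flow_probability(burn_frac, clay_frac, gradient_frac, storm_rain_mm,
                            b0=-3.0, b_burn=4.0, b_clay=-2.0, b_grad=3.0, b_rain=0.05):
    """Logistic model P = 1 / (1 + exp(-eta)); coefficients are illustrative
    placeholders, not the fitted values behind the USGS maps."""
    eta = (b0 + b_burn * burn_frac + b_clay * clay_frac
           + b_grad * gradient_frac + b_rain * storm_rain_mm)
    return 1.0 / (1.0 + math.exp(-eta))

# Example: a heavily burned, steep basin under an assumed 25-year 1-hour storm depth.
print(f"{100.0 * debris_flow_probability(0.8, 0.1, 0.45, 30.0):.0f}% chance of debris flow")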
DOT National Transportation Integrated Search
2016-06-01
This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years,...
Huizinga, Richard J.
2014-01-01
The rainfall-runoff pairs from the storm-specific GUH analysis were further analyzed against various basin and rainfall characteristics to develop equations to estimate the peak streamflow and flood volume based on a quantity of rainfall on the basin.
Estimating an Impedance-to-Flow Parameter for Flood Peak Prediction in Semi-Arid Watersheds 1997
USDA-ARS?s Scientific Manuscript database
The time of concentration equation used in Pima County, Arizona, includes a hydrologic parameter representing the impedance to flow for peak discharge estimation on small (<10 mi2) semiarid watersheds. The impedance-to-flow parameter is similar in function to the hydraulic Manning’s n roughness coef...
NASA Astrophysics Data System (ADS)
Lin, Hualiang; Ratnapradipa, Kendra; Wang, Xiaojie; Zhang, Yonghui; Xu, Yanjun; Yao, Zhenjiang; Dong, Guanghui; Liu, Tao; Clark, Jessica; Dick, Rebecca; Xiao, Jianpeng; Zeng, Weilin; Li, Xing; Qian, Zhengmin (Min); Ma, Wenjun
2017-07-01
Compared with the daily mean concentration of air pollution, hourly peak concentration may be more directly relevant to acute health effects because of the high concentration levels; however, few studies have analyzed the acute mortality effects of hourly peak levels of air pollution. We examined the associations of hourly peak concentrations of fine particulate matter air pollution (PM2.5) with mortality in six cities in the Pearl River Delta, China. We used generalized additive Poisson models to examine the associations with adjustment for potential confounders in each city. We then applied random-effects meta-analyses to estimate the overall regional effects. We further estimated the mortality burden attributable to hourly peak and daily mean PM2.5. We observed significant associations between hourly peak PM2.5 and mortality. Each 10 μg/m3 increase in 4-day averaged (lag03) hourly peak PM2.5 corresponded to a 0.9% [95% confidence interval (CI): 0.7%, 1.1%] increase in total mortality, 1.2% (95% CI: 1.0%, 1.5%) in cardiovascular mortality, and 0.7% (95% CI: 0.2%, 1.1%) in respiratory mortality. We observed a greater mortality burden using hourly peak PM2.5 than daily mean PM2.5, with an estimated 12,915 (95% CI: 9,922, 15,949) premature deaths attributable to hourly peak PM2.5 and 7,951 (95% CI: 5,067, 10,890) to daily mean PM2.5 in the Pearl River Delta (PRD) region during the study period. This study suggests that hourly peak PM2.5 might be one important risk factor for mortality in the PRD region of China; the finding provides important information for future air pollution management and epidemiological studies.
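The conversions behind figures like these are standard: a log-linear (Poisson) coefficient is turned into a percent increase per 10 μg/m3, and an attributable-death count follows from an assumed exposure contrast and death total. The sketch below reuses the 0.9% figure from the abstract but invents the exposure contrast and the death total.

import math

# Standard conversions: a log-linear (Poisson) coefficient to a percent increase
# per 10 ug/m3, and an attributable-death count from an assumed exposure contrast.
# The 0.9% figure echoes the abstract; the contrast and death total are invented.
beta = math.log(1.009) / 10.0            # per 1 ug/m3 of hourly peak PM2.5
delta_pm = 25.0                          # assumed average excess over a reference level (ug/m3)
total_deaths = 1_450_000                 # assumed total deaths in the region and period

excess_pct = (math.exp(beta * 10.0) - 1.0) * 100.0
attributable = total_deaths * (1.0 - math.exp(-beta * delta_pm))
print(f"{excess_pct:.1f}% per 10 ug/m3; ~{attributable:,.0f} attributable deaths")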
Traveltime and longitudinal dispersion in Illinois streams
Graf, J.B.
1984-01-01
Twenty-seven measurements of traveltime and longitudinal dispersion in 10 Illinois streams provide data needed for estimating traveltime of peak concentration of a conservative solute, traveltime of the leading edge of a solute cloud, peak concentration resulting from a given quantity of solute, and passage time of solute past a given point on a stream for both measured and unmeasured streams. Traveltime of peak concentration and of the leading edge of the cloud are related to discharge at the downstream end of the reach, distance of travel, and the fraction of the time that discharge at a given location on the stream is equaled or exceeded. Peak concentration and passage time are best estimated from the relation of each to traveltime. In measured streams, dispersion efficiency is greater than that predicted by Fickian diffusion theory. The rate of decrease in peak concentration with traveltime is about equal to the rate of increase in passage time. Average velocity in a stream reach, given by the velocity of the center of solute mass in that reach, also can be estimated from an equation developed from measured values. (USGS)
2013-03-14
Dexamethasone increased maximal aerobic capacity compared with placebo. For example, pulse oximeter oxygen saturation at rest was significantly lower...IHE for 6 to 7 days reduces AMS by an estimated 20% and increases oxygen saturation levels by 1% to 3%. Several IHE protocols exist, but none have... oxygen kinetics (p<0.05) and reduced ventilatory equivalent for CO2 (p<0.01); no significant difference in peak O2 saturation between groups
The 2014 May Camelopardalid Meteor Shower
NASA Technical Reports Server (NTRS)
Cooke, Bill; Moser, Danielle
2014-01-01
On May 24, 2014, Earth will encounter multiple streams of debris laid down by Comet 209P/LINEAR. This will likely produce a new meteor shower, never before seen. Rates are predicted to be from 100 to 1000 meteors per hour between 2 and 4 AM EDT, so we are dealing with a meteor outburst, potentially a storm; the best current estimate of the peak rate is 200 per hour. It is difficult to calibrate the models due to the lack of past observations. The models indicate millimeter-size particles in the stream, so there is a potential risk to Earth-orbiting spacecraft.
Streamflow model of Wisconsin River for estimating flood frequency and volume
Krug, William R.; House, Leo B.
1980-01-01
The 100-year flood peak at Wisconsin Dells, computed from the simulated, regulated streamflow data for the period 1915-76, is 82,000 cubic feet per second, including the effects of all the reservoirs in the river system, as they are currently operated. It also includes the effects of Lakes Du Bay, Petenwell, and Castle Rock which are significant for spring floods but are insignificant for summer or fall floods because they are normally maintained nearly full in the summer and fall and have very little storage for floodwaters. (USGS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Field, Kevin G; Pape, Yann Le; Remec, Igor
A large fraction of light water reactor (LWR) construction utilizes concrete, including safety-related structures such as the biological shielding and containment building. Concrete is an inherently complex material, with the properties of concrete structures changing over their lifetime due to the intrinsic nature of concrete and influences from the local environment. As concrete structures within LWRs age, the total neutron fluence exposure of the components, in particular the biological shield, can increase to levels where deleterious effects are introduced as a result of neutron irradiation. This work summarizes the current state of the art on irradiated concrete, including a review of the current literature, and estimates the total neutron fluence expected in biological shields in typical LWR configurations. It was found that a first-order mechanism for loss of mechanical properties of irradiated concrete is radiation-induced swelling of aggregates, which leads to volumetric expansion of the concrete. This phenomenon is estimated to occur near the end of life of biological shield components in LWRs, based on calculations of estimated peak neutron fluence in the shield after 80 years of operation.
Asquith, William H.; Roussel, Meghan C.
2009-01-01
Annual peak-streamflow frequency estimates are needed for flood-plain management; for objective assessment of flood risk; for cost-effective design of dams, levees, and other flood-control structures; and for design of roads, bridges, and culverts. Annual peak-streamflow frequency represents the peak streamflow for nine recurrence intervals of 2, 5, 10, 25, 50, 100, 200, 250, and 500 years. Common methods for estimation of peak-streamflow frequency for ungaged or unmonitored watersheds are regression equations for each recurrence interval developed for one or more regions; such regional equations are the subject of this report. The method is based on analysis of annual peak-streamflow data from U.S. Geological Survey streamflow-gaging stations (stations). Beginning in 2007, the U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, began a 3-year investigation concerning the development of regional equations to estimate annual peak-streamflow frequency for undeveloped watersheds in Texas. The investigation focuses primarily on 638 stations with 8 or more years of data from undeveloped watersheds and other criteria. The general approach is explicitly limited to the use of L-moment statistics, which are used in conjunction with a technique of multi-linear regression referred to as PRESS minimization. The approach used to develop the regional equations, which was refined during the investigation, is referred to as the 'L-moment-based, PRESS-minimized, residual-adjusted approach'. For the approach, seven unique distributions are fit to the sample L-moments of the data for each of 638 stations, and trimmed means of the seven results of the distributions for each recurrence interval are used to define the station-specific peak-streamflow frequency. As a first iteration of regression, nine weighted-least-squares, PRESS-minimized, multi-linear regression equations are computed using the watershed characteristics of drainage area, dimensionless main-channel slope, and mean annual precipitation. The residuals of the nine equations are spatially mapped, and residuals for the 10-year recurrence interval are selected for generalization to 1-degree latitude and longitude quadrangles. The generalized residual is referred to as the OmegaEM parameter and represents a generalized terrain and climate index that expresses peak-streamflow potential not otherwise represented in the three watershed characteristics. The OmegaEM parameter was assigned to each station, and using OmegaEM, nine additional regression equations are computed. Because of favorable diagnostics, the OmegaEM equations are expected to be generally reliable estimators of peak-streamflow frequency for undeveloped and ungaged stream locations in Texas. The mean residual standard error, adjusted R-squared, and percentage reduction of PRESS by use of OmegaEM are 0.30 log10, 0.86, and -21 percent, respectively. Inclusion of the OmegaEM parameter provides a substantial reduction in the PRESS statistic of the regression equations and removes considerable spatial dependency in regression residuals. Although the OmegaEM parameter requires interpretation on the part of analysts and the potential exists that different analysts could estimate different values for a given watershed, the authors suggest that typical uncertainty in the OmegaEM estimate might be about +or-0.10 log10.
Finally, given the two ensembles of equations reported herein and those in previous reports, hydrologic design engineers and other analysts have several different methods, which represent different analytical tracks, to make comparisons of peak-streamflow frequency estimates for ungaged watersheds in the study area.
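The general shape of such a regional regression, log10 peak streamflow regressed on log10 watershed characteristics plus an OmegaEM-like index by weighted least squares, is sketched below on synthetic data; the coefficients, weights, and stations are invented and are not those of the report.

import numpy as np

# Weighted least-squares fit of log10(Q100) on log10 watershed characteristics
# plus a generalized-residual term, mimicking the general form of the regional
# equations (synthetic data; not the report's coefficients or stations).
rng = np.random.default_rng(1)
n = 200
log_area = rng.uniform(0.5, 3.5, n)              # log10 square miles
log_slope = rng.uniform(-3.0, -1.0, n)           # log10 dimensionless slope
log_precip = rng.uniform(1.0, 1.8, n)            # log10 inches
omega_em = rng.normal(0.0, 0.2, n)               # terrain/climate index
log_q100 = (1.5 + 0.55 * log_area + 0.3 * log_slope
            + 0.8 * log_precip + 1.0 * omega_em + rng.normal(0, 0.25, n))

X = np.column_stack([np.ones(n), log_area, log_slope, log_precip, omega_em])
w = rng.uniform(8, 60, n)                        # weights, e.g. years of record
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_q100)
resid = log_q100 - X @ beta
print("coefficients:", np.round(beta, 3))
print("residual standard error (log10):", round(resid.std(ddof=X.shape[1]), 3))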
NASA Astrophysics Data System (ADS)
Sundar, Shyam; Mosqueira, J.; Alvarenga, A. D.; Sóñora, D.; Sefat, A. S.; Salem-Sugui, S., Jr.
2017-12-01
Isothermal magnetic field dependence of magnetization and magnetic relaxation measurements were performed for H parallel to the c axis on a single crystal of the Ba(Fe0.935Co0.065)2As2 pnictide superconductor with Tc = 21.7 K. The second magnetization peak (SMP) in each isothermal M(H) was observed over a wide temperature range, from Tc down to the lowest temperature of measurement (2 K). The magnetic field dependence of the relaxation rate R(H) showed a peak (Hspt) between Hon (onset of the SMP in M(H)) and Hp (peak field of the SMP in M(H)), which is likely related to a vortex-lattice structural phase transition, as suggested in the literature for a similar sample. In addition, the magnetic relaxation measured for magnetic fields near Hspt showed some noise, which might be a signature of the structural phase transition of the vortex lattice. Analysis of the magnetic relaxation data using Maley's criterion and collective pinning theory suggested that the SMP in the sample was due to the collective (elastic) to plastic creep crossover, which was also accompanied by a rhombic to square vortex-lattice phase transition. Analysis of the pinning force density suggested a single dominant pinning mechanism in the sample, which did not show the usual δl and δTc nature of pinning. The critical current density (Jc), estimated using the Bean critical state model, was found to be 5 × 10^5 A cm^-2 at 2 K in the zero magnetic field limit. Surprisingly, the maximum of the pinning force density was not responsible for the maximum value of the critical current density in the sample.
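For reference, the Bean critical-state estimate commonly used for platelet crystals with H parallel to c is Jc = 20 ΔM / [a(1 - a/3b)], with ΔM in emu/cm3 and in-plane dimensions a ≤ b in cm; the sketch below applies it with invented sample dimensions and loop width, not the paper's data.

# Bean critical-state estimate of Jc for a rectangular platelet with the field
# along c: Jc = 20*dM / (a*(1 - a/(3*b))), dM in emu/cm^3, a <= b in cm.
# Sample dimensions and hysteresis width below are illustrative, not the paper's.
a_cm, b_cm = 0.08, 0.12          # in-plane dimensions, a <= b
dM_emu_cm3 = 1.5e3               # width of the magnetization loop at low field

jc = 20.0 * dM_emu_cm3 / (a_cm * (1.0 - a_cm / (3.0 * b_cm)))
print(f"Jc ~ {jc:.2e} A/cm^2")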
Assessment of the magnetic field exposure due to the battery current of digital mobile phones.
Jokela, Kari; Puranen, Lauri; Sihvonen, Ari-Pekka
2004-01-01
Hand-held digital mobile phones generate pulsed magnetic fields associated with the battery current. The peak value and the waveform of the battery current were measured for seven different models of digital mobile phones, and the results were used to compute, approximately, the magnetic flux density and induced currents in the phone user's head. A simple circular loop model was used for the magnetic field source, and a homogeneous sphere consisting of average brain-tissue-equivalent material simulated the head. The broadband magnetic flux density and the maximal induced current density were compared with the ICNIRP guidelines using two different approaches. In the first approach the relative exposure was determined separately at each frequency and the exposure ratios were summed to obtain the total exposure (multiple-frequency rule). In the second approach the waveform was weighted in the time domain with a simple low-pass RC filter and the peak value was divided by a peak limit, both derived from the guidelines (weighted peak approach). With the maximum transmitting power (2 W), the measured peak current varied from 1 to 2.7 A. The ICNIRP exposure ratio based on the current density varied from 0.04 to 0.14 for the weighted peak approach and from 0.08 to 0.27 for the multiple-frequency rule. The latter values are considerably greater than the corresponding exposure ratios of 0.005 (min) to 0.013 (max) obtained by applying the evaluation based on frequency components presented in the new IEEE standard. Hence, the exposure does not seem to exceed the guidelines. The computed peak magnetic flux density substantially exceeded the derived peak reference level of ICNIRP, but it should be noted that in near-field exposure the external field strengths are not valid indicators of exposure. Currently, no biological data exist to give reason for concern about health effects of magnetic field pulses from mobile phones.
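A schematic version of the multiple-frequency summation described above is sketched below: the exposure ratio is the sum over harmonics of the field amplitude divided by a frequency-dependent reference level. Both the harmonic amplitudes and the reference-level function here are placeholders, not measured data or the ICNIRP table.

import numpy as np

# Multiple-frequency summation of the kind described above: the exposure ratio
# is sum_i B_i / B_limit(f_i) over harmonics of the 217 Hz battery-current pulses.
# Harmonic amplitudes and the reference-level function are placeholders only.
def b_limit_tesla(f_hz):
    # Illustrative low-frequency reference-level shape (not the ICNIRP table):
    # flat below 800 Hz, falling as 1/f above it.
    return 6.25e-6 if f_hz <= 800.0 else 6.25e-6 * 800.0 / f_hz

harmonics = np.arange(1, 11) * 217.0                       # pulse-repetition harmonics (Hz)
b_peak = 2.0e-6 * np.array([1.0, 0.6, 0.4, 0.3, 0.25,
                            0.2, 0.17, 0.15, 0.13, 0.12])  # field amplitudes (T), assumed

exposure_ratio = sum(b / b_limit_tesla(f) for b, f in zip(b_peak, harmonics))
print(f"multiple-frequency exposure ratio = {exposure_ratio:.2f}")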
Mazzoni, Gianni; Chiaranda, Giorgio; Myers, Jonathan; Sassone, Biagio; Pasanisi, Giovanni; Mandini, Simona; Volpato, Stefano; Conconi, Francesco; Grazzi, Giovanni
2017-09-29
The walking speed maintained during a moderate 1-km treadmill walk (1k-TWT) has been shown to be a valid tool for estimating peak oxygen uptake (VO2peak) and to be inversely related to long-term survival and hospitalization in outpatients with cardiovascular disease (CVD). We aimed to examine whether 500-m and 1-km moderate treadmill-walking tests equally estimate VO2peak in male outpatients with CVD. 142 clinically stable male outpatients with CVD, aged 34-92 years, referred to an exercise-based secondary prevention program, performed a moderate and perceptually regulated (11-13/20 on the Borg scale) 1k-TWT. Age, height, weight, time to walk 500 m and the entire 1000 m, and the corresponding heart rates were entered into validated equations to estimate VO2peak. VO2peak estimated from the 500-m test was not different from that estimated from the 1-km test (25.2±5.1 vs 25.1±5.2 mL/kg/min). The correlation coefficient between the two was 0.98. The slope and the intercept of the relationship between the 500-m and 1-km tests were not different from the line of identity. Bland-Altman analysis demonstrated that 96% of the data points were within two standard deviations (from -1.9 to 1.7 mL/kg/min). The 500-m treadmill-walking test is a reliable method for estimating VO2peak in stable male outpatients with CVD. The shorter version of the test, 500 m, provides similar information as the original 1-km test but is more time efficient. These findings have practical implications in the context of transitioning patients from clinically based, supervised programs to fitness facilities or self-guided exercise programs.
Hurricane Mitch: Peak Discharge for Selected River Reaches in Honduras
Smith, Mark E.; Phillips, Jeffrey V.; Spahr, Norman E.
2002-01-01
Hurricane Mitch began as a tropical depression in the Caribbean Sea on 22 October 1998. By 26 October, Mitch had strengthened to a Category 5 storm as defined by the Saffir-Simpson Hurricane Scale (National Climate Data Center, 1999a), and on 27 October was threatening the northern coast of Honduras (fig. 1). After making landfall 2 days later (29 October), the storm drifted south and west across Honduras, wreaking destruction throughout the country before reaching the Guatemalan border on 31 October. According to the National Climate Data Center of the National Oceanic and Atmospheric Administration (National Climate Data Center, 1999b), Hurricane Mitch ranks among the five strongest storms on record in the Atlantic Basin in terms of its sustained winds, barometric pressure, and duration. Hurricane Mitch also was one of the worst Atlantic storms in terms of loss of life and property. The regionwide death toll was estimated to be more than 9,000; thousands of people were reported missing. Economic losses in the region were more than $7.5 billion (U.S. Agency for International Development, 1999). Honduras suffered the most widespread devastation during the storm. More than 5,000 deaths, and economic losses of more than $4 billion, were reported by the Government of Honduras. Honduran officials estimated that Hurricane Mitch destroyed 50 years of economic development. In addition to the human and economic losses, intense flooding and landslides scarred the Honduran landscape - hydrologic and geomorphologic processes throughout the country likely will be affected for many years. As part of the U.S. Government's response to the disaster, the U.S. Geological Survey (USGS) conducted post-flood measurements of peak discharge at 16 river sites throughout Honduras (fig. 2). Such measurements, termed 'indirect' measurements, are used to determine peak flows when direct measurements (using current meters or dye studies, for example) cannot be made. Indirect measurements of peak discharge are based on post-flood surveys of the river channel (observed high-water marks, cross sections, and hydraulic properties) and model computation of peak discharge. Determination of the flood peaks associated with Hurricane Mitch will help scientists understand the magnitude of this devastating hurricane. Peak-discharge information also is critical for the proper design of hydraulic structures (such as bridges and levees), delineation of theoretical flood boundaries, and development of stage-discharge relations at streamflow-monitoring sites.
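Indirect peak-discharge determinations of this kind often reduce to a slope-area (Manning-type) computation from surveyed high-water marks; a minimal sketch with invented channel geometry follows (it is not one of the 16 Honduran sites).

# Slope-area style estimate of peak discharge from surveyed high-water marks:
# Q = (1/n) * A * R^(2/3) * S^(1/2) in SI units. Channel numbers are invented.
n_manning = 0.040          # roughness for a gravel-bed channel (assumed)
area_m2 = 850.0            # flow area at the surveyed cross section
wetted_perimeter_m = 180.0
slope = 0.0035             # water-surface slope from high-water marks

hydraulic_radius = area_m2 / wetted_perimeter_m
q_peak = (1.0 / n_manning) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
print(f"peak discharge ~ {q_peak:.0f} m^3/s")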
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
Increasing the Life of a Xenon-Ion Spacecraft Thruster
NASA Technical Reports Server (NTRS)
Goebel, Dan; Polk, James; Sengupta, Anita; Wirz, Richard
2007-01-01
A short document summarizes the redesign of a xenon-ion spacecraft thruster to increase its operational lifetime beyond a limit heretofore imposed by nonuniform ion-impact erosion of an accelerator electrode grid. A peak in the ion current density on the centerline of the thruster causes increased erosion in the center of the grid. The ion current density in the NSTAR thruster that was the subject of this investigation was characterized by a peak-to-average ratio of 2:1 and a peak-to-edge ratio of greater than 10:1. The redesign was directed toward distributing the same beam current more evenly over the entire grid and involved several modifications of the magnetic-field topography in the thruster to obtain more nearly uniform ionization. The net result of the redesign was to reduce the peak ion current density by nearly a factor of two, thereby halving the peak erosion rate and doubling the life of the thruster.
Peak oxygen consumption measured during the stair-climbing test in lung resection candidates.
Brunelli, Alessandro; Xiumé, Francesco; Refai, Majed; Salati, Michele; Di Nunzio, Luca; Pompili, Cecilia; Sabbatini, Armando
2010-01-01
The stair-climbing test is commonly used in the preoperative evaluation of lung resection candidates, but it is difficult to standardize and provides little physiologic information on performance. The aim was to verify the association between the altitude climbed and the VO2peak measured during the stair-climbing test. 109 consecutive candidates for lung resection performed a symptom-limited stair-climbing test with direct breath-by-breath measurement of VO2peak by a portable gas analyzer. Stepwise logistic regression and bootstrap analyses were used to verify the association of several perioperative variables with a VO2peak <15 ml/kg/min. Subsequently, multiple regression analysis was performed to develop an equation estimating VO2peak from stair-climbing parameters and other patient-related variables. 56% of patients climbing <14 m had a VO2peak <15 ml/kg/min, whereas 98% of those climbing >22 m had a VO2peak >15 ml/kg/min. The altitude reached in the stair-climbing test was the only significant predictor of a VO2peak <15 ml/kg/min after logistic regression analysis. Multiple regression analysis yielded an equation to estimate VO2peak factoring in altitude (p < 0.0001), speed of ascent (p = 0.005), and body mass index (p = 0.0008). There was an association between altitude and VO2peak measured during the stair-climbing test. Most patients climbing more than 22 m are able to generate high values of VO2peak and can proceed to surgery without additional tests. All others should be referred for a formal cardiopulmonary exercise test. In addition, we were able to generate an equation to estimate VO2peak, which could assist in streamlining the preoperative workup and could be used across different settings to standardize this test. Copyright (c) 2010 S. Karger AG, Basel.
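A sketch of a linear predictor of the general form described (VO2peak from altitude climbed, speed of ascent, and body mass index) follows; the coefficients are placeholders and do not reproduce the published equation.

def estimate_vo2peak(altitude_m, ascent_speed_m_per_min, bmi,
                     b0=8.0, b_alt=0.4, b_speed=0.3, b_bmi=-0.1):
    """Linear predictor of VO2peak (mL/kg/min) from stair-climbing altitude,
    speed of ascent, and body mass index. Coefficients are illustrative
    placeholders, not the equation published in the study."""
    return b0 + b_alt * altitude_m + b_speed * ascent_speed_m_per_min + b_bmi * bmi

# A patient who climbs 20 m at 15 m/min with a BMI of 27:
print(round(estimate_vo2peak(20.0, 15.0, 27.0), 1), "mL/kg/min")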
Luczak, Susan E; Rosen, I Gary
2014-08-01
Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step toward making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain-scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.
Oceanic Lightning versus Continental Lightning: VLF Peak Current Discrepancies
NASA Astrophysics Data System (ADS)
Dupree, N. A., Jr.; Moore, R. C.
2015-12-01
Recent analysis of the Vaisala global lightning data set GLD360 suggests that oceanic lightning tends to exhibit larger peak currents than continental lightning (lightning occurring over land). The GLD360 peak current measurement is derived from distant measurements of the electromagnetic fields emanated during the lightning flash. Because the GLD360 peak current measurement is a derived quantity, it is not clear whether the actual peak currents of oceanic lightning tend to be larger, or whether the resulting electromagnetic field strengths tend to be larger. In this paper, we present simulations of VLF signal propagation in the Earth-ionosphere waveguide to demonstrate that the peak field values for oceanic lightning can be significantly stronger than for continental lightning. Modeling simulations are performed using the Long Wave Propagation Capability (LWPC) code to directly evaluate the effect of ground conductivity on VLF signal propagation in the 5-15 kHz band. LWPC is an inherently narrowband propagation code that has been modified to predict the broadband response of the Earth-Ionosphere waveguide to an impulsive lightning flash while preserving the ability of LWPC to account for an inhomogeneous waveguide. Furthermore, we evaluate the effect of return stroke speed on these results.
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, an additional 13 years of peak-discharge data since a 1997 investigation, which used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations; this application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years in the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation. No distinction of floods produced from a high-elevation region was presented in the 1997 investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and an improved GIS method. Flood-frequency estimates can be transferred to ungaged sites on the same stream either by direct application of the regional regression equation or, for an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
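The drainage-area-ratio transfer mentioned at the end of the abstract can be written compactly as Q_ungaged = Q_gaged × (A_ungaged / A_gaged)^b, where b is the drainage-area exponent of the regional equation; the sketch below uses invented values.

# Transfer a gaged flood-frequency estimate to a nearby ungaged site on the
# same stream using the drainage-area ratio and the regional drainage-area
# exponent (all values below are illustrative).
q100_gaged_cfs = 12000.0      # 100-year peak at the gaging station
area_gaged_mi2 = 250.0
area_ungaged_mi2 = 180.0
b_exponent = 0.55             # drainage-area exponent from the regional equation (assumed)

q100_ungaged = q100_gaged_cfs * (area_ungaged_mi2 / area_gaged_mi2) ** b_exponent
print(f"100-year peak at ungaged site ~ {q100_ungaged:.0f} ft^3/s")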
Optically controlled resonant tunneling in a double-barrier diode
NASA Astrophysics Data System (ADS)
Kan, S. C.; Wu, S.; Sanders, S.; Griffel, G.; Yariv, A.
1991-03-01
The resonant tunneling effect is optically enhanced in a GaAs/GaAlAs double-barrier structure that has partial lateral current confinement. The peak current increases and the valley current decreases simultaneously when the device surface is illuminated, due to the increased conductivity of the top layer of the structure. The effect of the lateral current confinement on the current-voltage characteristic of a double-barrier resonant tunneling structure was also studied. With increased lateral current confinement, the peak and valley current decrease at a different rate such that the current peak-to-valley ratio increases up to three times. The experimental results are explained by solving the electrostatic potential distribution in the structure using a simple three-layer model.
Pre-flare association of magnetic fields and millimeter-wave radio emission
NASA Technical Reports Server (NTRS)
Mayfield, E. B.; White, K. P., III
1976-01-01
Observations of radio emission at 3.3 mm wavelength associated with magnetic fields in active regions are reported. Results from more than 200 regions during the years 1967-1968 show a strong correlation between peak enhanced millimeter emission, the total flux of the longitudinal component of photospheric magnetic fields, and the number of flares produced during transit of active regions. For magnetic flux greater than 10^21 maxwells, flares will occur, and for a flux of 10^23 maxwells the sum of the H-alpha flare importance numbers is about 40. The peak millimeter enhancement increases with magnetic flux for regions which subsequently flared. Estimates of the magnetic energy available and the correlation with flare production indicate that the photospheric fields and probably chromospheric currents are responsible for the observed pre-flare heating and provide the energy of flares.
Hejl, H.R.
1989-01-01
The precipitation-runoff modeling system was applied to the 8.21-square-mile drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and from 0.20 to 0.03 in/h for the two general soil groups in the calibrations. Simulated runoff volumes using 7 of the 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharge had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard errors of estimating volumes and peaks. (USGS)
Modeling Earth's surface topography: decomposition of the static and dynamic components
NASA Astrophysics Data System (ADS)
Guerri, M.; Cammarano, F.; Tackley, P. J.
2017-12-01
Isolating the portion of topography supported by mantle convection, the so-called dynamic topography, would give us precious information on vigor and style of the convection itself. Contrasting results on the estimate of dynamic topography motivate us to analyse the sources of uncertainties affecting its modeling. We obtain models of mantle and crust density, leveraging on seismic and mineral physics constraints. We use the models to compute isostatic topography and residual topography maps. Estimates of dynamic topography and associated synthetic geoid are obtained by instantaneous mantle flow modeling. We test various viscosity profiles and 3D viscosity distributions accounting for inferred lateral variations in temperature. We find that the patterns of residual and dynamic topography are robust, with an average correlation coefficient of 0.74 and 0.71, respectively. The amplitudes are however poorly constrained. For the static component, the considered lithospheric mantle density models result in topographies that differ, on average, 720 m, with peaks reaching 1.7 km. The crustal density models produce variations in isostatic topography averaging 350 m, with peaks of 1 km. For the dynamic component, we obtain peak-to-peak topography amplitude exceeding 3 km for all the tested mantle density and viscosity models. Such values of dynamic topography produce geoid undulations that are not in agreement with observations. Assuming chemical heterogeneities in the lower mantle, in correspondence with the LLSVPs (Large Low Shear wave Velocity Provinces), helps to decrease the amplitudes of dynamic topography and geoid, but reduces the correlation between synthetic and observed geoid. The correlation coefficients between the residual and dynamic topography maps is always less than 0.55. In general, our results indicate that, i) current knowledge of crust density, mantle density and mantle viscosity is still limited, ii) it is important to account for all the various sources of uncertainties when computing static and dynamic topography. In conclusion, a multidisciplinary approach, which involves multiple geophysics observations and constraints from mineral physics, is necessary for obtaining robust density models and, consequently, for properly estimating the dynamic topography.
Kretzschmar, Mirjam; Teunis, Peter F. M.; Pebody, Richard G.
2010-01-01
Background Despite large-scale vaccination programmes, pertussis has remained endemic in all European countries and has been on the rise in many countries in the last decade. One of the reasons that have been discussed for the failure of vaccination to eliminate the disease is continued circulation of the pathogen Bordetella pertussis by mostly asymptomatic and mild infections in adolescents and adults. To understand the impact of asymptomatic and undiagnosed infection on the transmission dynamics of pertussis we analysed serological data from five European countries in combination with information about social contact patterns from five of those countries to estimate incidence and reproduction numbers. Methods and Findings We compared two different methods for estimating incidence from individual data on IgG pertussis toxin (PT) titres. One method combines the cross-sectional surveys of titres with longitudinal information about the distribution of amplitude and decay rate of titres in a back-calculation approach. The second method uses age-dependent contact matrices and cross-sectional surveys of IgG PT titres to estimate a next generation matrix for pertussis transmission among age groups. The next generation approach allows for computation of basic reproduction numbers for five European countries. Our main findings are that the seroincidence of infections as estimated with the first method in all countries lies between 1% and 6% per annum with a peak in the adolescent age groups and a second lower peak in young adults. The incidence of infections as estimated by the second method lies slightly lower with ranges between 1% and 4% per annum. There is a remarkably good agreement of the results obtained with the two methods. The basic reproduction numbers are similar across countries at around 5.5. Conclusions Vaccination with currently used vaccines cannot prevent continued circulation and reinfection with pertussis, but has shifted the bulk of infections to adolescents and adults. If a vaccine conferring lifelong protection against clinical and subclinical infection were available pertussis could be eliminated. Currently, continuing circulation of the pathogen at a subclinical level provides a refuge for the pathogen in which it can evolve and adjust to infect vaccinated populations. Please see later in the article for the Editors' Summary PMID:20585374
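The next-generation-matrix calculation referred to in the abstract amounts to taking the dominant eigenvalue of a matrix built from age-group contact rates, the infectious period, and a transmissibility scaling; the sketch below uses a small synthetic contact matrix, not the study's estimates.

import numpy as np

# Basic reproduction number as the dominant eigenvalue of a next-generation
# matrix K = q * C * D, with C a contact matrix (contacts per day between age
# groups), D the infectious period (days), and q a per-contact transmission
# probability. All numbers below are synthetic, not the study's estimates.
contacts = np.array([[8.0, 3.0, 1.0],
                     [3.0, 6.0, 2.0],
                     [1.0, 2.0, 4.0]])   # children, adolescents, adults
infectious_days = 14.0
q = 0.03                                  # per-contact per-day transmission probability (assumed)

K = q * contacts * infectious_days
r0 = max(abs(np.linalg.eigvals(K)))
print(f"R0 ~ {r0:.1f}")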
The Lumbar Lordosis in Males and Females, Revisited.
Hay, Ori; Dar, Gali; Abbas, Janan; Stein, Dan; May, Hila; Masharawi, Youssef; Peled, Nathan; Hershkovitz, Israel
2015-01-01
Whether differences exist between male and female lumbar lordosis has been debated by researchers, who are divided as to the nature of variations in the spinal curve, their origin, reasoning, and implications from morphological, functional, and evolutionary perspectives. Evaluation of the spinal curvature is valuable for understanding the evolution of the spine and its pathology, for planning surgical procedures, and for monitoring the progression and treatment of spinal deformities. The aim of the current study was to revisit the nature of the lumbar curve in males and females. Our new automated method uses CT imaging of the spine to measure lumbar curvature in males and females. The curves extracted from 158 individuals were based on the spinal canal, thus avoiding traditional pitfalls of using bone features for curve estimation. The model analysis was carried out on the entire curve, whereby both local and global descriptors were examined in a single framework. Six parameters were calculated: segment length, curve length, curvedness, lordosis peak location, lordosis cranial peak height, and lordosis caudal peak height. Compared to males, the female spine manifested a statistically significantly greater curvature, a more caudally located lordotic peak, and greater cranial peak height. As caudal peak height is similar for males and females, the illusion of deeper lordosis among females is due partially to the fact that the upper part of the female lumbar curve is positioned more dorsally (more backwardly inclined). Males and females manifest different lumbar curve shapes, yet a similar amount of inward curving (lordosis). The morphological characteristics of the female spine probably developed to reduce stress on the vertebral elements during pregnancy and nursing.
NASA Astrophysics Data System (ADS)
Shin, Sunhae; Rok Kim, Kyung
2015-06-01
In this paper, we propose a novel multiple negative differential resistance (NDR) device with an ultra-high peak-to-valley current ratio (PVCR) of over 10^6, obtained by combining a tunnel diode with a conventional MOSFET, which suppresses the valley current to the transistor off-leakage level. Band-to-band tunneling (BTBT) in the tunnel junction provides the first peak, and the second peak and valley are generated by the suppression of the diffusion current in the tunnel diode by the off-state MOSFET. The multiple NDR curves can be controlled by the doping concentration of the tunnel junction and the threshold voltage of the MOSFET. By using complementary multiple NDR devices, a five-state memory is demonstrated with only six transistors.
Addressing Postsecondary Access for Undocumented Students. ECS Education Trends
ERIC Educational Resources Information Center
Anderson, Lexi
2015-01-01
In 2012, there were an estimated 11.2 million undocumented individuals living in the United States. The unauthorized immigrant population peaked in 2007 at 12.2 million, a stark rise from the original estimate of 3.5 million in 1990. Although down from its peak, a sizeable and stable population of unauthorized individuals resides in the…
NASA Astrophysics Data System (ADS)
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2015-04-01
Estimating peak discharges with very low probabilities is still accompanied by large uncertainties. Common estimation methods are usually based on extreme value statistics applied to observed time series or to hydrological model outputs. However, such methods assume the system to be stationary and do not specifically consider non-stationary effects. Observed time series may exclude events where peak discharge is damped by retention effects, as this process does not occur until specific thresholds, possibly beyond those of the highest measured event, are exceeded. Hydrological models can be complemented and parameterized with non-linear functions. However, in such cases calibration depends on observed data and non-stationary behaviour is not deterministically calculated. Our study discusses the option of considering retention effects on extreme peak discharges by coupling hydrological and hydraulic models. This possibility is tested by forcing the semi-distributed deterministic hydrological model PREVAH with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). The procedure ensures that the estimated extreme peak discharge does not exceed the physical limit given by the riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered.
Gorcsan, J; Snow, F R; Paulsen, W; Nixon, J V
1991-03-01
A completely noninvasive method for estimating left atrial pressure in patients with congestive heart failure and mitral regurgitation has been devised with the use of continuous-wave Doppler echocardiography and brachial sphygmomanometry. Of 46 patients studied with mitral regurgitation, 35 (76%) had jets with distinct Doppler spectral envelopes recorded. The peak ventriculoatrial gradient was obtained by measuring peak mitral regurgitant velocity in systole and using the modified Bernoulli equation. This gradient was then subtracted from peak brachial systolic blood pressure, an estimate of left ventricular systolic pressure, to yield left atrial pressure (left atrial pressure = systolic blood pressure - mitral regurgitant pressure gradient). Noninvasive estimates of left atrial pressure from 35 patients were plotted against simultaneous recordings of mean pulmonary capillary wedge pressure, resulting in the correlation y = 0.88x + 3.3, r = 0.88, standard error of estimate = ±4 mm Hg (p < 0.001). Therefore, continuous-wave Doppler echocardiography and sphygmomanometry may be used in selected patients with congestive heart failure and mitral regurgitation for noninvasive estimation of left atrial pressure.
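The pressure arithmetic described above is simple enough to show directly. The following sketch applies the modified Bernoulli step and the subtraction from cuff systolic pressure; the input values are hypothetical examples, not patient data from the study.

```python
def left_atrial_pressure(systolic_bp_mmhg: float, mr_peak_velocity_m_s: float) -> float:
    """LAP (mm Hg) = systolic BP - 4 * v^2, with the regurgitant jet velocity v in m/s."""
    va_gradient = 4.0 * mr_peak_velocity_m_s ** 2   # modified Bernoulli equation
    return systolic_bp_mmhg - va_gradient

# Hypothetical example: cuff systolic pressure 110 mm Hg, peak MR jet velocity 4.8 m/s
print(left_atrial_pressure(110.0, 4.8))   # about 17.8 mm Hg
```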
DOA estimation of noncircular signals for coprime linear array via locally reduced-dimensional Capon
NASA Astrophysics Data System (ADS)
Zhai, Hui; Zhang, Xiaofei; Zheng, Wang
2018-05-01
We investigate the issue of direction of arrival (DOA) estimation of noncircular signals for a coprime linear array (CLA). The noncircular property enhances the degrees of freedom and improves angle estimation performance, but it leads to a more complex angle ambiguity problem. To eliminate the ambiguity, we theoretically prove that the actual DOAs of noncircular signals can be uniquely estimated by finding the coincident results from the two decomposed subarrays based on their coprimeness. We propose a locally reduced-dimensional (RD) Capon algorithm for DOA estimation of noncircular signals for CLA. The RD processing is used in the proposed algorithm to avoid a two-dimensional (2D) spectral peak search, and coprimeness is employed to avoid a global spectral peak search. The proposed algorithm requires only a one-dimensional local spectral peak search, and it has very low computational complexity. Furthermore, the proposed algorithm needs no prior knowledge of the number of sources. We also derive the Cramér-Rao bound of DOA estimation of noncircular signals in CLA. Numerical simulation results demonstrate the effectiveness and superiority of the algorithm.
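The coprime disambiguation idea can be illustrated with a toy example. The sketch below is not the paper's RD Capon algorithm; it only shows how nearly coincident estimates from the two subarrays single out the true DOAs, with hypothetical candidate angles standing in for the outputs of each subarray's spectral search.

```python
def match_coincident(cands_1, cands_2, tol_deg=0.5):
    """Return averaged DOA estimates that appear (within tol) in both candidate sets."""
    matched = []
    for a in cands_1:
        b = min(cands_2, key=lambda x: abs(x - a))  # nearest candidate in the other set
        if abs(b - a) <= tol_deg:
            matched.append(0.5 * (a + b))
    return sorted(matched)

# Hypothetical candidates (degrees): true sources near 10.2 and 35.1,
# plus ambiguity angles unique to each subarray.
subarray_1 = [-42.0, 10.2, 35.1, 61.3]
subarray_2 = [-17.5, 10.1, 35.0, 74.8]
print(match_coincident(subarray_1, subarray_2))   # -> [10.15, 35.05]
```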
NASA Astrophysics Data System (ADS)
da Silva, C. L.; Merrill, R. A.; Pasko, V. P.
2015-12-01
A significant portion of the in-cloud lightning development is observed as a series of initial breakdown pulses (IBPs) that are characterized by an abrupt change in the electric field at a remote sensor. Recent experimental and theoretical studies have attributed this process to the stepwise elongation of an initial lightning leader inside the thunderstorm [da Silva and Pasko, JGR, 120, 4989-5009, 2015, and references therein]. Attempts to visually observe these events are hampered by the fact that clouds are opaque to optical radiation. For this reason, throughout the last decade, a number of researchers have used the so-called transmission line models (also commonly referred to as engineering models), widely employed for return stroke simulations, to simulate the waveshapes of IBPs, and also of narrow bipolar events. The transmission line (TL) model approach is to prescribe the source current dynamics in a certain manner to match the measured E-field change waveform, with the purpose of retrieving key information about the source, such as its height, peak current, size, speed of charge motion, etc. Although the TL matching method is not necessarily physics-driven, the estimated source characteristics can give insights on the dominant length- and time-scales, as well as on the energetics of the source. This contributes to a better understanding of the environment where the onset and early stages of lightning development take place. In the present work, we use numerical modeling to constrain the number of source parameters that can be confidently inferred from the observed far-field IBP waveforms. We compare different modified TL models (i.e., with different attenuation behaviors) to show that they tend to produce similar waveforms in conditions where the channel is short. We also demonstrate that it is impossible to simultaneously retrieve the speed of source current propagation and channel length from an observed IBP waveform, in contrast to what has been previously done in the literature. Finally, we demonstrate that the simulated field-to-current conversion factor in IBP sources can vary by more than one order of magnitude, making peak current estimates for intracloud lightning processes a challenging task.
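As a rough illustration of what a field-to-current conversion in a TL-type model looks like, the sketch below inverts the classic transmission-line far-field relation for a vertical channel over a perfectly conducting ground. It is not the authors' model of elevated in-cloud sources, whose conversion factor, as noted above, can vary by over an order of magnitude; all numbers are illustrative.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
C = 2.998e8           # speed of light, m/s

def tl_peak_current(e_peak_v_per_m: float, distance_m: float, speed_m_s: float) -> float:
    """Invert the simple TL far-field relation E = v*I/(2*pi*eps0*c^2*D) for the peak current (A)."""
    return 2.0 * math.pi * EPS0 * C**2 * distance_m * e_peak_v_per_m / speed_m_s

# Illustrative example: 6 V/m peak field at 100 km, assumed current-wave speed 1.5e8 m/s
print(tl_peak_current(6.0, 100e3, 1.5e8) / 1e3, "kA")   # roughly 20 kA
```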
Chigidi, Esther; Lungu, Edward M
2009-07-01
We formulate an HIV/AIDS deterministic model which incorporates differential infectivity and disease progression for treatment-naive and treatment-experienced HIV/AIDS infectives. To illustrate our model, we have applied it to estimate adult HIV prevalence, the HIV population, the number of new infectives, and the number of AIDS deaths for Botswana for the period 1984 to 2012. It is found that the prevalence peaked in the year 2000 and the HIV population is now decreasing. We have also found that, under the current conditions, the reproduction number is Rc ≈ 1.3, which is less than the 2004 estimate of Rc ≈ 4 by [11] and [13]. The results in this study suggest that the HAART program has yielded positive results for Botswana.
Hubbert's Peak: the Impending World oil Shortage
NASA Astrophysics Data System (ADS)
Deffeyes, K. S.
2004-12-01
Global oil production will probably reach a peak sometime during this decade. After the peak, the world's production of crude oil will fall, never to rise again. The world will not run out of energy, but developing alternative energy sources on a large scale will take at least 10 years. The slowdown in oil production may already be beginning; the current price fluctuations for crude oil and natural gas may be the preamble to a major crisis. In 1956, the geologist M. King Hubbert predicted that U.S. oil production would peak in the early 1970s. Almost everyone, inside and outside the oil industry, rejected Hubbert's analysis. The controversy raged until 1970, when the U.S. production of crude oil started to fall. Hubbert was right. Around 1995, several analysts began applying Hubbert's method to world oil production, and most of them estimate that the peak year for world oil will be between 2004 and 2008. These analyses were reported in some of the most widely circulated sources: Nature, Science, and Scientific American. None of our political leaders seem to be paying attention. If the predictions are correct, there will be enormous effects on the world economy. Even the poorest nations need fuel to run irrigation pumps. The industrialized nations will be bidding against one another for the dwindling oil supply. The good news is that we will put less carbon dioxide into the atmosphere. The bad news is that my pickup truck has a 25-gallon tank.
InP tunnel junctions for InP/InGaAs tandem solar cells
NASA Technical Reports Server (NTRS)
Vilela, Mauro F.; Freundlich, Alex; Renaud, P.; Medelci, N.; Bensaoula, A.
1996-01-01
We report, for the first time, an epitaxially grown InP p(+)/n(++) tunnel junction. A diode with peak current densities up to 1600 A/cm(exp 2) and maximum specific resistivities (Vp/Ip, peak voltage to peak current ratio) in the range of 10(exp -4) Omega cm(exp 2) is obtained. This peak current density is comparable to the highest results previously reported for lattice matched In(0.53)Ga(0.47)As tunnel junctions. Both results were obtained using chemical beam epitaxy (CBE). In this paper we discuss the electrical characteristics of these tunnel diodes and how the growth conditions influence them.
InP Tunnel Junctions for InP/InGaAs Tandem Solar Cells
NASA Technical Reports Server (NTRS)
Vilela, M. F.; Medelci, N.; Bensaoula, A.; Freundlich, A.; Renaud, P.
1995-01-01
We report, for the first time, an epitaxially grown InP p(+)/n(++) tunnel junction. A diode with peak current densities up to 1600 A/sq cm and maximum specific resistivities (Vp/Ip, peak voltage to peak current ratio) in the range of 10(exp -4) Ohm sq cm is obtained. This peak current density is comparable to the highest results previously reported for lattice matched In(0.53)Ga(0.47)As tunnel junctions. Both results were obtained using chemical beam epitaxy (CBE). In this paper we discuss the electrical characteristics of these tunnel diodes and how the growth conditions influence them.
Growth and characterization of high current density, high-speed InAs/AlSb resonant tunneling diodes
NASA Technical Reports Server (NTRS)
Soderstrom, J. R.; Brown, E. R.; Parker, C. D.; Mahoney, L. J.; Yao, J. Y.
1991-01-01
InAs/AlSb double-barrier resonant tunneling diodes with peak current densities up to 370,000 A/sq cm and high peak-to-valley current ratios of 3.2 at room temperature have been fabricated. The peak current density is well-explained by a stationary-state transport model with the two-band envelope function approximation. The valley current density predicted by this model is less than the experimental value by a factor that is typical of the discrepancy found in other double-barrier structures. It is concluded that threading dislocations are largely inactive in the resonant tunneling process.
Avilés Lucas, P; Dance, D R; Castellano, I A; Vañó, E
2005-01-01
The purpose of this work was to develop a method for estimating the patient peak entrance surface air kerma from measurements made with a pencil ionisation chamber on dosimetry phantoms exposed in a computed tomography (CT) scanner. The method described is especially relevant for CT fluoroscopy and CT perfusion procedures, where the peak entrance surface air kerma is the risk-related quantity of primary concern. Pencil ionisation chamber measurements include scattered radiation, which lies outside the primary radiation field and must be subtracted in order to derive the peak entrance surface air kerma. A Monte Carlo computer model has therefore been used to calculate correction factors, which may be applied to measurements of the CT dose index obtained using a pencil ionisation chamber in order to estimate the peak entrance surface air kerma. The calculations were made for beam widths of 5, 7, 10 and 20 mm, for seven positions of the phantom, and for the geometry of a GE HiSpeed CT/i scanner. The program was validated by comparing measurements and calculations of the CTDI for various vertical positions of the phantom and by directly estimating the peak entrance surface air kerma using the program. Both validations showed agreement within statistical uncertainties (standard deviation of 2.3% or less). For the GE machine, the correction factors vary by approximately 10% with slice width for a fixed phantom position, being largest for the 20 mm beam width; at that beam width they range from 0.87 when the phantom surface is at the isocentre to 1.23 when it is displaced vertically by 24 cm.
Glassman, E Katelyn; Hughes, Michelle L
2013-01-01
Current cochlear implants (CIs) have telemetry capabilities for measuring the electrically evoked compound action potential (ECAP). Neural Response Telemetry (Cochlear) and Neural Response Imaging (Advanced Bionics [AB]) can measure ECAP responses across a range of stimulus levels to obtain an amplitude growth function. Software-specific algorithms automatically mark the leading negative peak, N1, and the following positive peak/plateau, P2, and apply linear regression to estimate ECAP threshold. Alternatively, clinicians may apply expert judgments to modify the peak markers placed by the software algorithms, or use visual detection to identify the lowest level yielding a measurable ECAP response. The goals of this study were to: (1) assess the variability between human and computer decisions for (a) marking N1 and P2 and (b) determining linear-regression threshold (LRT) and visual-detection threshold (VDT); and (2) compare LRT and VDT methods within and across human- and computer-decision methods. ECAP amplitude-growth functions were measured for three electrodes in each of 20 ears (10 Cochlear Nucleus® 24RE/CI512, and 10 AB CII/90K). LRT, defined as the current level yielding an ECAP with zero amplitude, was calculated for both computer- (C-LRT) and human-picked peaks (H-LRT). VDT, defined as the lowest level resulting in a measurable ECAP response, was also calculated for both computer- (C-VDT) and human-picked peaks (H-VDT). Because Neural Response Imaging assigns peak markers to all waveforms but does not include waveforms with amplitudes less than 20 μV in its regression calculation, C-VDT for AB subjects was defined as the lowest current level yielding an amplitude of 20 μV or more. Overall, there were significant correlations between human and computer decisions for peak-marker placement, LRT, and VDT for both manufacturers (r = 0.78-1.00, p < 0.001). For Cochlear devices, LRT and VDT correlated equally well for both computer- and human-picked peaks (r = 0.98-0.99, p < 0.001), which likely reflects the well-defined Neural Response Telemetry algorithm and the lower noise floor in the 24RE and CI512 devices. For AB devices, correlations between LRT and VDT for both peak-picker methods were weaker than for Cochlear devices (r = 0.69-0.85, p < 0.001), which likely reflect the higher noise floor of the system. Disagreement between computer and human decisions regarding the presence of an ECAP response occurred for 5 % of traces for Cochlear devices and 2.1 % of traces for AB devices. Results indicate that human and computer peak-picking methods can be used with similar accuracy for both Cochlear and AB devices. Either C-VDT or C-LRT can be used with equal confidence for Cochlear 24RE and CI512 recipients because both methods are strongly correlated with human decisions. However, for AB devices, greater variability exists between different threshold-determination methods. This finding should be considered in the context of using ECAP measures to assist with programming CIs.
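A minimal sketch of the linear-regression-threshold idea described above, with hypothetical amplitude-growth data and without the manufacturer-specific peak-marking rules:

```python
import numpy as np

current_levels = np.array([180.0, 185.0, 190.0, 195.0, 200.0, 205.0])  # device current units
ecap_amplitudes = np.array([0.0, 15.0, 42.0, 70.0, 96.0, 125.0])       # microvolts

# Regress amplitude on level using only measurable responses, then extrapolate to 0 uV.
measurable = ecap_amplitudes > 0.0
slope, intercept = np.polyfit(current_levels[measurable], ecap_amplitudes[measurable], 1)

lrt = -intercept / slope                      # linear-regression threshold (zero-amplitude level)
vdt = current_levels[measurable].min()        # visual-detection-style threshold
print(f"LRT ~ {lrt:.1f} units, VDT = {vdt:.0f} units")
```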
"Ersatz" and "hybrid" NMR spectral estimates using the filter diagonalization method.
Ridge, Clark D; Shaka, A J
2009-03-12
The filter diagonalization method (FDM) is an efficient and elegant way to make a spectral estimate purely in terms of Lorentzian peaks. As NMR spectral peaks of liquids conform quite well to this model, the FDM spectral estimate can be accurate with far fewer time domain points than conventional discrete Fourier transform (DFT) processing. However, noise is not efficiently characterized by a finite number of Lorentzian peaks, or by any other analytical form, for that matter. As a result, noise can affect the FDM spectrum in different ways than it does the DFT spectrum, and the effect depends on the dimensionality of the spectrum. Regularization to suppress (or control) the influence of noise to give an "ersatz", or EFDM, spectrum is shown to sometimes miss weak features, prompting a more conservative implementation of filter diagonalization. The spectra obtained, called "hybrid" or HFDM spectra, are acquired by using regularized FDM to obtain an "infinite time" spectral estimate and then adding to it the difference between the DFT of the data and the finite time FDM estimate, over the same time interval. HFDM has a number of advantages compared to the EFDM spectra, where all features must be Lorentzian. They also show better resolution than DFT spectra. The HFDM spectrum is a reliable and robust way to try to extract more information from noisy, truncated data records and is less sensitive to the choice of regularization parameter. In multidimensional NMR of liquids, HFDM is a conservative way to handle the problems of noise, truncation, and spectral peaks that depart significantly from the model of a multidimensional Lorentzian peak.
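The hybrid combination can be sketched numerically if one assumes the FDM fit has already produced a Lorentzian line list. In the toy example below (synthetic data, hypothetical line parameters, and no regularized FDM fit), the hybrid estimate is formed exactly as described: the analytic infinite-time model spectrum plus the difference between the DFT of the data and the finite-time DFT of the model.

```python
import numpy as np

n, dt = 512, 1e-3                      # record length and dwell time (assumed)
t = np.arange(n) * dt
freqs = np.fft.fftfreq(n, dt)

# Hypothetical FDM line list: (amplitude, frequency in Hz, decay rate in 1/s)
lines = [(1.0, 120.0, 8.0), (0.4, 205.0, 15.0)]

def model_fid(time):
    """Time-domain signal implied by the Lorentzian line list."""
    return sum(a * np.exp((2j * np.pi * f - r) * time) for a, f, r in lines)

def model_spectrum_infinite(freq):
    """Analytic 'infinite time' spectrum of the same line list (sum of Lorentzians)."""
    return sum(a / (r + 2j * np.pi * (freq - f)) for a, f, r in lines)

rng = np.random.default_rng(0)
data = model_fid(t) + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

dft_data = np.fft.fft(data) * dt            # finite-time DFT of the measured data
dft_model = np.fft.fft(model_fid(t)) * dt   # finite-time DFT of the fitted model
hybrid = model_spectrum_infinite(freqs) + (dft_data - dft_model)
print(f"strongest hybrid peak near {freqs[np.argmax(np.abs(hybrid))]:.0f} Hz")
```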
Annual peak streamflow and ancillary data for small watersheds in central and western Texas
Harwell, Glenn R.; Asquith, William H.
2011-01-01
Estimates of annual peak-streamflow frequency are needed for flood-plain management, assessment of flood risk, and design of structures, such as roads, bridges, culverts, dams, and levees. Regional regression equations have been developed and are used extensively to estimate annual peak-streamflow frequency for ungaged sites in natural (unregulated and rural or nonurbanized) watersheds in Texas (Asquith and Slade, 1997; Asquith and Thompson, 2008; Asquith and Roussel, 2009). The most recent regional regression equations were developed by using data from 638 Texas streamflow-gaging stations throughout the State with eight or more years of data, using drainage area, channel slope, and mean annual precipitation as predictor variables (Asquith and Roussel, 2009). However, because of a lack of sufficient historical streamflow data from small, rural watersheds in certain parts of the State (central and western), substantial uncertainty exists when using the regional regression equations for the purpose of estimating annual peak-streamflow frequency.
NASA Astrophysics Data System (ADS)
Zhou, Ping; Zev Rymer, William
2004-12-01
The number of motor unit action potentials (MUAPs) appearing in the surface electromyogram (EMG) signal is directly related to motor unit recruitment and firing rates and therefore offers potentially valuable information about the level of activation of the motoneuron pool. In this paper, based on morphological features of the surface MUAPs, we try to estimate the number of MUAPs present in the surface EMG by counting the negative peaks in the signal. Several signal processing procedures are applied to the surface EMG to facilitate this peak counting process. The MUAP number estimation performance of this approach is first illustrated using surface EMG simulations. Then, by evaluating the peak counting results from EMG records detected by a very selective surface electrode at different contraction levels of the first dorsal interosseous (FDI) muscle, the utility and limitations of such direct peak counts for MUAP number estimation in surface EMG are further explored.
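A minimal sketch of the general negative-peak-counting approach (not the authors' exact processing chain), using a synthetic signal; the sampling rate, filter band, and threshold rule are conventional choices assumed here for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 2000.0                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1.0 / fs)
emg = 0.02 * rng.standard_normal(t.size)      # background noise
for spike_time in rng.uniform(0, 2.0, size=60):
    # add synthetic negative-going MUAP-like spikes
    emg -= 0.3 * np.exp(-((t - spike_time) * fs / 3.0) ** 2)

b, a = butter(4, [20, 500], btype="bandpass", fs=fs)   # conventional surface-EMG band
filtered = filtfilt(b, a, emg)

noise_sigma = np.median(np.abs(filtered)) / 0.6745     # robust noise estimate
peaks, _ = find_peaks(-filtered, height=3.0 * noise_sigma, distance=int(0.003 * fs))
print(f"estimated number of negative peaks (MUAP count proxy): {peaks.size}")
```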
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, there is no study that establishes the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the combination of the available features that offers the best peak detection and the highest classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a lower-variance model.
Traveltime and longitudinal dispersion in Illinois streams
Graf, Julia B.
1986-01-01
Twenty-seven measurements of traveltime and longitudinal dispersion in 10 Illinois streams made from 1975 to 1982 provide data needed for estimating traveltime of peak concentration of a conservative solute, traveltime of the leading edge of a solute cloud, peak concentration resulting from injection of a given quantity of solute, and passage time of solute past a given point on a stream. These four variables can be estimated graphically for each stream from distance of travel and either discharge at the downstream end of the reach or flow-duration frequency. From equations developed from field measurements, the traveltime and dispersion characteristics also can be estimated for other unregulated streams in Illinois that have drainage areas less than about 1,500 square miles. For unmeasured streams, traveltime of peak concentration and of the leading edge of the cloud are related to discharge at the downstream end of the reach and to distance of travel. For both measured and unmeasured streams, peak concentration and passage time are best estimated from the relation of each to traveltime. In measured streams, dispersion efficiency is greater than that predicted by Fickian diffusion theory. The rate of decrease in peak concentration with traveltime is about equal to the rate of increase in passage time. Average velocity in a stream reach, given by the velocity of the center of solute mass in that reach, can be estimated from an equation developed from measured values. The equation relates average reach velocity to discharge at the downstream end of the reach. Average reach velocities computed for 9 of the 10 streams from available equations that are based on hydraulic-geometry relations are high relative to measured values. The estimating equation developed from measured velocities provides estimates of average reach velocity that are closer to measured velocities than are those computed using equations developed from hydraulic-geometry relations.
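The report's relations are presented graphically and as fitted equations; the sketch below only illustrates the general form of such a fit, a power law between average reach velocity and discharge used to get a rough peak traveltime, with made-up data and coefficients that are not those of the report.

```python
import numpy as np

discharge_cfs = np.array([50.0, 120.0, 300.0, 700.0, 1500.0])   # measured Q at downstream end
velocity_fps = np.array([0.55, 0.80, 1.15, 1.60, 2.10])         # measured average reach velocity

# Fit V = a * Q^b by linear regression in log10 space
b, log_a = np.polyfit(np.log10(discharge_cfs), np.log10(velocity_fps), 1)
a = 10.0 ** log_a

def peak_traveltime_hours(distance_miles: float, q_cfs: float) -> float:
    """Rough traveltime of the peak over a reach, from the fitted velocity relation."""
    velocity_fps_est = a * q_cfs ** b
    return distance_miles * 5280.0 / velocity_fps_est / 3600.0

print(f"fitted relation: V = {a:.3f} * Q^{b:.2f} (ft/s, cfs)")
print(f"traveltime over 20 miles at Q = 400 cfs: about {peak_traveltime_hours(20.0, 400.0):.0f} hours")
```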
Using Caspar Creek flow records to test peak flow estimation methods applicable to crossing design
Peter H. Cafferata; Leslie M. Reid
2017-01-01
Long-term flow records from sub-watersheds in the Caspar Creek Experimental Watersheds were used to test the accuracy of four methods commonly used to estimate peak flows in small forested watersheds: the Rational Method, the updated USGS Magnitude and Frequency Method, flow transference methods, and the NRCS curve number method. Comparison of measured and calculated...
Time-Domain Receiver Function Deconvolution using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moreira, L. P.
2017-12-01
Receiver Functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces by applying a deconvolution to the radial and vertical seismogram components. A popular method uses a spectral division of the two components, which requires human intervention to apply the water-level procedure and avoid instabilities from division by small numbers. One of the most used methods is an iterative procedure that estimates the RF peaks, convolves them with the vertical component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak estimation failure. In this work we propose a deconvolution algorithm that uses a Genetic Algorithm (GA) to estimate the RF peaks. The method is carried out entirely in the time domain, avoiding the time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in a seismogram format for visualization. The RF trace quality is similar for high-magnitude events, while there are fewer failures in the RF calculation for smaller events, increasing the overall performance for a high number of events per station.
Probable flood predictions in ungauged coastal basins of El Salvador
Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.
2008-01-01
A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated, revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.
Allstadt, Kate E.; Thompson, Eric M.; Hearne, Mike; Nowicki Jessee, M. Anna; Zhu, J.; Wald, David J.; Tanyas, Hakan
2017-01-01
The U.S. Geological Survey (USGS) has made significant progress toward the rapid estimation of shaking and shaking-related losses through their Did You Feel It? (DYFI), ShakeMap, ShakeCast, and PAGER products. However, quantitative estimates of the extent and severity of secondary hazards (e.g., landsliding, liquefaction) are not currently included in scenarios and real-time post-earthquake products despite their significant contributions to hazard and losses for many events worldwide. We are currently running parallel global statistical models for landslides and liquefaction developed with our collaborators in testing mode, but much work remains in order to operationalize these systems. We are expanding our efforts in this area by not only improving the existing statistical models, but also by (1) exploring more sophisticated, physics-based models where feasible; (2) incorporating uncertainties; and (3) identifying and undertaking research and product development to provide useful landslide and liquefaction estimates and their uncertainties. Although our existing models use standard predictor variables that are accessible globally or regionally, including peak ground motions, topographic slope, and distance to water bodies, we continue to explore readily available proxies for rock and soil strength as well as other susceptibility terms. This work is based on the foundation of an expanding, openly available, case-history database we are compiling along with historical ShakeMaps for each event. The expected outcome of our efforts is a robust set of real-time secondary hazards products that meet the needs of a wide variety of earthquake information users. We describe the available datasets and models, developments currently underway, and anticipated products.
Review and Analysis of Peak Tracking Techniques for Fiber Bragg Grating Sensors
2017-01-01
Fiber Bragg Grating (FBG) sensors are among the most popular elements for fiber optic sensor networks used for the direct measurement of temperature and strain. Modern FBG interrogation setups measure the FBG spectrum in real-time and determine the shift of the Bragg wavelength of the FBG in order to estimate the physical parameters. The problem of determining the peak wavelength of the FBG from a spectral measurement limited in resolution and noise is referred to as the peak-tracking problem. In this work, the peak-tracking approaches are reviewed and classified, outlining their algorithmic implementations: methods based on direct estimation, interpolation, correlation, resampling, transforms, and optimization are discussed in all their proposed implementations. Then, a simulation based on coupled-mode theory compares the performance of the main peak-tracking methods in terms of accuracy and signal-to-noise ratio resilience. PMID:29039804
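Two of the simplest estimator families covered by the review, a direct maximum refined by parabolic interpolation and a power-weighted centroid, can be sketched as follows on a synthetic spectrum; this is not the paper's coupled-mode simulation, and the grating parameters are assumptions.

```python
import numpy as np

wavelengths = np.linspace(1549.0, 1551.0, 401)          # nm
true_peak = 1550.123                                     # nm, synthetic Bragg wavelength
spectrum = np.exp(-((wavelengths - true_peak) / 0.08) ** 2)
spectrum += 0.01 * np.random.default_rng(2).standard_normal(wavelengths.size)

def parabolic_peak(wl, s):
    """Direct maximum refined by a parabola through the top three samples."""
    i = int(np.argmax(s))
    y0, y1, y2 = s[i - 1], s[i], s[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)       # sub-sample offset in bins
    return wl[i] + delta * (wl[1] - wl[0])

def centroid_peak(wl, s, threshold=0.5):
    """Power-weighted centroid of samples above a fraction of the maximum."""
    mask = s >= threshold * s.max()
    return float(np.sum(wl[mask] * s[mask]) / np.sum(s[mask]))

print(f"true {true_peak:.4f} nm | parabolic {parabolic_peak(wavelengths, spectrum):.4f} nm | "
      f"centroid {centroid_peak(wavelengths, spectrum):.4f} nm")
```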
Amplification of postwildfire peak flow by debris
NASA Astrophysics Data System (ADS)
Kean, J. W.; McGuire, L. A.; Rengers, F. K.; Smith, J. B.; Staley, D. M.
2016-08-01
In burned steeplands, the peak depth and discharge of postwildfire runoff can substantially increase from the addition of debris. Yet methods to estimate the increase over water flow are lacking. We quantified the potential amplification of peak stage and discharge using video observations of postwildfire runoff, compiled data on postwildfire peak flow (Qp), and a physically based model. Comparison of flood and debris flow data with similar distributions in drainage area (A) and rainfall intensity (I) showed that the median runoff coefficient (C = Qp/AI) of debris flows is 50 times greater than that of floods. The striking increase in Qp can be explained using a fully predictive model that describes the additional flow resistance caused by the emergence of coarse-grained surge fronts. The model provides estimates of the amplification of peak depth, discharge, and shear stress needed for assessing postwildfire hazards and constraining models of bedrock incision.
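The runoff-coefficient comparison reduces to a unit-consistent ratio, as in the hypothetical sketch below; the event values are made up, and only the form C = Qp/AI comes from the abstract.

```python
import numpy as np

def runoff_coefficient(qp_m3s, area_km2, intensity_mm_hr):
    """C = Qp / (A * I), with A*I converted to m^3/s so the ratio is dimensionless."""
    rainfall_rate_m3s = area_km2 * 1e6 * (intensity_mm_hr / 1000.0) / 3600.0
    return qp_m3s / rainfall_rate_m3s

# Hypothetical events: (peak discharge m^3/s, drainage area km^2, rainfall intensity mm/h)
floods = [(2.0, 1.0, 30.0), (5.0, 3.0, 25.0), (1.2, 0.5, 40.0)]
debris_flows = [(110.0, 1.2, 28.0), (280.0, 2.5, 35.0), (65.0, 0.6, 30.0)]

c_flood = np.median([runoff_coefficient(*e) for e in floods])
c_debris = np.median([runoff_coefficient(*e) for e in debris_flows])
print(f"median C (floods) = {c_flood:.2f}, median C (debris flows) = {c_debris:.1f}, "
      f"ratio ~ {c_debris / c_flood:.0f}x")
```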
Novel angle estimation for bistatic MIMO radar using an improved MUSIC
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Xiaofei; Chen, Han
2014-09-01
In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids a two-dimensional peak search. The simulation results verify the effectiveness and improvement of the algorithm.
Guler, Muhammet; Turkoglu, Vedat; Kivrak, Arif
2015-08-01
In this study, the electrochemical behavior of glucose oxidase (GOx) immobilized on poly([2,2';5',2″]-terthiophene-3'-carbaldehyde) (poly(TTP))-modified glassy carbon electrode (GCE) was investigated. The biosensor (poly(TTP)/GOx/GCE) showed a pair of redox peaks in 0.1 M phosphate buffer (pH 7.4) solution in the absence of oxygen, the co-substrate of GOx. Here, the poly(TTP)/GOx/GCE biosensor acts as the co-substrate instead of oxygen. Upon the addition of glucose, the reduction and oxidation peak currents increased until the active site of GOx was fully saturated with glucose. The apparent Km was estimated to be 26.13 mM from the Lineweaver-Burk plot. The biosensor displayed good stability and bioactivity. The biosensor showed a high sensitivity (56.1 nA/mM), a linear range from 0.5 to 20.15 mM, and a good reproducibility, with a relative standard deviation of 3.6%. In addition, the interference currents of glycine, ascorbic acid, histidine, uric acid, dopamine, arginine, and fructose on the GOx biosensor were investigated. All of these substances exhibited interference currents under 10%. No marked difference was found between the GOx biosensor and spectrophotometric measurement of glucose in serum samples. UV-visible spectroscopy and scanning electron microscopy (SEM) experiments on the biosensor were also performed. Copyright © 2015 Elsevier B.V. All rights reserved.
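A small sketch of how an apparent Km is read off a Lineweaver-Burk plot, using hypothetical glucose/current pairs rather than the paper's measurements:

```python
import numpy as np

glucose_mM = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
current_nA = np.array([37.0, 71.0, 161.0, 278.0, 435.0])   # hypothetical, roughly Michaelis-Menten

# Lineweaver-Burk: 1/i = (Km/Imax)*(1/S) + 1/Imax, so Km = slope/intercept and Imax = 1/intercept
slope, intercept = np.polyfit(1.0 / glucose_mM, 1.0 / current_nA, 1)
print(f"apparent Km ~ {slope / intercept:.0f} mM, I_max ~ {1.0 / intercept:.0f} nA")
```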
Koltun, G.F.
2009-01-01
This report describes the results of a study to determine frequency characteristics of postregulation annual peak flows at streamflow-gaging stations at or near the Lockington, Taylorsville, Englewood, Huffman, and Germantown dry dams in the Miami Conservancy District flood-protection system (southwestern Ohio) and five other streamflow-gaging stations in the Great Miami River Basin further downstream from one or more of the dams. In addition, this report describes frequency characteristics of annual peak elevations of the dry-dam pools. In most cases, log-Pearson Type III distributions were fit to postregulation annual peak-flow values through 2007 (the most recent year of published peak-flow values at the time of this analysis) and annual peak dam-pool storage values for the period 1922-2008 to determine peaks with recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. For one streamflow-gaging station (03272100) with a short period of record, frequency characteristics were estimated by means of a process involving interpolation of peak-flow yields determined for an upstream and downstream gage. Once storages had been estimated for the various recurrence intervals, corresponding dam-pool elevations were determined from elevation-storage ratings provided by the Miami Conservancy District.
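A bare-bones sketch of a log-Pearson Type III frequency fit (method of moments on log10 peaks, without the Bulletin 17B skew weighting, historical-peak adjustments, or low-outlier screening used in practice), applied to a synthetic record:

```python
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(3)
annual_peaks_cfs = np.exp(rng.normal(np.log(5000.0), 0.5, size=60))   # synthetic peak-flow record

logs = np.log10(annual_peaks_cfs)
dist = pearson3(skew(logs, bias=False), loc=logs.mean(), scale=logs.std(ddof=1))

for recurrence in (2, 10, 50, 100, 500):
    q = 10.0 ** dist.ppf(1.0 - 1.0 / recurrence)      # annual exceedance probability = 1/T
    print(f"{recurrence:>3}-year peak flow ~ {q:,.0f} cfs")
```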
NASA Astrophysics Data System (ADS)
Fiechter, Jerome; Edwards, Christopher A.; Moore, Andrew M.
2018-04-01
A physical-biogeochemical model is used to produce a retrospective analysis at 3-km resolution of alongshore phytoplankton variability in the California Current during 1988-2010. The simulation benefits from downscaling a regional circulation reanalysis, which provides improved physical ocean state estimates in the high-resolution domain. The emerging pattern is one of local upwelling intensification in response to increased alongshore wind stress in the lee of capes, modulated by alongshore meanders in the geostrophic circulation. While stronger upwelling occurs near most major topographic features, substantial increases in phytoplankton biomass only ensue where local circulation patterns are conducive to on-shelf retention of upwelled nutrients. Locations of peak nutrient delivery and chlorophyll accumulation also exhibit interannual variability and trends noticeably larger than the surrounding shelf regions, thereby suggesting that long-term planktonic ecosystem response in the California Current exhibits a significant local scale (O(100 km)) alongshore component.
Attachment process in rocket-triggered lightning strokes
NASA Astrophysics Data System (ADS)
Wang, D.; Rakov, V. A.; Uman, M. A.; Takagi, N.; Watanabe, T.; Crawford, D. E.; Rambo, K. J.; Schnetzer, G. H.; Fisher, R. J.; Kawasaki, Z.-I.
1999-01-01
In order to study the lightning attachment process, we have obtained highly resolved (about 100 ns time resolution and about 3.6 m spatial resolution) optical images, electric field measurements, and channel-base current recordings for two dart leader/return-stroke sequences in two lightning flashes triggered using the rocket-and-wire technique at Camp Blanding, Florida. One of these two sequences exhibited an optically discernible upward-propagating discharge that occurred in response to the approaching downward-moving dart leader and connected to this descending leader. This observation provides the first direct evidence of the occurrence of upward connecting discharges in triggered lightning strokes, these strokes being similar to subsequent strokes in natural lightning. The observed upward connecting discharge had a light intensity one order of magnitude lower than its associated downward dart leader, a length of 7-11 m, and a duration of several hundred nanoseconds. The speed of the upward connecting discharge was estimated to be about 2 × 10^7 m/s, which is comparable to that of the downward dart leader. In both dart leader/return-stroke sequences studied, the return stroke was inferred to start at the point of junction between the downward dart leader and the upward connecting discharge and to propagate in both upward and downward directions. This latter inference provides indirect evidence of the occurrence of upward connecting discharges in both dart leader/return-stroke sequences even though one of these sequences did not have a discernible optical image of such a discharge. The length of the upward connecting discharges (observed in one case and inferred from the height of the return-stroke starting point in the other case) is greater for the event that is characterized by the larger leader electric field change and the higher return-stroke peak current. For the two dart leader/return-stroke sequences studied, the upward connecting discharge lengths are estimated to be 7-11 m and 4-7 m, with the corresponding return-stroke peak currents being 21 kA and 12 kA, and the corresponding leader electric field changes 30 m from the rocket launcher being 56 kV/m and 43 kV/m. Additionally, we note that the downward dart leader light pulse generally exhibits little variation in its 10-90% risetime and peak value over some tens of meters above the return-stroke starting point, while the following return-stroke light pulse shows an appreciable increase in risetime and a decrease in peak value while traversing the same section of the lightning channel. Our findings regarding (1) the initially bidirectional development of the return-stroke process and (2) the relatively strong attenuation of the upward moving return-stroke light (and by inference current) pulse over the first some tens of meters of the channel may have important implications for return-stroke modeling.
NASA Astrophysics Data System (ADS)
Barbieux, Kévin; Nouchi, Vincent; Merminod, Bertrand
2016-10-01
Retrieving the water-leaving reflectance from airborne hyperspectral data involves three steps. Firstly, the radiance recorded by an airborne sensor comes from several sources: the real radiance of the object, atmospheric scattering, sky and sun glint, and the dark current of the sensor. Secondly, the dispersive element inside the sensor (usually a diffraction grating or a prism) can move during the flight, thus shifting the observed spectra along the wavelength axis. Thirdly, to compute the reflectance, it is necessary to estimate, for each band, what value of irradiance corresponds to a 100% reflectance. We present here our calibration method, which relies on the absorption features of the atmosphere and the near-infrared properties of common materials. By choosing proper flight height and flight-line angles, we can ignore atmospheric and sun glint contributions. Autocorrelation plots allow us to identify and reduce the noise in our signals. Then, we compute a signal that represents the high frequencies of the spectrum to localize the atmospheric absorption peaks (mainly the dioxygen peak around 760 nm). Matching these peaks removes the shift induced by the moving dispersive element. Finally, we use the signal collected over a Lambertian, unit-reflectance surface to estimate the ratio of the system's transmittances to its near-infrared transmittance. This transmittance is computed assuming an average 50% reflectance of the vegetation and nearly 0% for water in the near-infrared. Results show good agreement between the output spectra and ground measurements from a TriOS Ramses and the Water Insight WISP-3.
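The peak-matching step can be sketched as follows; the spectra are synthetic, and the band grid, background shape, and dip depth are assumptions rather than flight data.

```python
import numpy as np

wl = np.arange(700.0, 820.0, 0.5)                  # nm, assumed band centers
o2_ref_nm = 760.0

def synth_spectrum(shift_nm):
    """Slowly varying background minus a Gaussian O2 absorption dip."""
    background = 1.0 - 0.002 * (wl - 700.0)
    absorption = 0.25 * np.exp(-((wl - (o2_ref_nm + shift_nm)) / 1.5) ** 2)
    return background - absorption

def o2_dip_position(spectrum):
    """Locate the absorption dip after removing a smooth cubic background."""
    smooth = np.polyval(np.polyfit(wl, spectrum, 3), wl)
    window = (wl > 745.0) & (wl < 775.0)           # search around the O2 feature near 760 nm
    return wl[window][np.argmin((spectrum - smooth)[window])]

reference = synth_spectrum(0.0)
observed = synth_spectrum(2.5)                      # spectrum shifted by +2.5 nm
shift = o2_dip_position(observed) - o2_dip_position(reference)
print(f"estimated spectral shift: {shift:+.1f} nm")   # ~ +2.5 nm
```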
Radar probing of surfactant films on the water surface using dual co-polarized SAR
NASA Astrophysics Data System (ADS)
Ermakov, S.; da Silva, J. C. B.; Kapustin, I.; Molkov, A.; Sergievskaya, I.; Shomina, O.
2016-10-01
NASA Astrophysics Data System (ADS)
Schwanghart, Wolfgang; Worni, Raphael; Huggel, Christian; Stoffel, Markus; Korup, Oliver
2016-07-01
Himalayan water resources attract a rapidly growing number of hydroelectric power projects (HPP) to satisfy Asia’s soaring energy demands. Yet HPP operating or planned in steep, glacier-fed mountain rivers face hazards from glacial lake outburst floods (GLOFs) that can damage hydropower infrastructure, alter water and sediment yields, and compromise livelihoods downstream. Detailed appraisals of such GLOF hazards are limited to case studies, however, and a more comprehensive, systematic analysis remains elusive. To this end we estimate the regional exposure of 257 Himalayan HPP to GLOFs, using a flood-wave propagation model fed by Monte Carlo-derived outburst volumes of >2300 glacial lakes. We interpret the spread of the modeled peak discharges as a predictive uncertainty that arises mainly from outburst volumes and dam-breach rates that are difficult to assess before dams fail. With 66% of sampled HPP on potential GLOF tracks, up to one third of these HPP could experience GLOF discharges well above local design floods, as hydropower development continues to seek higher sites closer to glacial lakes. We compute that this systematic push of HPP into headwaters effectively doubles the uncertainty about GLOF peak discharge in these locations. Peak discharges farther downstream, in contrast, are easier to predict because GLOF waves attenuate rapidly. Considering this systematic pattern of regional GLOF exposure might aid the site selection of future Himalayan HPP. Our method can augment, and help to regularly update, current hazard assessments, given that global warming is likely changing the number and size of Himalayan meltwater lakes.
The photocurrent, noise and spectral sensitivity of rods of the monkey Macaca fascicularis.
Baylor, D A; Nunn, B J; Schnapf, J L
1984-01-01
Visual transduction in rods of the cynomolgus monkey, Macaca fascicularis, was studied by recording membrane current from single outer segments projecting from small pieces of retina. Light flashes evoked transient outward-going photocurrents with saturating amplitudes of up to 34 pA. A flash causing twenty to fifty photoisomerizations gave a response of half the saturating amplitude. The response-stimulus relation was of the form 1 - e^(-x), where x is flash strength. The response to a dim flash usually had a time to peak of 150-250 ms and resembled the impulse response of a series of six low-pass filters. From the average spectral sensitivity of ten rods the rhodopsin was estimated to have a peak absorption near 491 nm. The spectral sensitivity of the rods was in good agreement with the average human scotopic visibility curve determined by Crawford (1949), when the human curve was corrected for lens absorption and self-screening of rhodopsin. Fluctuations in the photocurrent evoked by dim lights were consistent with a quantal event about 0.7 pA in peak amplitude. A steady light causing about 100 photoisomerizations s^-1 reduced the flash sensitivity to half the dark-adapted value. At higher background levels the rod rapidly saturated. These results support the idea that dim background light desensitizes human scotopic vision by a mechanism central to the rod outer segments while scotopic saturation may occur within the outer segments. Recovery of the photocurrent after bright flashes was marked by quantized step-like events. The events had the properties expected if bleached rhodopsin in the disks occasionally caused an abrupt blockage of the dark current over about one-twentieth of the length of the outer segment. It is suggested that superposition of these events after bleaching may contribute to the threshold elevation measured psychophysically. The current in darkness showed random fluctuations which disappeared in bright light. The continuous component of the noise had a variance of about 0.03 pA^2 and a power spectrum that fell to half near 3 Hz. A second component, consisting of discrete events resembling single-photon responses, was estimated to occur at a rate of 0.006 s^-1. It is suggested that the continuous component of the noise may be removed from scotopic vision by a thresholding operation near the rod output. PMID:6512705
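The quoted saturation relation can be illustrated numerically; the flash-strength scale below is a hypothetical value chosen only so that half-saturation falls in the reported 20-50 photoisomerization range.

```python
import numpy as np

r_max_pA = 34.0          # saturating photocurrent quoted in the abstract
x0 = 50.0                # hypothetical flash-strength scale, photoisomerizations

def response_pA(flash_photoisomerizations):
    """Saturation relation r = r_max * (1 - exp(-x/x0))."""
    return r_max_pA * (1.0 - np.exp(-flash_photoisomerizations / x0))

x_half = x0 * np.log(2.0)    # flash strength giving a half-maximal response
print(f"half-saturating flash ~ {x_half:.0f} photoisomerizations "
      f"(response {response_pA(x_half):.1f} pA of {r_max_pA} pA max)")
```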
Comparison of Peak-Flow Estimation Methods for Small Drainage Basins in Maine
Hodgkins, Glenn A.; Hebson, Charles; Lombard, Pamela J.; Mann, Alexander
2007-01-01
Understanding the accuracy of commonly used methods for estimating peak streamflows is important because the designs of bridges, culverts, and other river structures are based on these flows. Different methods for estimating peak streamflows were analyzed for small drainage basins in Maine. For the smallest basins, with drainage areas of 0.2 to 1.0 square mile, nine peak streamflows from actual rainfall events at four crest-stage gaging stations were modeled by the Rational Method and the Natural Resources Conservation Service TR-20 method and compared to observed peak flows. The Rational Method had a root mean square error (RMSE) of -69.7 to 230 percent (which means that approximately two thirds of the modeled flows were within -69.7 to 230 percent of the observed flows). The TR-20 method had an RMSE of -98.0 to 5,010 percent. Both the Rational Method and TR-20 underestimated the observed flows in most cases. For small basins, with drainage areas of 1.0 to 10 square miles, modeled peak flows were compared to observed statistical peak flows with return periods of 2, 50, and 100 years for 17 streams in Maine and adjoining parts of New Hampshire. Peak flows were modeled by the Rational Method, the Natural Resources Conservation Service TR-20 method, U.S. Geological Survey regression equations, and the Probabilistic Rational Method. The regression equations were the most accurate method of computing peak flows in Maine for streams with drainage areas of 1.0 to 10 square miles with an RMSE of -34.3 to 52.2 percent for 50-year peak flows. The Probabilistic Rational Method was the next most accurate method (-38.5 to 62.6 percent). The Rational Method (-56.1 to 128 percent) and particularly the TR-20 method (-76.4 to 323 percent) had much larger errors. Both the TR-20 and regression methods had similar numbers of underpredictions and overpredictions. The Rational Method overpredicted most peak flows and the Probabilistic Rational Method tended to overpredict peak flows from the smaller (less than 5 square miles) drainage basins and underpredict peak flows from larger drainage basins. The results of this study are consistent with the most comprehensive analysis of observed and modeled peak streamflows in the United States, which analyzed statistical peak flows from 70 drainage basins in the Midwest and the Northwest.
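For reference, the Rational Method itself is a one-line formula, Q = CiA; the sketch below applies it to hypothetical events and mirrors the percent-error style of comparison (it does not reproduce the study's asymmetric RMSE statistic, and all inputs are made up).

```python
import numpy as np

def rational_method_cfs(c_runoff: float, intensity_in_hr: float, area_acres: float) -> float:
    """Q = C * i * A; Q is in cfs when i is in in/hr and A in acres (the 1.008 unit factor is dropped)."""
    return c_runoff * intensity_in_hr * area_acres

# Hypothetical observed vs. modeled peaks (cfs) for a handful of events
observed = np.array([12.0, 35.0, 8.0, 60.0])
modeled = np.array([rational_method_cfs(0.30, 2.0, 25.0),    # 15 cfs
                    rational_method_cfs(0.35, 2.5, 30.0),    # 26.25 cfs
                    rational_method_cfs(0.30, 1.5, 20.0),    # 9 cfs
                    rational_method_cfs(0.40, 3.0, 40.0)])   # 48 cfs

percent_error = 100.0 * (modeled - observed) / observed
print("percent errors:", np.round(percent_error, 1))
print("RMSE of percent error:", round(float(np.sqrt(np.mean(percent_error ** 2))), 1))
```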
Wan, Jinjin; He, Fangli; Zhao, Yongfeng; Zhang, Hongmei; Zhou, Xiaodong; Wan, Mingxi
2014-03-01
The aim of this work was to develop a convenient method for radial/circumferential strain imaging and shear rate estimation that could be used as a supplement to the current routine screening for carotid atherosclerosis using video images of diagnostic ultrasound. A reflection model-based correction for gray-scale non-uniform distribution was applied to B-mode video images before strain estimation to improve the accuracy of radial/circumferential strain imaging when applied to vessel transverse cross sections. The incremental and cumulative radial/circumferential strain images can then be calculated based on the displacement field between consecutive B-mode images. Finally, the transverse Doppler spectra acquired at different depths along the vessel diameter were used to construct the spatially matched instantaneous wall shear values in a cardiac cycle. Vessel phantom simulation results revealed that the signal-to-noise ratio and contrast-to-noise ratio of the radial and circumferential strain images were increased by 2.8 and 5.9 dB and by 2.3 and 4.4 dB, respectively, after non-uniform correction. Preliminary results for 17 patients indicated that the accuracy of radial/circumferential strain images was improved in the lateral direction after non-uniform correction. The peak-to-peak value of incremental strain and the maximum cumulative strain for calcified plaques are evidently lower than those for other plaque types, and the echolucent plaques had higher values, on average, than the mixed plaques. Moreover, low oscillating wall shear rate values, found near the plaque and stenosis regions, are closely related to plaque formation. In conclusion, the method described can provide additional valuable results as a supplement to the current routine ultrasound examination for carotid atherosclerosis and, therefore, has significant potential as a feasible screening method for atherosclerosis diagnosis in the future. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Amey, David L.; Degner, Michael W.
2002-01-01
A method for reducing the starting time and reducing the peak phase currents for an internal combustion engine that is started using an induction machine starter/alternator. The starting time is reduced by pre-fluxing the induction machine and the peak phase currents are reduced by reducing the flux current command after a predetermined period of time has elapsed and concurrent to the application of the torque current command. The method of the present invention also provides a strategy for anticipating the start command for an internal combustion engine and determines a start strategy based on the start command and the operating state of the internal combustion engine.
Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska
Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.
1999-01-01
Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1-percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence. Results indicated that data from new stations, rather than more data from existing stations, probably would produce the greatest reduction in average sampling errors of the equations.
NASA Astrophysics Data System (ADS)
Xie, Y.; Wilson, A. M.
2017-12-01
Plant phenology studies typically focus on the beginning and end of the growing season in temperate forests. We know too little about fall foliage peak coloration, which is a bioindicator of plant response in autumn to environmental changes, an important visual cue in fall associated with animal activities, and a key element in fall foliage ecotourism. Spatiotemporal changes in the timing of fall foliage peak coloration of temperate forests and the associated environmental controls are not well understood. In this study, we examined multiple color indices to estimate the Land Surface Phenology (LSP) of fall foliage peak coloration of deciduous forest in the northeastern USA using Moderate Resolution Imaging Spectroradiometer (MODIS) daily imagery from 2000 to 2015. We used long-term phenology ground observations to validate our estimated LSP, and found that the Visible Atmospherically Resistant Index (VARI) and the Plant Senescence Reflectance Index (PSRI) were good metrics to estimate the peak and end of the leaf coloration period of deciduous forest. During the past 16 years, the length of the period with peak fall foliage color of deciduous forest in the southern New England and northern Appalachian forest regions became longer (by 0.3 to 7.7 days), mainly driven by earlier peak coloration. Northern New England, the southern Appalachian forests, and the Ozark and Ouachita mountains areas had a shorter period (by 0.2 to 9.2 days), mainly due to an earlier end of leaf coloration. Changes in peak and end of leaf coloration were associated not only with changing temperature in spring and fall, but also with drought and heat in summer and with heavy precipitation in both summer and fall. The associations between leaf peak coloration phenology and climatic variations were not consistent among ecoregions. Our findings suggest divergent change patterns in fall foliage peak coloration phenology in deciduous forests and improve our understanding of the environmental controls on the timing of fall foliage color change.
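For readers unfamiliar with the VARI index mentioned above, a minimal sketch of its computation from surface reflectances follows. The formula VARI = (green - red) / (green + red - blue) is standard; the mapping to MODIS daily surface-reflectance bands 4 (green), 1 (red), and 3 (blue) is an assumption about typical usage, not a statement of the authors' exact processing chain.

    import numpy as np

    def vari(green, red, blue):
        """Visible Atmospherically Resistant Index from surface reflectances."""
        green, red, blue = map(np.asarray, (green, red, blue))
        return (green - red) / (green + red - blue)

    # Toy reflectances for a pixel at two dates (made up, green-leaf vs. senescent)
    print(vari([0.08, 0.05], [0.05, 0.12], [0.02, 0.03]))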
Variation in light intensity with height and time from subsequent lightning return strokes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, D.M.; Uman, M.A.
1983-08-20
Relative light intensity has been measured photographically as a function of height and time for seven subsequent return strokes in two lightning flashes at ranges of 7.8 and 8.7 km. The film used was Kodak 5474 Shellburst, which has a roughly constant spectral response between 300 and 670 nm. The time resolution was about 1.0 μs, and the spatial resolution was about 4 m. The observed light signals consisted of a fast rise to peak, followed by a slower decrease to a relatively constant value. The amplitude of the initial light peak decreases exponentially with height with a decay constant of about 0.6 to 0.8 km. The 20% to 80% rise time of the initial light signal is between 1 and 4 μs near ground and increases by an additional 1 to 2 μs by the time the return stroke reaches the cloud base, a height between 1 and 2 km. The light intensity 30 μs after the initial peak is relatively constant with height and has an amplitude that is 15% to 30% of the initial peak near the ground and 50% to 100% of the initial peak at cloud base. The logarithm of the peak light intensity near the ground is roughly proportional to the initial peak electric field intensity, and this in turn implies that the current decrease with height may be much slower than the light decrease. The absolute light intensity has been estimated by integrating the photographic signals from individual channel segments to simulate the calibrated all-sky photoelectric data of Guo and Krider (1982). Using this method, the authors find that the mean peak radiance near the ground is 8.3 × 10^5 W/m, with a total range from 1.4 × 10^5 to 3.8 × 10^6 W/m. 16 references, 11 figures.
NASA Astrophysics Data System (ADS)
Xie, L.; Pietrafesa, L. J.; Wu, K.
2003-02-01
A three-dimensional wave-current coupled modeling system is used to examine the influence of waves on coastal currents and sea level. This coupled modeling system consists of the wave model WAM (Cycle 4) and the Princeton Ocean Model (POM). The results from this study show that it is important to incorporate surface wave effects into coastal storm surge and circulation models. Specifically, we find that (1) storm surge models without coupled surface waves generally underestimate not only the peak surge but also the coastal water-level drop, which can also cause substantial impact on the coastal environment, (2) introducing wave-induced surface stress effects into storm surge models can significantly improve storm surge prediction, (3) incorporating wave-induced bottom stress into the coupled wave-current model further improves storm surge prediction, and (4) calibration of the wave module according to minimum error in significant wave height does not necessarily result in an optimum wave module in a wave-current coupled system for current and storm surge prediction.
Sherwood, James M.; Ebner, Andrew D.; Koltun, G.F.; Astifan, Brian M.
2007-01-01
Heavy rains caused severe flooding on June 22-24, 2006, and damaged approximately 4,580 homes and 48 businesses in Cuyahoga County. Damage estimates in Cuyahoga County for the two days of flooding exceed $47 million; statewide damage estimates exceed $150 million. Six counties (Cuyahoga, Erie, Huron, Lucas, Sandusky, and Stark) in northeast Ohio were declared Federal disaster areas. One death, in Lorain County, was attributed to the flooding. The peak streamflow of 25,400 cubic feet per second and corresponding peak gage height of 23.29 feet were the highest recorded at the U.S. Geological Survey (USGS) streamflow-gaging station Cuyahoga River at Independence (04208000) since the gaging station began operation in 1922, exceeding the previous peak streamflow of 24,800 cubic feet per second that occurred on January 22, 1959. An indirect calculation of the peak streamflow was made by use of a step-backwater model because all roads leading to the gaging station were inundated during the flood and field crews could not reach the station to make a direct measurement. Because of a statistically significant and persistent positive trend in the annual-peak-streamflow time series for the Cuyahoga River at Independence, a method was developed and applied to detrend the annual-peak-streamflow time series prior to the traditional log-Pearson Type III flood-frequency analysis. Based on this analysis, the recurrence interval of the computed peak streamflow was estimated to be slightly less than 100 years. Peak-gage-height data, peak-streamflow data, and recurrence-interval estimates for the June 22-24, 2006, flood are tabulated for the Cuyahoga River at Independence and 10 other USGS gaging stations in north-central Ohio. Because flooding along the Cuyahoga River near Independence and Valley View was particularly severe, a study was done to document the peak water-surface profile during the flood from approximately 2 miles downstream from the USGS streamflow-gaging station at Independence to approximately 2 miles upstream from the gaging station. High-water marks were identified and flagged in the field. Third-order-accuracy surveys were used to determine elevations of the high-water marks, and the data were tabulated and plotted.
Raines, Timothy H.
1998-01-01
The potential extreme peak-discharge curves as related to contributing drainage area were estimated for each of the three hydrologic regions from measured extreme peaks of record at 186 sites with streamflow-gaging stations and from measured extreme peaks at 37 sites without streamflow-gaging stations in and near the Brazos River Basin. The potential extreme peak-discharge curves generally are similar for hydrologic regions 1 and 2, and the curve for region 3 consistently is below the curves for regions 1 and 2, which indicates smaller peak discharges.
On nonstationarity-related errors in modal combination rules of the response spectrum method
NASA Astrophysics Data System (ADS)
Pathak, Shashank; Gupta, Vinay K.
2017-10-01
Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for practicing engineers, modal combination rules play a central role in the peak response estimation. Most of the available modal combination rules are, however, based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for estimating the extents to which nonstationarity affects the modal and total system responses when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of the complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant when the strong-motion duration of the excitation is short compared with the period of the system and/or the response is distributed comparably among several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
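For readers unfamiliar with the CQC rule referenced above, a minimal sketch follows. It uses the standard equal-damping modal correlation coefficient and combines given modal peak responses; the modal peaks, frequencies, and damping ratio are hypothetical, and this is not a reproduction of the authors' nonstationarity-factor models.

    import numpy as np

    def cqc_correlation(w_i, w_j, zeta):
        """Equal-damping CQC modal correlation coefficient (Der Kiureghian form)."""
        r = w_j / w_i
        num = 8.0 * zeta**2 * (1.0 + r) * r**1.5
        den = (1.0 - r**2)**2 + 4.0 * zeta**2 * r * (1.0 + r)**2
        return num / den

    def cqc_peak(modal_peaks, omegas, zeta=0.05):
        """Combine modal peak responses R_i into an estimate of the total peak response."""
        R = np.asarray(modal_peaks, dtype=float)
        total = 0.0
        for i, wi in enumerate(omegas):
            for j, wj in enumerate(omegas):
                total += cqc_correlation(wi, wj, zeta) * R[i] * R[j]
        return np.sqrt(total)

    # Hypothetical three-mode base-shear contributions (kN) and natural frequencies (rad/s)
    print(cqc_peak([850.0, 320.0, 120.0], [6.3, 18.8, 31.4]))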
Ivandini, Tribidasari A; Wicaksono, Wiyogo P; Saepudin, Endang; Rismetov, Bakhadir; Einaga, Yasuaki
2015-03-01
Anodic stripping voltammetry (ASV) of colloidal gold nanoparticles (AuNPs) was investigated at boron-doped diamond (BDD) electrodes in 50 mM HClO4. A deposition time of 300 s at -0.2 V (vs. Ag/AgCl) was fixed as the condition for the ASV. The voltammograms showed oxidation peaks that could be attributed to the oxidation of gold. These oxidation peaks were then investigated for potential application in immunochromatographic strip tests for the selective and quantitative detection of melamine, in which AuNPs were used as the label for the antibody of melamine. The oxidation peak currents varied linearly with melamine concentration over the range of 0.05-0.6 μg/mL of melamine standard, with an estimated LOD of 0.069 μg/mL and an average relative standard deviation of 8.0%. This indicated that the method could be considered as an alternative method for selective and quantitative immunochromatographic applications. The validity was examined by measurements of melamine injected into milk samples, which showed good recovery percentages during the measurements. Copyright © 2014 Elsevier B.V. All rights reserved.
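As a small illustration of how a detection limit like the one quoted above can be derived from a linear calibration of peak currents, the sketch below fits a straight line and applies the common 3.3-sigma/slope convention. The calibration data and the LOD convention are assumptions for illustration; the paper's exact definition and numbers are not reproduced here.

    import numpy as np

    # Hypothetical calibration: melamine standards (ug/mL) vs. ASV peak current (uA)
    conc = np.array([0.05, 0.1, 0.2, 0.4, 0.6])
    i_peak = np.array([0.9, 1.7, 3.6, 7.1, 10.8])

    slope, intercept = np.polyfit(conc, i_peak, 1)
    residuals = i_peak - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)          # standard error of the regression

    lod = 3.3 * sigma / slope              # one common LOD convention (assumed here)
    print(f"slope = {slope:.2f} uA per ug/mL, LOD ~ {lod:.3f} ug/mL")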
The dichotomous response of flood and storm extremes to rising global temperatures
NASA Astrophysics Data System (ADS)
Sharma, A.; Wasko, C.
2017-12-01
Rising temperatures have resulted in increases in short-duration rainfall extremes across the world. Additionally, it has been shown (doi:10.1038/ngeo2456) that storms will intensify, causing derived flood peaks to rise even more. This leads us to speculate that flood peaks will increase as a result, complying with the storyline presented in past IPCC reports. This talk, however, shows that changes in flood extremes are much more complex. Using global data on extreme flow events, the study conclusively shows that while the very extreme floods may be rising as a result of storm intensification, the more frequent flood events are decreasing in magnitude. The study argues that changes in the magnitude of floods are a function of changes in storm patterns as well as of pre-storm or antecedent conditions. It goes on to show that while changes in storms dominate for the most extreme events and over smaller, more urbanised catchments, changes in pre-storm conditions are the driving factor in modulating flood peaks in large rural catchments. The study concludes by providing recommendations on how future flood design should proceed, arguing that current practices (of using a design storm to estimate floods) are flawed and need changing.
The Earthquake Early Warning System In Southern Italy: Performance Tests And Next Developments
NASA Astrophysics Data System (ADS)
Zollo, A.; Elia, L.; Martino, C.; Colombelli, S.; Emolo, A.; Festa, G.; Iannaccone, G.
2011-12-01
PRESTo (PRobabilistic and Evolutionary early warning SysTem) is the software platform for Earthquake Early Warning (EEW) in Southern Italy, which integrates recent algorithms for real-time earthquake location, magnitude estimation, and damage assessment into a highly configurable and easily portable package. The system is under active experimentation based on the Irpinia Seismic Network (ISNet). PRESTo processes the live streams of 3C acceleration data for P-wave arrival detection and, while an event is occurring, promptly performs event detection and provides location and magnitude estimates and peak ground shaking predictions at target sites. The earthquake location is obtained by an evolutionary, real-time probabilistic approach based on an equal differential time formulation. At each time step, it uses information from both triggered and not-yet-triggered stations. Magnitude estimation exploits an empirical relationship that correlates magnitude with the filtered peak displacement (Pd) measured over the first 2-4 s of the P signal. Peak ground-motion parameters at any distance can finally be estimated by ground motion prediction equations. Alarm messages containing the updated estimates of these parameters can thus reach target sites before the destructive waves, enabling automatic safety procedures. Using the real-time data streaming from the ISNet network, PRESTo has produced a bulletin for about a hundred low-magnitude events that occurred during the last two years. Meanwhile, the performance of the EEW system was assessed off-line by playing back the records of moderate and large events from Italy, Spain, and Japan and synthetic waveforms for large historical events in Italy. These tests have shown that, when a dense seismic network is deployed in the fault area, PRESTo produces reliable estimates of earthquake location and size within 5-6 s from the event origin time (To). Estimates are provided as probability density functions whose uncertainty typically decreases with time, reaching a stable solution within 10 s from To. The regional approach was recently integrated with a threshold-based early warning method for the definition of alert levels and the estimation of the Potential Damaged Zone (PDZ) in which the highest intensity levels are expected. The dominant period tau_c and the peak displacement (Pd) are simultaneously measured in a 3 s window after the first P-arrival time. Pd and tau_c are then compared with threshold values, previously established through an empirical regression analysis, that define a decisional table with four alert levels. According to the real-time measured values of Pd and tau_c, each station provides a local alert level that can be used to warn distant sites and to define the extent of the PDZ. The integrated system was validated off-line for the M6.3, 2009 Central Italy earthquake and ten large Japanese events, because only low-magnitude events are currently occurring in Irpinia. The results confirmed the feasibility and robustness of such an approach, providing reliable predictions of the earthquake damaging effects, which is relevant information for the efficient planning of rescue operations in the immediate post-event emergency phase.
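A schematic version of the kind of Pd/tau_c decisional table described above is sketched below. The threshold values and the mapping of threshold combinations to the four levels are placeholders for illustration only; the calibrated, region-specific thresholds used in PRESTo are not reproduced here.

    def alert_level(pd_cm, tau_c_s, pd_thr=0.2, tau_thr=0.6):
        """Toy 2x2 decisional table: pd_thr and tau_thr are placeholders, not PRESTo values."""
        high_pd = pd_cm >= pd_thr      # large peak displacement: strong shaking expected nearby
        high_tc = tau_c_s >= tau_thr   # long dominant period: large-magnitude source
        if high_pd and high_tc:
            return 4
        if high_tc:
            return 3
        if high_pd:
            return 2
        return 1

    print(alert_level(0.35, 0.9))   # -> 4 with these placeholder thresholds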
Submicrosecond characteristics of lightning return-stroke currents
NASA Technical Reports Server (NTRS)
Leteinturier, Christiane; Hamelin, Joel H.; Eybert-Berard, Andre
1991-01-01
The authors describe the experimental results obtained during 1987 and 1988 triggered-lightning experiments in Florida. Seventy-four simultaneous submicrosecond time-resolved measurements of triggered return-stroke current (I) and current derivative (dI/dt) were made in Florida in 1987 and 1988. Peak currents ranged from about 5 to 76 kA, peak dI/dt amplitudes from 13 to 411 kA/microsec, and rise times from 90 to 1000 ns. The mean peak dI/dt value of 110 kA/microsec was 2-3 times higher than data from instrumented towers, and peak I and dI/dt appear to be positively correlated. These data confirm previous experiments and conclusions supported by forty measurements. They are important in order to define, for example, standards for lightning protection. Present standards give a dI/dt maximum of 140 kA/microsec.
Jet noise suppressor nozzle development for augmentor wing jet STOL research aircraft (C-8A Buffalo)
NASA Technical Reports Server (NTRS)
Harkonen, D. L.; Marks, C. C.; Okeefe, J. V.
1974-01-01
Noise and performance test results are presented for a full-scale advanced design rectangular array lobe jet suppressor nozzle (plain wall and corrugated). Flight design and installation considerations are also discussed. Noise data are presented in terms of peak PNLT (perceived noise level, tone corrected) suppression relative to the existing airplane and one-third octave-band spectra. Nozzle performance is presented in terms of velocity coefficient. Estimates of the hot thrust available during emergency (engine out) with the suppressor nozzle installed are compared with the current thrust levels produced by the round convergent nozzles.
1990-05-10
Table footnotes (fragment), cyclic voltammetry of an adsorbed redox species: (a) the ratio of the slope of a plot of peak current vs. sweep rate to the charge should be equal to nF/4RT, where n is the number of electrons per adsorbed species; (b) values of ΔEp = (Epa - Epc) are given at a potential sweep rate of 50 mV/s; (c) data from ref. 8g; (d) potentials are approximate because of overlapping waves; formal potentials E = (Epa + Epc)/2, averaged from cyclic voltammetry at 200, 100, 50, and 20 mV/s, are reported; data in parentheses are estimated from overlapping waves.
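The relation quoted in this fragment (slope of peak current vs. sweep rate, divided by the adsorbed-species charge, equal to nF/4RT) can be inverted to estimate the number of electrons per adsorbed species. The slope and charge values below are made-up numbers chosen only to show the arithmetic.

    # For a surface-confined redox couple, i_p = n^2 F^2 nu A Gamma / (4 R T),
    # so (d i_p / d nu) / Q = n F / (4 R T), where Q = n F A Gamma is the adsorbed charge.
    F = 96485.0      # C/mol
    R = 8.314        # J/(mol K)
    T = 298.15       # K

    slope = 2.4e-4   # A per (V/s), hypothetical slope of peak current vs. sweep rate
    charge = 2.5e-5  # C, hypothetical charge under the adsorption wave

    n = 4.0 * R * T * slope / (F * charge)
    print(f"estimated electrons per adsorbed species: n ~ {n:.2f}")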
Extreme changes in the dayside ionosphere during a Carrington-type magnetic storm
NASA Astrophysics Data System (ADS)
Tsurutani, Bruce T.; Verkhoglyadova, Olga P.; Mannucci, Anthony J.; Lakhina, Gurbax S.; Huba, Joseph D.
2012-06-01
It is shown that during the 30 October 2003 superstorm, dayside O+ ions were uplifted to DMSP altitudes (~850 km). Peak densities were ~9 × 10^5 cm^-3 during the magnetic storm main phase (peak Dst = -390 nT). By comparison the 1-2 September 1859 Carrington magnetic storm (peak Dst estimated at -1760 nT) was considerably stronger. We investigate the impact of this storm on the low- to mid-latitude ionosphere using a modified version of the NRL SAMI2 ionospheric code. It is found that the equatorial region (LAT = 0° ± 15°) is swept free of plasma within 15 min (or less) of storm onset. The plasma is swept to higher altitudes and higher latitudes due to E × B convection associated with the prompt penetration electric field. Equatorial Ionization Anomaly (EIA) O+ density enhancements are found to be located within the broad range of latitudes ~ ± (25°-40°) at ~500-900 km altitudes. Densities within these peaks are ~6 × 10^6 oxygen ions cm^-3 at ~700 km altitude, approximately +600% of quiet-time values. The oxygen ions at the top portions (850-1000 km) of uplifted EIAs will cause strong low-altitude satellite drag. Calculations are currently being performed on possible uplift of oxygen neutrals by ion-neutral coupling to understand if there might be further significant satellite drag forces present.
Zhang, Di; Cagnon, Chris H; Villablanca, J Pablo; McCollough, Cynthia H; Cody, Dianna D; Zankl, Maria; Demarco, John J; McNitt-Gray, Michael F
2013-09-01
CT neuroperfusion examinations are capable of delivering high radiation dose to the skin or lens of the eyes of a patient and can possibly cause deterministic radiation injury. The purpose of this study is to: (a) estimate peak skin dose and eye lens dose from CT neuroperfusion examinations based on several voxelized adult patient models of different head size and (b) investigate how well those doses can be approximated by some commonly used CT dose metrics or tools, such as CTDIvol, American Association of Physicists in Medicine (AAPM) Report No. 111 style peak dose measurements, and the ImPACT organ dose calculator spreadsheet. Monte Carlo simulation methods were used to estimate peak skin and eye lens dose on voxelized patient models, including GSF's Irene, Frank, Donna, and Golem, on four scanners from the major manufacturers at the widest collimation under all available tube potentials. Doses were reported on a per 100 mAs basis. CTDIvol measurements for a 16 cm CTDI phantom, AAPM Report No. 111 style peak dose measurements, and ImPACT calculations were performed for available scanners at all tube potentials. These were then compared with results from Monte Carlo simulations. The dose variations across the different voxelized patient models were small. Dependent on the tube potential and scanner and patient model, CTDIvol values overestimated peak skin dose by 26%-65%, and overestimated eye lens dose by 33%-106%, when compared to Monte Carlo simulations. AAPM Report No. 111 style measurements were much closer to peak skin estimates ranging from a 14% underestimate to a 33% overestimate, and with eye lens dose estimates ranging from a 9% underestimate to a 66% overestimate. The ImPACT spreadsheet overestimated eye lens dose by 2%-82% relative to voxelized model simulations. CTDIvol consistently overestimates dose to eye lens and skin. The ImPACT tool also overestimated dose to eye lenses. As such they are still useful as a conservative predictor of dose for CT neuroperfusion studies. AAPM Report No. 111 style measurements are a better predictor of both peak skin and eye lens dose than CTDIvol and ImPACT for the patient models used in this study. It should be remembered that both the AAPM Report No. 111 peak dose metric and CTDIvol dose metric are dose indices and were not intended to represent actual organ doses.
Cloud-to-ground lightning flash characteristics from June 1984 through May 1985
NASA Technical Reports Server (NTRS)
Orville, Richard E.; Weisman, Robert A.; Pyle, Richard B.; Henderson, Ronald W.; Orville, Richard E., Jr.
1987-01-01
A magnetic direction-finding network for the detection of lightning cloud-to-ground strikes has been installed along the east coast of the United States. Time, location, flash polarity, stroke count, and peak signal amplitude are recorded in real time. The data were recorded from Maine to North Carolina and as far west as Ohio; analyses were restricted to flashes within 300 km of a direction finder. Measurements of peak signal strength have been obtained from 720,284 first return strokes lowering negative charge. The resulting distribution indicates that few negative strokes have peak currents exceeding 100 kA. Measurements have also been obtained of peak signal strength from 17,694 first return strokes lowering positive charge. These strokes have a median peak current of 45 kA, with some peak currents reaching 300-400 kA. The median peak signal strength and the peak current double from summer to winter for both negative and positive first return strokes. The polarity of ground flashes is observed to be less than 5 percent positive throughout the summer and early fall, then increases to over 50 percent during the winter, and returns to less than 10 percent in early spring. The percent of positive flashes with one stroke is observed to be approximately 90 percent throughout the year. The percent of negative flashes with one stroke is observed to increase from 40 percent in the summer to approximately 80 percent in January, returning to less than 50 percent in the spring.
Using the Human Activity Profile to Assess Functional Performance in Heart Failure.
Ribeiro-Samora, Giane Amorim; Pereira, Danielle Aparecida Gomes; Vieira, Otávia Alves; de Alencar, Maria Clara Noman; Rodrigues, Roseane Santo; Carvalho, Maria Luiza Vieira; Montemezzo, Dayane; Britto, Raquel Rodrigues
2016-01-01
To investigate (1) the validity of using the Human Activity Profile (HAP) in patients with heart failure (HF) to estimate functional capacity; (2) the association between the HAP and 6-Minute Walk Test (6MWT) distance; and (3) the ability of the HAP to differentiate between New York Heart Association (NYHA) functional classes. In a cross-sectional study, we evaluated 62 clinically stable patients with HF (mean age, 47.98 years; NYHA class I-III). Variables included maximal functional capacity as measured by peak oxygen uptake (VO2) using a cardiopulmonary exercise test (CPET), peak VO2 as estimated by the HAP, and exercise capacity as measured by the 6MWT. The difference between the measured (CPET) and estimated (HAP) peak VO2, plotted against the average values, showed a bias of 2.18 mL/kg/min (P = .007). No agreement was seen between these measures when applying the Bland-Altman method. Peak VO2 estimated by the HAP showed a moderate association with the 6MWT distance (r = 0.62; P < .0001). Peak VO2 estimated by the HAP was able to statistically differentiate NYHA functional classes I, II, and III (P < .05). The estimated peak VO2 from the HAP was not concordant with the gold-standard CPET measure. The HAP was, however, able to differentiate NYHA functional classes and was associated with the 6MWT distance; therefore, the HAP is a useful tool for assessing functional performance in patients with HF.
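A minimal sketch of the Bland-Altman bias and limits-of-agreement calculation used in the comparison above follows. The paired peak VO2 values are made-up numbers, not the study data; only the arithmetic of the method is illustrated.

    import numpy as np

    # Hypothetical paired peak VO2 values (mL/kg/min): measured (CPET) vs. estimated (HAP)
    cpet = np.array([18.2, 22.5, 15.1, 27.0, 19.8, 24.3])
    hap  = np.array([16.5, 20.1, 14.0, 23.8, 18.9, 21.7])

    diff = cpet - hap
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

    print(f"bias = {bias:.2f} mL/kg/min, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")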
Flood of June 7-9, 2008, in Central and Southern Indiana
Morlock, Scott E.; Menke, Chad D.; Arvin, Donald V.; Kim, Moon H.
2008-01-01
On June 6-7, 2008, heavy rainfall of 2 to more than 10 inches fell upon saturated soils and added to already high streamflows from a wetter than normal spring in central and southern Indiana. The heavy rainfall resulted in severe flooding on many streams within the White River Basin during June 7-9, causing three deaths, evacuation of thousands of residents, and hundreds of millions of dollars of damage to residences, businesses, infrastructure, and agricultural lands. In all, 39 Indiana counties were declared Federal disaster areas. U.S. Geological Survey (USGS) streamgages at nine locations recorded new record peak streamflows for the respective periods of record as a result of the heavy rainfall. Recurrence intervals of flood-peak streamflows were estimated to be greater than 100 years at five streamgages and 50-100 years at two streamgages. Peak-gage-height data, peak-streamflow data, and recurrence intervals are tabulated for 19 USGS streamgages in central and southern Indiana. Peak-streamflow estimates are tabulated for four ungaged locations, and estimated recurrence intervals are tabulated for three ungaged locations. The estimated recurrence interval for an ungaged location on Haw Creek in Columbus was greater than 100 years and for an ungaged location on Hurricane Creek in Franklin was 50-100 years. Because flooding was particularly severe in the communities of Columbus, Edinburgh, Franklin, Paragon, Seymour, Spencer, Martinsville, Newberry, and Worthington, high-water-mark data collected after the flood were tabulated for those communities. Flood peak inundation maps and water-surface profiles for selected streams were made in a geographic information system by combining the high-water-mark data with the highest-resolution digital elevation model data available.
SENSITIVITY OF STRUCTURAL RESPONSE TO GROUND MOTION SOURCE AND SITE PARAMETERS.
Safak, Erdal; Brebbia, C.A.; Cakmak, A.S.; Abdel Ghaffar, A.M.
1985-01-01
Designing structures to withstand earthquakes requires an accurate estimation of the expected ground motion. While engineers use the peak ground acceleration (PGA) to model the strong ground motion, seismologists use physical characteristics of the source and the rupture mechanism, such as fault length, stress drop, shear wave velocity, seismic moment, distance, and attenuation. This study presents a method for calculating response spectra from seismological models using random vibration theory. It then investigates the effect of various source and site parameters on peak response. Calculations are based on a nonstationary stochastic ground motion model, which can incorporate all the parameters both in frequency and time domains. The estimation of the peak response accounts for the effects of the non-stationarity, bandwidth and peak correlations of the response.
High-resolution 129I bomb peak profile in an ice core from SE-Dome site, Greenland.
Bautista, Angel T; Miyake, Yasuto; Matsuzaki, Hiroyuki; Iizuka, Yoshinori; Horiuchi, Kazuho
2018-04-01
129I in natural archives, such as ice cores, can be used as a proxy for human nuclear activities, an age marker, and an environmental tracer. Currently, there is only one published record of 129I in an ice core (i.e., from Fiescherhorn Glacier, Swiss Alps), and its limited time resolution (1-2 years) prevents the full use of 129I for the mentioned applications. Here we show 129I concentrations in an ice core from SE-Dome, Greenland, covering years 1956-1976 at a time resolution of ∼6 months, the most detailed record to date. Results revealed 129I bomb peaks in years 1959, 1962, and 1963, associated with tests performed by the former Soviet Union, one year prior, at its Novaya Zemlya test site. All 129I bomb peaks were observed in winter (1958.9, 1962.1, and 1963.0), while tritium bomb peaks, another prominent radionuclide signal associated with nuclear bomb testing, were observed in spring or summer (1959.3 and 1963.6; Iizuka et al., 2017). These results indicate that 129I bomb peaks can be used as annual and seasonal age markers for these years. Furthermore, we found that 129I recorded nuclear fuel reprocessing signals and that these can potentially be used to correct the timing of estimated 129I releases during years 1964-1976. Comparisons with other published records of 129I in natural archives showed that 129I can be used as a common age marker and tracer for different types of records. Most notably, the 1963 129I bomb peak can be used as a common age marker for ice and coral cores, providing the means to reconcile age models and associated trends from the polar and tropical regions, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Lumbar Lordosis in Males and Females, Revisited
Hay, Ori; Dar, Gali; Abbas, Janan; Stein, Dan; May, Hila; Masharawi, Youssef; Peled, Nathan; Hershkovitz, Israel
2015-01-01
Background Whether differences exist in male and female lumbar lordosis has been debated by researchers who are divided as to the nature of variations in the spinal curve, their origin, reasoning, and implications from a morphological, functional and evolutionary perspective. Evaluation of the spinal curvature is constructive in understanding the evolution of the spine, as well as its pathology, planning of surgical procedures, monitoring its progression and treatment of spinal deformities. The aim of the current study was to revisit the nature of the lumbar curve in males and females. Methods Our new automated method uses CT imaging of the spine to measure lumbar curvature in males and females. The curves extracted from 158 individuals were based on the spinal canal, thus avoiding traditional pitfalls of using bone features for curve estimation. The model analysis was carried out on the entire curve, whereby both local and global descriptors were examined in a single framework. Six parameters were calculated: segment length, curve length, curvedness, lordosis peak location, lordosis cranial peak height, and lordosis caudal peak height. Principal Findings Compared to males, the female spine manifested significantly greater curvature, a more caudally located lordotic peak, and greater cranial peak height. As caudal peak height is similar for males and females, the illusion of deeper lordosis among females is due partially to the fact that the upper part of the female lumbar curve is positioned more dorsally (more backwardly inclined). Conclusions Males and females manifest different lumbar curve shapes, yet a similar amount of inward curving (lordosis). The morphological characteristics of the female spine were probably developed to reduce stress on the vertebral elements during pregnancy and nursing. PMID:26301782
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data, including cases in which two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes a stretched exponential, and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model, and when appropriate a two-component model, will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
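A minimal sketch of fitting a gamma variate excretion curve, and its stretched exponential extension, to synthetic breath-test data is given below. The functional forms follow the standard gamma variate convention; the data, starting values, and the use of the excretion-rate peak as a proxy for Tlag are assumptions for illustration, not the authors' fitting code.

    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, a, b, c):
        """Classic gamma variate excretion-rate model."""
        return a * t**b * np.exp(-t / c)

    def gamma_variate_stretched(t, a, b, c, d):
        """Gamma variate with a stretched exponential tail (d = 1 recovers the classic form)."""
        return a * t**b * np.exp(-(t / c)**d)

    # Hypothetical 13CO2 excretion-rate data (percent dose/h) sampled every 5 min for 3 h
    t = np.arange(5, 185, 5) / 60.0                      # hours
    y = gamma_variate(t, 40.0, 1.6, 0.5) + np.random.normal(0, 0.3, t.size)

    popt, _ = curve_fit(gamma_variate_stretched, t, y, p0=[30.0, 1.5, 0.5, 1.0], maxfev=10000)
    t_fine = np.linspace(0.01, 3.0, 600)
    t_peak = t_fine[np.argmax(gamma_variate_stretched(t_fine, *popt))]  # one proxy for Tlag
    print(f"fitted parameters: {popt}, excretion peak at ~{t_peak:.2f} h")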
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
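The first-order (Taylor-series) error propagation described above can be written compactly as Var(Q) roughly equal to g^T Sigma g, where g is the gradient of the discharge formula with respect to the measured quantities and Sigma is their error covariance matrix. The sketch below applies this to a simple Manning-type slope-area formula with assumed parameter values and uncertainties, not the report's channel data or its exact weighting expressions.

    import numpy as np

    def discharge(params):
        """Simple slope-area (Manning) discharge: Q = (1.49/n) * A * R**(2/3) * S**0.5."""
        n, area, hyd_radius, slope = params
        return (1.49 / n) * area * hyd_radius**(2.0 / 3.0) * np.sqrt(slope)

    def numerical_gradient(f, x, rel_step=1e-6):
        """Central-difference gradient of f at x."""
        x = np.asarray(x, dtype=float)
        g = np.zeros_like(x)
        for k in range(x.size):
            h = rel_step * max(abs(x[k]), 1.0)
            xp, xm = x.copy(), x.copy()
            xp[k] += h
            xm[k] -= h
            g[k] = (f(xp) - f(xm)) / (2.0 * h)
        return g

    x0 = np.array([0.035, 850.0, 6.2, 0.0008])    # n, A (ft^2), R (ft), S (assumed values)
    sd = np.array([0.006, 40.0, 0.3, 0.0002])     # assumed standard errors
    cov = np.diag(sd**2)                          # uncorrelated here; add covariances if known

    g = numerical_gradient(discharge, x0)
    var_q = g @ cov @ g
    rel_err = 100.0 * np.sqrt(var_q) / discharge(x0)
    print(f"Q ~ {discharge(x0):.0f} ft^3/s, relative standard error ~ {rel_err:.1f}%")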
Design and testing of a magnetically driven implosion peak current diagnostic
NASA Astrophysics Data System (ADS)
Hess, M. H.; Peterson, K. J.; Ampleford, D. J.; Hutsel, B. T.; Jennings, C. A.; Gomez, M. R.; Dolan, D. H.; Robertson, G. K.; Payne, S. L.; Stygar, W. A.; Martin, M. R.; Sinars, D. B.
2018-04-01
A critical component of the magnetically driven implosion experiments at Sandia National Laboratories is the delivery of high-current, 10s of MA, from the Z pulsed power facility to a target. In order to assess the performance of the experiment, it is necessary to measure the current delivered to the target. Recent Magnetized Liner Inertial Fusion (MagLIF) experiments have included velocimetry diagnostics, such as PDV (Photonic Doppler Velocimetry) or Velocity Interferometer System for Any Reflector, in the final power feed section in order to infer the load current as a function of time. However, due to the nonlinear volumetrically distributed magnetic force within a velocimetry flyer, a complete time-dependent load current unfold is typically a time-intensive process and the uncertainties in the unfold can be difficult to assess. In this paper, we discuss how a PDV diagnostic can be simplified to obtain a peak current by sufficiently increasing the thickness of the flyer. This effectively keeps the magnetic force localized to the flyer surface, resulting in fast and highly accurate measurements of the peak load current. In addition, we show the results of experimental peak load current measurements from the PDV diagnostic in recent MagLIF experiments.
NASA Technical Reports Server (NTRS)
Phinney, D. E. (Principal Investigator)
1980-01-01
An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
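The crop-calendar algorithm summarized above shifts a reference greenness profile in time until its cross correlation with the observed multitemporal greenness pattern is maximized. A schematic version of that idea, using synthetic Gaussian-shaped profiles rather than AgRISTARS data, is sketched below.

    import numpy as np

    def best_shift(reference, observed, shifts):
        """Return the shift (in samples) that maximizes correlation with the reference."""
        scores = []
        idx = np.arange(observed.size)
        for s in shifts:
            shifted = np.interp(idx - s, np.arange(reference.size), reference,
                                left=reference[0], right=reference[-1])
            scores.append(np.corrcoef(shifted, observed)[0, 1])
        return shifts[int(np.argmax(scores))]

    days = np.arange(0, 120, 9)                          # acquisition days (synthetic)
    reference = np.exp(-0.5 * ((days - 60) / 18.0)**2)   # reference greenness, peak at day 60
    observed = np.exp(-0.5 * ((days - 78) / 18.0)**2)    # field peaks ~18 days later

    shift_days = best_shift(reference, observed, shifts=np.arange(-5, 6)) * 9
    print(f"estimated shift of peak spectral response: ~{shift_days} days")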
Tidally oriented vertical migration and position maintenance of zooplankton in a temperate estuary
Kimmerer, W.J.; Burau, J.R.; Bennett, W.A.
1998-01-01
In many estuaries, maxima in turbidity and abundance of several common species of zooplankton occur in the low salinity zone (LSZ) in the range of 0.5-6 practical salinity units (psu). Analysis of zooplankton abundance from monitoring in 1972-1987 revealed that historical maxima in abundance of the copepod Eurytemora affinis and the mysid Neomysis mercedis, and in turbidity as determined from Secchi disk data, were close to the estimated position of 2 psu bottom salinity. The copepod Sinocalanus doerrii had a maximum slightly landward of that of E. affinis. After 1987 these maxima decreased and shifted to a lower salinity, presumably because of the effects of grazing by the introduced clam Potamocorbula amurensis. At the same time, the copepod Pseudodiaptomus forbesi, the mysid Acanthomysis sp., and amphipods became abundant with peaks at salinity around 0.2-0.5 psu. Plausible mechanisms for maintenance of these persistent abundance peaks include interactions between variation in flow and abundance, either in the vertical or horizontal plane, or higher net population growth rate in the peaks than seaward of the peaks. In spring of 1994, a dry year, we sampled in and near the LSZ using a Lagrangian sampling scheme to follow selected isohalines while sampling over several complete tidal cycles. Acoustic Doppler current profilers were used to provide detailed velocity distributions to enable us to estimate longitudinal fluxes of organisms. Stratification was weak and gravitational circulation nearly absent in the LSZ. All of the common species of zooplankton migrated vertically in response to the tides, with abundance higher in the water column on the flood than on the ebb. Migration of mysids and amphipods was sufficient to override net seaward flow to produce a net landward flux of organisms. Migration of copepods, however, was insufficient to reverse or even greatly diminish the net seaward flux of organisms, implying alternative mechanisms of position maintenance.
Computational fluid dynamics simulations of the Late Pleistocene Lake Bonneville flood
Abril-Hernández, José M.; Periáñez, Raúl; O'Connor, Jim E.; Garcia-Castellanos, Daniel
2018-01-01
At approximately 18.0 ka, pluvial Lake Bonneville reached its maximum level. At its northeastern extent it was impounded by alluvium of the Marsh Creek Fan, which breached at some point north of Red Rock Pass (Idaho), leading to one of the largest floods on Earth. About 5320 km^3 of water was discharged into the Snake River drainage and ultimately into the Columbia River. We use a 0D model and a 2D non-linear depth-averaged hydrodynamic model to aid understanding of outflow dynamics, specifically evaluating controls on the amount of water exiting the Lake Bonneville basin exerted by the Red Rock Pass outlet lithology and geometry as well as those imposed by the internal lake geometry of the Bonneville basin. These models are based on field evidence of prominent lake levels, hypsometry and terrain elevations corrected for post-flood isostatic deformation of the lake basin, as well as reconstructions of the topography at the outlet for both the initial and final stages of the flood. Internal flow dynamics in the northern Lake Bonneville basin during the flood were affected by the narrow passages separating the Cache Valley from the main body of Lake Bonneville. This constriction imposed a water-level drop of up to 2.7 m at the time of peak-flow conditions and likely reduced the peak discharge at the lake outlet by about 6%. The modeled peak outlet flow is 0.85·10^6 m^3 s^-1. Energy balance calculations give an estimate for the erodibility coefficient for the alluvial Marsh Creek divide of ∼0.005 m y^-1 Pa^-1.5, at least two orders of magnitude greater than for the underlying bedrock at the outlet. Computing quasi steady-state water flows, water elevations, water currents and shear stresses as a function of the water-level drop in the lake and for the sequential stages of erosion in the outlet gives estimates of the incision rates and an estimate of the outflow hydrograph during the Bonneville Flood: about 18 days would have been required for the outflow to grow from 10% to 100% of its peak value. At the time of peak flow, about 10% of the lake volume would have already exited, eroding about 1 km^3 of alluvium from the outlet, and the lake level would have dropped by about 10.6 m.
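To illustrate the flavor of a 0D (level-pool) outflow calculation like the one referenced above, the sketch below routes a lake through a fixed broad-crested-weir outlet. The stage-area relation, sill elevation, and breach width are invented placeholders, and the toy deliberately omits the outlet erosion that produces the roughly 18-day rise of the real hydrograph, so it only shows the mass-balance bookkeeping, not the study's reconstruction.

    def weir_q(h_lake, z_sill, width, c=1.7):
        """Broad-crested weir approximation (SI): Q = c * b * H**1.5, H = head over the sill."""
        head = max(h_lake - z_sill, 0.0)
        return c * width * head**1.5

    def lake_area(h):
        """Crude stage-area relation for the lake (m^2); purely illustrative hypsometry."""
        return 2.0e10 + 3.0e8 * (h - 1444.0)

    h, z_sill, width = 1552.0, 1444.0, 400.0   # stage, sill elevation, breach width (assumed, m)
    dt = 3600.0
    peak_q, drop = 0.0, 0.0
    for _ in range(90 * 24):                   # 90 days of hourly level-pool routing
        q = weir_q(h, z_sill, width)
        peak_q = max(peak_q, q)
        dh = q * dt / lake_area(h)             # mass balance: A(h) dh = -Q dt
        h -= dh
        drop += dh
    print(f"peak outflow ~ {peak_q:.2e} m^3/s; lake-level drop after 90 days ~ {drop:.1f} m")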
Trenberth, Kevin E.; Fasullo, John T.
2017-02-18
The Atlantic Meridional Overturning Circulation plays a major role in moving heat and carbon around in the ocean. A new estimate of ocean heat transports for 2000 through 2013 throughout the Atlantic is derived. Top-of-atmosphere radiation is combined with atmospheric reanalyses to estimate surface heat fluxes and combined with vertically integrated ocean heat content to estimate ocean heat transport divergence as a residual. Atlantic peak northward ocean heat transports average 1.18 ± 0.13 PW (1 sigma) at 15°N but vary considerably in latitude and time. Results agree well with observational estimates at 26.5°N from the RAPID array, but for 2004-2013 the meridional heat transport is 1.00 ± 0.11 PW versus 1.23 ± 0.11 PW for RAPID. In addition, these results have no hint of a trend, unlike the RAPID results. Finally, strong westerlies north of a meridian drive ocean currents and an ocean heat loss into the atmosphere that is exacerbated by a decrease in ocean heat transport northward.
STREAMFLOW LOSSES IN THE SANTA CRUZ RIVER, ARIZONA.
Aldridge, B.N.
1985-01-01
The discharge and volume of flow in a peak decrease as the peak moves through an 89-mile (143 km) reach of the Santa Cruz River. An average of three peaks per year flow the length of the reach. Of 17,500 acre-ft (21,600 dam^3) that entered the upstream end of the reach, 2,300 acre-ft (2,840 dam^3), or 13 percent of the inflow, left the reach as streamflow. The remainder was lost through infiltration. Losses in a reach of channel were estimated by relating losses to the discharge at the upstream end of the reach. Tributary inflow was estimated through the use of synthesized duration curves. Streamflow losses along mountain fronts were estimated through the use of an electric analog model and by relating losses shown by the model to the median altitude of the contributing area.
NASA Astrophysics Data System (ADS)
Wilkman, E.; Zona, D.; Tang, Y.; Gioli, B.; Lipson, D.; Oechel, W. C.
2017-12-01
The response of ecosystem respiration to warming in the Arctic is not well constrained, partly due to the presence of ice-wedge polygons in continuous permafrost areas. These formations lead to substantial variation in vegetation, soil moisture, water table, and active layer depth over the meter scale that can drive respiratory carbon loss. Accurate calculations of in-situ temperature sensitivities (Q10) are vital for the prediction of future Arctic emissions, and while the eddy covariance technique has commonly been used to determine the diurnal and seasonal patterns of net ecosystem exchange (NEE) of CO2, the lack of suitable dark periods in the Arctic summer has limited our ability to estimate and interpret ecosystem respiration. To improve our understanding of and define controls on ecosystem respiration, we therefore directly compared CO2 fluxes measured from automated chambers across the main local polygonised landscape forms (high and low centers, polygon rims, and polygon troughs) to estimates from an adjacent eddy covariance tower. Low-centered polygons and polygon troughs had the greatest cumulative respiration rates, and ecosystem type appeared to be the most important explanatory variable for these rates. Despite the differences in absolute respiration rates and the contrasting water levels and vegetation types, Q10 was surprisingly similar across all microtopographic features. Conversely, Q10 varied temporally, with higher values during the early and late summer and lower values during the peak growing season. Finally, good agreement was found between chamber- and tower-based Q10 estimates during the peak growing season. Overall, this study suggests that it is possible to simplify estimates of the temperature sensitivity of respiration across heterogeneous landscapes, but that seasonal changes in Q10 should be incorporated into current and future model simulations.
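The Q10 temperature sensitivity discussed above is conventionally defined by R = R_ref * Q10**((T - T_ref)/10) and is often fitted by linear regression on log-transformed fluxes. A minimal sketch with made-up chamber fluxes and soil temperatures follows; it is illustrative only and not the study's fitting procedure.

    import numpy as np

    def fit_q10(temps_c, resp, t_ref=10.0):
        """Fit R = R_ref * Q10**((T - t_ref)/10) by linear regression on log(R)."""
        temps_c, resp = np.asarray(temps_c, float), np.asarray(resp, float)
        slope, intercept = np.polyfit((temps_c - t_ref) / 10.0, np.log(resp), 1)
        return np.exp(slope), np.exp(intercept)      # Q10, respiration at t_ref

    # Hypothetical chamber fluxes (umol CO2 m^-2 s^-1) and soil temperatures (deg C)
    temps = [2.0, 4.5, 6.0, 8.5, 11.0, 13.5]
    fluxes = [0.42, 0.55, 0.61, 0.78, 0.95, 1.18]
    q10, r_ref = fit_q10(temps, fluxes)
    print(f"Q10 ~ {q10:.2f}, respiration at 10 C ~ {r_ref:.2f}")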
NASA Astrophysics Data System (ADS)
Allen, D. M.; Henry, C.; Demon, H.; Kirste, D. M.; Huang, J.
2011-12-01
Sustainable management of groundwater resources, particularly in water stressed regions, requires estimates of groundwater recharge. This study in southern Mali, Africa compares approaches for estimating groundwater recharge and understanding recharge processes using a variety of methods encompassing groundwater level-climate data analysis, GRACE satellite data analysis, and recharge modelling for current and future climate conditions. Time series data for GRACE (2002-2006) and observed groundwater level data (1982-2001) do not overlap. To overcome this problem, GRACE time series data were appended to the observed historical time series data, and the records compared. Terrestrial water storage anomalies from GRACE were corrected for soil moisture (SM) using the Global Land Data Assimilation System (GLDAS) to obtain monthly groundwater storage anomalies (GRACE-SM), and monthly recharge estimates. Historical groundwater storage anomalies and recharge were determined using the water table fluctuation method using observation data from 15 wells. Historical annual recharge averaged 145.0 mm (or 15.9% of annual rainfall) and compared favourably with the GRACE-SM estimate of 149.7 mm (or 14.8% of annual rainfall). Both records show lows and peaks in May and September, respectively; however, the peak for the GRACE-SM data is shifted later in the year to November, suggesting that the GLDAS may poorly predict the timing of soil water storage in this region. Recharge simulation results show good agreement between the timing and magnitude of the mean monthly simulated recharge and the regional mean monthly storage anomaly hydrograph generated from all monitoring wells. Under future climate conditions, annual recharge is projected to decrease by 8% for areas with luvisols and by 11% for areas with nitosols. Given this potential reduction in groundwater recharge, there may be added stress placed on an already stressed resource.
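A simplified version of the water-table fluctuation calculation mentioned above is sketched below: recharge is approximated as specific yield times the sum of water-level rises over a year. The specific yield and the monthly heads are assumptions for illustration (the full method uses the rise above an extrapolated recession, which this sketch omits), so the numbers are not comparable to the Mali estimates.

    import numpy as np

    def wtf_recharge(water_levels_m, specific_yield=0.02):
        """Water-table fluctuation method, simplified: recharge ~ Sy * sum of head rises (mm)."""
        levels = np.asarray(water_levels_m, float)
        rises = np.diff(levels)
        return specific_yield * rises[rises > 0].sum() * 1000.0

    # Hypothetical monthly water-table heads above a local datum (m)
    heads = [10.0, 9.8, 9.7, 9.9, 10.5, 11.8, 12.6, 12.4, 12.0, 11.3, 10.8, 10.4]
    print(f"annual recharge ~ {wtf_recharge(heads):.0f} mm")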
Development of magnitude scaling relationship for earthquake early warning system in South Korea
NASA Astrophysics Data System (ADS)
Sheen, D.
2011-12-01
Seismicity in South Korea is low, and the magnitudes of recent earthquakes are mostly less than 4.0. However, historical earthquakes of South Korea reveal that many damaging earthquakes had occurred in the Korean Peninsula. To mitigate potential seismic hazard in the Korean Peninsula, an earthquake early warning (EEW) system is being installed and will be operated in South Korea in the near future. In order to deliver early warnings successfully, it is very important to develop stable magnitude scaling relationships. In this study, two empirical magnitude relationships are developed from 350 events ranging in magnitude from 2.0 to 5.0 recorded by the KMA and the KIGAM. 1606 vertical-component seismograms whose epicentral distances are within 100 km are chosen. The peak amplitude and the maximum predominant period of the initial P wave are used for deriving the magnitude relationships. The peak displacement of seismograms recorded on broadband seismometers shows less scatter than the peak velocity. For accelerograms, the scatter of the peak displacement and that of the peak velocity are similar to each other. The peak displacement of seismograms differs from that of accelerograms, which means that two different magnitude relationships, one for each type of data, should be developed. The maximum predominant period of the initial P wave is estimated after applying two low-pass filters, 3 Hz and 10 Hz, and the 10 Hz low-pass filter yields better estimates than the 3 Hz filter. It is found that most of the peak amplitudes and the maximum predominant periods can be estimated within 1 s after triggering.
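One widely used recursive definition of the maximum predominant period of the initial P wave (in the style of Allen and Kanamori's tau_p_max) is sketched below on a synthetic signal. Whether this is exactly the estimator used in the study is not stated in the abstract, so treat the sketch as illustrative of the general technique, with the smoothing constant and window length as assumptions.

    import numpy as np

    def tau_p_max(velocity, dt, alpha=0.99, window_s=3.0):
        """Recursive predominant-period estimate over the first window_s seconds of P wave.

        tau_p(i) = 2*pi*sqrt(X_i / D_i), where X smooths the squared velocity and
        D smooths the squared velocity derivative.
        """
        n = int(window_s / dt)
        v = np.asarray(velocity[:n], float)
        dv = np.gradient(v, dt)
        x = d = 0.0
        tau = []
        for vi, dvi in zip(v, dv):
            x = alpha * x + vi**2
            d = alpha * d + dvi**2
            tau.append(2.0 * np.pi * np.sqrt(x / d) if d > 0 else 0.0)
        return max(tau)

    # Synthetic 5 Hz P-wave onset sampled at 100 Hz (purely illustrative)
    dt = 0.01
    t = np.arange(0, 3, dt)
    vel = np.sin(2 * np.pi * 5.0 * t) * np.linspace(0, 1, t.size)
    print(f"tau_p_max ~ {tau_p_max(vel, dt):.2f} s (roughly 0.2 s for a 5 Hz signal)")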
NASA Astrophysics Data System (ADS)
Yi, Wen; Xue, Xianghui; Reid, Iain M.; Younger, Joel P.; Chen, Jinsong; Chen, Tingdi; Li, Na
2018-04-01
Neutral mesospheric densities at a low latitude have been derived from April 2011 to December 2014 using data from the Kunming meteor radar in China (25.6°N, 103.8°E). The daily mean density at 90 km was estimated using the ambipolar diffusion coefficients from the meteor radar and temperatures from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument. The seasonal variations of the meteor radar-derived density are consistent with the density from the Mass Spectrometer and Incoherent Scatter (MSIS) model and show a dominant annual variation, with a maximum during winter and a minimum during summer. A simple linear model was used to separate the effects of atmospheric density and the meteor velocity on the meteor radar peak detection height. We find that a 1 km/s difference in the vertical meteor velocity yields a change of approximately 0.42 km in peak height. The strong correlation between the meteor radar density and the velocity-corrected peak height indicates that the meteor radar density estimates accurately reflect changes in neutral atmospheric density and that meteor peak detection heights, when adjusted for meteoroid velocity, can serve as a convenient tool for measuring density variations around the mesopause. A comparison of the ambipolar diffusion coefficient and peak height observed simultaneously by two co-located meteor radars indicates that the relative errors of the daily mean ambipolar diffusion coefficient and peak height should be less than 5% and 6%, respectively, and that the absolute error of the peak height is less than 0.2 km.
R Peak Detection Method Using Wavelet Transform and Modified Shannon Energy Envelope
2017-01-01
Rapid automatic detection of the fiducial points—namely, the P wave, QRS complex, and T wave—is necessary for early detection of cardiovascular diseases (CVDs). In this paper, we present an R peak detection method using the wavelet transform (WT) and a modified Shannon energy envelope (SEE) for rapid ECG analysis. The proposed WTSEE algorithm performs a wavelet transform to reduce the size and noise of ECG signals and creates SEE after first-order differentiation and amplitude normalization. Subsequently, the peak energy envelope (PEE) is extracted from the SEE. Then, R peaks are estimated from the PEE, and the estimated peaks are adjusted from the input ECG. Finally, the algorithm generates the final R features by validating R-R intervals and updating the extracted R peaks. The proposed R peak detection method was validated using 48 first-channel ECG records of the MIT-BIH arrhythmia database with a sensitivity of 99.93%, positive predictability of 99.91%, detection error rate of 0.16%, and accuracy of 99.84%. Considering the high detection accuracy and fast processing speed due to the wavelet transform applied before calculating SEE, the proposed method is highly effective for real-time applications in early detection of CVDs. PMID:29065613
R Peak Detection Method Using Wavelet Transform and Modified Shannon Energy Envelope.
Park, Jeong-Seon; Lee, Sang-Woong; Park, Unsang
2017-01-01
Rapid automatic detection of the fiducial points-namely, the P wave, QRS complex, and T wave-is necessary for early detection of cardiovascular diseases (CVDs). In this paper, we present an R peak detection method using the wavelet transform (WT) and a modified Shannon energy envelope (SEE) for rapid ECG analysis. The proposed WTSEE algorithm performs a wavelet transform to reduce the size and noise of ECG signals and creates SEE after first-order differentiation and amplitude normalization. Subsequently, the peak energy envelope (PEE) is extracted from the SEE. Then, R peaks are estimated from the PEE, and the estimated peaks are adjusted from the input ECG. Finally, the algorithm generates the final R features by validating R-R intervals and updating the extracted R peaks. The proposed R peak detection method was validated using 48 first-channel ECG records of the MIT-BIH arrhythmia database with a sensitivity of 99.93%, positive predictability of 99.91%, detection error rate of 0.16%, and accuracy of 99.84%. Considering the high detection accuracy and fast processing speed due to the wavelet transform applied before calculating SEE, the proposed method is highly effective for real-time applications in early detection of CVDs.
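As a rough illustration of the energy-envelope stage described above, the Python sketch below detects R peaks in a synthetic ECG using first-order differentiation, a Shannon energy envelope, and peak picking. It omits the wavelet stage, and the signal, window lengths, and thresholds are assumptions for illustration, not the WTSEE parameters.

```python
# Simplified R-peak detection via a Shannon energy envelope (no wavelet stage).
import numpy as np
from scipy.signal import find_peaks

fs = 360.0                                        # sampling rate (Hz), assumed
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic ECG: narrow Gaussian "R waves" every 0.8 s plus mild noise.
ecg = sum(np.exp(-((t - r) ** 2) / (2 * 0.008 ** 2)) for r in np.arange(0.5, 10, 0.8))
ecg += 0.01 * np.random.default_rng(1).normal(size=t.size)

d = np.diff(ecg, prepend=ecg[0])                  # first-order differentiation
d /= np.max(np.abs(d))                            # amplitude normalization
se = -d ** 2 * np.log(d ** 2 + 1e-12)             # Shannon energy
win = int(0.1 * fs)
envelope = np.convolve(se, np.ones(win) / win, mode="same")   # smoothed envelope

env_peaks, _ = find_peaks(envelope, distance=int(0.3 * fs),
                          height=0.5 * envelope.max())
# Adjust each detection to the nearest local maximum of the raw ECG.
half = int(0.05 * fs)
r_peaks = [max(p - half, 0) + np.argmax(ecg[max(p - half, 0):p + half])
           for p in env_peaks]
print("R peaks at t =", np.round(t[r_peaks], 2), "s")
```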
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruth, M.; Pratt, A.; Lunacek, M.
The combination of distributed energy resources (DER) and retail tariff structures to provide benefits to both utility consumers and the utilities is not well understood. To improve understanding, an Integrated Energy System Model (IESM) is being developed to simulate the physical and economic aspects of DER technologies, the buildings where they reside, and feeders servicing them. The IESM was used to simulate 20 houses with home energy management systems on a single feeder under a time-of-use (TOU) tariff to estimate economic and physical impacts on both the households and the distribution utilities. Home energy management systems (HEMS) reduce consumers' electric bills by precooling houses in the hours before peak electricity pricing. Utilization of HEMS reduces peak loads during high-price hours but shifts them to hours with off-peak and shoulder prices, resulting in a higher peak load.
Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains depending on various peak features from several models. However, there is no study that establishes the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameters estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate from the results in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate as compared to standard PSO, as it produces a low-variance model. PMID:25243236
Nanomolar Trace Metal Analysis of Copper at Gold Microband Arrays
NASA Astrophysics Data System (ADS)
Wahl, A.; Dawson, K.; Sassiat, N.; Quinn, A. J.; O'Riordan, A.
2011-08-01
This paper describes the fabrication and electrochemical characterization of gold microband electrode arrays designated as a highly sensitive sensor for trace metal detection of copper in drinking water samples. Gold microband electrodes were routinely fabricated by standard photolithographic methods. Electrochemical characterization was conducted in 0.1 M H2SO4, and the electrodes were found to display characteristic gold oxide formation and reduction peaks. The advantages of gold microband electrodes as trace metal sensors over currently used methods were investigated by employing underpotential deposition anodic stripping voltammetry (UPD-ASV) at nanomolar Cu2+ concentrations. Linear correlations were observed for increasing Cu2+ concentrations, from which the concentration of an unknown drinking water sample was estimated. The estimate of the unknown trace copper concentration in drinking water was in good agreement with expected values.
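The calibration step described above amounts to fitting a line to peak stripping current versus standard concentration and inverting it for the unknown. A hedged Python sketch with purely illustrative numbers:

```python
# External calibration and inversion for an unknown Cu2+ concentration.
# Standard concentrations and peak currents below are illustrative only.
import numpy as np

conc_nM = np.array([50.0, 100.0, 200.0, 400.0, 800.0])     # standards (nM)
peak_uA = np.array([0.21, 0.40, 0.83, 1.62, 3.27])          # stripping peak currents (uA)

slope, intercept = np.polyfit(conc_nM, peak_uA, 1)
unknown_peak_uA = 1.10                                       # measured for the unknown sample
unknown_nM = (unknown_peak_uA - intercept) / slope
print(f"Estimated Cu2+ concentration: {unknown_nM:.0f} nM")
```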
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Turney, Lovina A.; Veilleux, Andrea G.
2014-01-01
The regional regression equations were integrated into the U.S. Geological Survey’s StreamStats program. The StreamStats program is a national map-based web application that allows the public to easily access published flood frequency and basin characteristic statistics. The interactive web application allows a user to select a point within a watershed (gaged or ungaged) and retrieve flood-frequency estimates derived from the current regional regression equations and geographic information system data within the selected basin. StreamStats provides users with an efficient and accurate means for retrieving the most up to date flood frequency and basin characteristic data. StreamStats is intended to provide consistent statistics, minimize user error, and reduce the need for large datasets and costly geographic information system software.
Techniques for estimating flood-peak discharges from urban basins in Missouri
Becker, L.D.
1986-01-01
Techniques are defined for estimating the magnitude and frequency of future flood peak discharges of rainfall-induced runoff from small urban basins in Missouri. These techniques were developed from an initial analysis of flood records of 96 gaged sites in Missouri and adjacent states. Final regression equations are based on a balanced, representative sampling of 37 gaged sites in Missouri. This sample included 9 statewide urban study sites, 18 urban sites in St. Louis County, and 10 predominantly rural sites statewide. Short-term records were extended on the basis of long-term climatic records and use of a rainfall-runoff model. Linear least-squares regression analyses were used with log-transformed variables to relate flood magnitudes of selected recurrence intervals (dependent variables) to selected drainage basin indexes (independent variables). For gaged urban study sites within the State, the flood peak estimates are from the frequency curves defined from the synthesized long-term discharge records. Flood frequency estimates are made for ungaged sites by using regression equations that require determination of the drainage basin size and either the percentage of impervious area or a basin development factor. Alternative sets of equations are given for the 2-, 5-, 10-, 25-, 50-, and 100-yr recurrence interval floods. The average standard errors of estimate range from about 33% for the 2-yr flood to 26% for the 100-yr flood. The techniques for estimation are applicable to flood flows that are not significantly affected by storage caused by manmade activities. Flood peak discharge estimating equations are considered applicable for sites on basins draining approximately 0.25 to 40 sq mi. (Author's abstract)
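The log-linear regression framework described above can be illustrated compactly. The sketch below fits log-transformed flood quantiles to drainage area and a basin development factor using synthetic data; the coefficients it produces are not the published Missouri equations.

```python
# Illustrative flood-frequency regression: log10(Q_T) = b0 + b1*log10(A) + b2*log10(BDF+1).
# All data and coefficients are synthetic.
import numpy as np

rng = np.random.default_rng(2)
area_sqmi = rng.uniform(0.25, 40.0, 37)          # drainage area (sq mi)
bdf = rng.integers(0, 12, 37)                    # basin development factor (0-12)
log_q100 = 2.0 + 0.7 * np.log10(area_sqmi) + 0.3 * np.log10(bdf + 1) \
           + rng.normal(0.0, 0.1, 37)            # synthetic 100-yr quantiles

X = np.column_stack([np.ones(37), np.log10(area_sqmi), np.log10(bdf + 1)])
b = np.linalg.lstsq(X, log_q100, rcond=None)[0]
print("Fitted coefficients (b0, b1, b2):", np.round(b, 2))

# Apply to a hypothetical ungaged site of 5 sq mi with BDF = 6.
q100 = 10 ** (b @ np.array([1.0, np.log10(5.0), np.log10(7.0)]))
print(f"Estimated 100-yr peak: {q100:.0f} cfs")
```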
Hubbert's Peak: A Physicist's View
NASA Astrophysics Data System (ADS)
McDonald, Richard
2011-11-01
Oil and its by-products, as used in manufacturing, agriculture, and transportation, are the lifeblood of today's 7 billion-person population and our $65 trillion world economy. Despite this importance, estimates of future oil production seem dominated by wishful thinking rather than quantitative analysis. Better studies are needed. In 1956, Dr. M. King Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Thus, the peak of oil production is referred to as ``Hubbert's Peak.'' Prof. Al Bartlett extended this work in publications and lectures on population and oil. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. This paper extends this line of work to include analyses of individual countries, inclusion of multiple Gaussian peaks, and analysis of reserves data. While this is not strictly a predictive theory, we will demonstrate a ``closed'' story connecting production, oil-in-place, and reserves. This gives us the ``most likely'' estimate of future oil availability. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
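For readers unfamiliar with the underlying model, a Hubbert analysis treats cumulative production as a logistic curve, so annual production follows a bell-shaped peak. The Python sketch below uses hypothetical parameters purely to show the shape of the calculation; none of the numbers are from the talk.

```python
# Hubbert (logistic) production curve with assumed parameters.
import numpy as np

urr = 2000.0      # ultimately recoverable resource (Gb), assumed
k = 0.06          # logistic growth rate (1/yr), assumed
t_peak = 2005.0   # assumed peak year

years = np.arange(1900, 2101)
cumulative = urr / (1.0 + np.exp(-k * (years - t_peak)))      # cumulative production
production = k * cumulative * (1.0 - cumulative / urr)        # annual production (Gb/yr)

print(f"Peak production {production.max():.1f} Gb/yr in {years[production.argmax()]}")
print(f"Reserves remaining in 2025: {urr - cumulative[years == 2025][0]:.0f} Gb")
```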
Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H
2016-04-11
Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of moment/force time series and the intraclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. Copyright © 2015 Elsevier Ltd. All rights reserved.
Derrick, Timothy R; Edwards, W Brent; Fellin, Rebecca E; Seay, Joseph F
2016-02-08
The purpose of this research was to utilize a series of models to estimate the stress in a cross section of the tibia, located 62% from the proximal end, during walking. Twenty-eight male, active duty soldiers walked on an instrumented treadmill while external force data and kinematics were recorded. A rigid body model was used to estimate joint moments and reaction forces. A musculoskeletal model was used to gather muscle length, muscle velocity, moment arm and orientation information. Optimization procedures were used to estimate muscle forces and finally, internal bone forces and moments were applied to an inhomogeneous, subject-specific bone model obtained from CT scans to estimate stress in the bone cross section. Validity was assessed by comparison to stresses calculated from strain gage data in the literature and sensitivity was investigated using two simplified versions of the bone model-a homogeneous model and an ellipse approximation. Peak compressive stress occurred on the posterior aspect of the cross section (-47.5 ± 14.9 MPa). Peak tensile stress occurred on the anterior aspect (27.0 ± 11.7 MPa) while the location of peak shear was variable between subjects (7.2 ± 2.4 MPa). Peak compressive, tensile and shear stresses were within 0.52 MPa, 0.36 MPa and 3.02 MPa respectively of those calculated from the converted strain gage data. Peak values from the inhomogeneous model of the bone correlated well with those from the homogeneous model (normal: 0.99; shear: 0.94) as did the normal ellipse model (r=0.89-0.96). However, the relationship between shear stress in the inhomogeneous model and ellipse model was less accurate (r=0.64). The procedures detailed in this paper provide a non-invasive and relatively quick method of estimating cross sectional stress that holds promise for assessing injury and osteogenic stimulus in bone during normal physical activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Annual peak discharges from small drainage areas in Montana through September 1976
Johnson, M.V.; Omang, R.J.; Hull, J.A.
1977-01-01
Annual peak discharge from small drainage areas is tabulated for 336 sites in Montana. The 1976 additions included data collected at 206 sites. The program, which investigates the magnitude and frequency of floods from small drainage areas in Montana, was begun July 1, 1955. Originally 45 crest-stage gaging stations were established. The purpose of the program is to collect sufficient peak-flow data, which through analysis could provide methods for estimating the magnitude and frequency of floods at any point in Montana. The ultimate objective is to provide methods for estimating the 100-year flood with the reliability needed for road design. (Woodard-USGS)
NASA Astrophysics Data System (ADS)
van Lien, René; Schutte, Nienke M.; Meijer, Jan H.; de Geus, Eco J. C.
2013-04-01
The validity of estimating the PEP from a fixed value for the Q-wave onset to the R-wave peak (QR) interval and from the R-wave peak to the dZ/dt-min peak (ISTI) interval is evaluated. Ninety-one subjects participated in a laboratory experiment in which a variety of physical and mental stressors were presented, and 31 further subjects participated in a sequence of structured ambulatory activities in which large variation in posture and physical activity was induced. PEP, QR interval, and ISTI were scored. Across the diverse laboratory and ambulatory conditions the QR interval could be approximated by a fixed interval of 40 ms, but 95% confidence intervals were large (25 to 54 ms). Multilevel analysis showed that 79% to 81% of the within- and between-subject variation in the RB interval could be predicted by the ISTI. However, the optimal intercept and slope values varied significantly across subjects and study settings. Bland-Altman plots revealed a large discrepancy between the estimated PEP and the actual PEP based on the Q-wave onset and B-point. It is concluded that the estimated PEP can be a useful tool but cannot replace the actual PEP to index cardiac sympathetic control.
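The approximation evaluated above combines a fixed QR interval with an R-to-B interval predicted linearly from the ISTI. A minimal Python sketch follows, in which the fixed 40 ms value comes from the text but the regression intercept and slope are hypothetical placeholders, not the values estimated in the study.

```python
# Estimated PEP = fixed QR interval + (intercept + slope * ISTI).
# Intercept and slope are hypothetical; ISTI values are illustrative.
import numpy as np

QR_FIXED_MS = 40.0        # fixed Q-wave onset to R-peak interval (from the text)
INTERCEPT_MS = -20.0      # hypothetical regression intercept (ms)
SLOPE = 0.7               # hypothetical regression slope (-)

isti_ms = np.array([120.0, 135.0, 150.0])      # measured R-peak to dZ/dt-min intervals
pep_est_ms = QR_FIXED_MS + INTERCEPT_MS + SLOPE * isti_ms
print("Estimated PEP (ms):", pep_est_ms)
```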
Strobbe, Gregor; Carrette, Evelien; López, José David; Montes Restrepo, Victoria; Van Roost, Dirk; Meurs, Alfred; Vonck, Kristl; Boon, Paul; Vandenberghe, Stefaan; van Mierlo, Pieter
2016-01-01
Electrical source imaging of interictal spikes observed in EEG recordings of patients with refractory epilepsy provides useful information to localize the epileptogenic focus during the presurgical evaluation. However, the selection of the time points or time epochs of the spikes in order to estimate the origin of the activity remains a challenge. In this study, we consider a Bayesian EEG source imaging technique for distributed sources, i.e. the multiple volumetric sparse priors (MSVP) approach. The approach allows estimation of the time courses of the intensity of the sources corresponding with a specific time epoch of the spike. Based on presurgical averaged interictal spikes in six patients who were successfully treated with surgery, we estimated the time courses of the source intensities for three different time epochs: (i) an epoch starting 50 ms before the spike peak and ending at 50% of the spike peak during the rising phase of the spike, (ii) an epoch starting 50 ms before the spike peak and ending at the spike peak and (iii) an epoch containing the full spike time period starting 50 ms before the spike peak and ending 230 ms after the spike peak. To identify the primary source of the spike activity, the source with the maximum energy from 50 ms before the spike peak till 50% of the spike peak was subsequently selected for each of the time windows. For comparison, the activity at the spike peaks and at 50% of the peaks was localized using the LORETA inversion technique and an ECD approach. Both patient-specific spherical forward models and patient-specific 5-layered finite difference models were considered to evaluate the influence of the forward model. Based on the resected zones in each of the patients, extracted from post-operative MR images, we compared the distances to the resection border of the estimated activity. Using the spherical models, the distances to the resection border for the MSVP approach and each of the different time epochs were in the same range as the LORETA and ECD techniques. We found distances smaller than 23 mm, with robust results for all the patients. For the finite difference models, we found that the distances to the resection border for the MSVP inversions of the full spike time epochs were generally smaller compared to the MSVP inversions of the time epochs before the spike peak. The results also suggest that the inversions using the finite difference models resulted in slightly smaller distances to the resection border compared to the spherical models. The results we obtained are promising because the MSVP approach allows study of the network of the estimated source intensities and characterization of the spatial extent of the underlying sources. PMID:26958464
Correlated observations of three triggered lightning flashes
NASA Technical Reports Server (NTRS)
Idone, V. P.; Orville, R. E.; Hubert, P.; Barret, L.; Eybert-Berard, A.
1984-01-01
Three triggered lightning flashes, initiated during the Thunderstorm Research International Program (1981) at Langmuir Laboratory, New Mexico, are examined on the basis of three-dimensional return stroke propagation speeds and peak currents. Nonlinear relationships result between return stroke propagation speed and stroke peak current for 56 strokes, and between return stroke propagation speed and dart leader propagation speed for 32 strokes. Calculated linear correlation coefficients include dart leader propagation speed and ensuing return stroke peak current (32 strokes; r = 0.84); and stroke peak current and interstroke interval (69 strokes; r = 0.57). Earlier natural lightning data do not concur with the weak positive correlation between dart leader propagation speed and interstroke interval. Therefore, application of triggered lightning results to natural lightning phenomena must be made with certain caveats. Mean values are included for the three-dimensional return stroke propagation speed and for the three-dimensional dart leader propagation speed.
On the role of modeling choices in estimation of cerebral aneurysm wall tension.
Ramachandran, Manasi; Laakso, Aki; Harbaugh, Robert E; Raghavan, Madhavan L
2012-11-15
To assess various approaches to estimating pressure-induced wall tension in intracranial aneurysms (IA) and their effect on the stratification of subjects in a study population. Three-dimensional models of 26 IAs (9 ruptured and 17 unruptured) were segmented from Computed Tomography Angiography (CTA) images. Wall tension distributions in these patient-specific geometric models were estimated based on various approaches such as differences in morphological detail utilized or modeling choices made. For all subjects in the study population, the peak wall tension was estimated using all investigated approaches and was compared to a reference approach: nonlinear finite element (FE) analysis using the Fung anisotropic model with regionally varying material fiber directions. Comparisons between approaches were focused toward assessing the similarity in stratification of IAs within the population based on peak wall tension. The stratification of IAs by wall tension deviated to some extent from the reference approach as less geometric detail was incorporated. Interestingly, the size of the cerebral aneurysm as captured by a single size measure was the predominant determinant of peak wall tension-based stratification. Within FE approaches, simplifications to isotropy, material linearity and geometric linearity caused a gradual deviation from the reference estimates, but it was minimal and resulted in little to no impact on stratifications of IAs. Differences in modeling choices made without patient-specificity in parameters of such models had little impact on tension-based IA stratification in this population. Increasing morphological detail did impact the estimated peak wall tension, but size was the predominant determinant. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hortness, J.E.
2004-01-01
The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical due to difficult access, the inherent danger of making measurements during flood events, and the timing often associated with flood events. Thus, many peak discharge values often are calculated after the fact by use of indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time demanding; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files to be input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
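At its core, a slope-area estimate applies Manning's equation to surveyed geometry and the water-surface slope defined by high-water marks. The single-cross-section Python sketch below shows that idea with hypothetical numbers; the actual SAC/HEC-RAS computation uses several cross sections and energy-balance corrections.

```python
# Simplified single-section slope-area sketch using Manning's equation
# (U.S. customary units). All surveyed quantities are hypothetical.
area_sqft = 850.0        # flow area at the high-water marks
wetted_perim_ft = 160.0  # wetted perimeter
slope = 0.0021           # water-surface slope from the high-water-mark profile
n = 0.035                # estimated Manning roughness coefficient

hydraulic_radius = area_sqft / wetted_perim_ft
conveyance = (1.486 / n) * area_sqft * hydraulic_radius ** (2.0 / 3.0)
peak_q_cfs = conveyance * slope ** 0.5
print(f"Estimated peak discharge: {peak_q_cfs:.0f} cfs")
```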
Selection of patients for heart transplantation in the current era of heart failure therapy.
Butler, Javed; Khadim, Ghazanfar; Paul, Kimberly M; Davis, Stacy F; Kronenberg, Marvin W; Chomsky, Don B; Pierson, Richard N; Wilson, John R
2004-03-03
We sought to assess the relationship between survival, peak exercise oxygen consumption (VO2), and heart failure survival score (HFSS) in the current era of heart failure (HF) therapy. Based on predicted survival, HF patients with peak VO2 <14 ml/min/kg or medium- to high-risk HFSS are currently considered eligible for heart transplantation. However, these criteria were developed before the widespread use of beta-blockers, spironolactone, and defibrillators, interventions known to improve the survival of HF patients. Peak VO2 and HFSS were assessed in 320 patients followed from 1994 to 1997 (past era) and in 187 patients followed from 1999 to 2001 (current era). Outcomes were compared between these two groups of patients and those who underwent heart transplantation from 1993 to 2000. Survival in the past era was 78% at one year and 67% at two years, as compared with 88% and 79%, respectively, in the current era (both p < 0.01). One-year event-free survival (without urgent transplantation or left ventricular assist device) was improved in the current era, regardless of initial peak VO2: 64% vs. 48% for peak VO2 <10 ml/min/kg (p = 0.09), 81% vs. 70% for 10 to 14 ml/min/kg (p = 0.05), and 93% vs. 82% for >14 ml/min/kg (p = 0.04). Of the patients with peak VO2 of 10 to 14 ml/min/kg, 55% had low-risk HFSS and exhibited 88% one-year event-free survival. One-year survival after transplantation was 88%, which is similar to the 85% rate reported by the United Network for Organ Sharing for 1999 to 2000. Survival for HF patients in the current era has improved significantly, necessitating re-evaluation of the listing criteria for heart transplantation.
Evidence of negative leaders which precede fast rise ICC pulses of upward lightning
NASA Astrophysics Data System (ADS)
Yoshida, S.; Akita, M.; Morimoto, T.; Ushio, T.; Kawasaki, Z.; Wang, D.; Takagi, N.
2008-12-01
During the winter thunderstorm season in Japan, a lightning observation campaign was conducted using a VHF broadband digital interferometer (DITF), a capacitive antenna, and Rogowski coils to study the charge transfer mechanism associated with ICC pulses of upward lightning. All the detection systems recorded one upward negative lightning stroke hitting a lightning protection tower. The upward lightning consists of only the Initial Stage (IS) with one upward positive leader and six ICC pulses. The six ICC pulses are sub-classified clearly into two types according to current pulse shapes. The type 1 ICC pulses have a higher geometric mean (GM) current peak of 17 kA and a shorter GM 10-90% risetime of 8.9 μs, while the type 2 ICC pulses have a lower GM current peak of 0.34 kA and longer GM 10-90% risetime of 55 μs. The type 1 ICC pulses have the preceding negative leaders connecting to the channel of the continuing current, while the type 2 ICC pulses have no clear preceding negative leader. These negative leaders prior to the type 1 ICC pulses probably caused the current increases of the ICC pulses, which means that the negative leaders created the channels for the ICC pulses. The height of the space charge transferred by one of the type 1 ICC pulses was estimated to be at most about 700 m above sea level. This observation result is the first evidence to show explicitly the existence of the negative leaders prior to the fast rise ICC pulse. Furthermore, the result shows that space charge could exist at a low altitude such as 700 m above sea level. This fact is one of the reasons why upward lightning occurs even from rather low structures during the winter thunderstorm season in Japan.
Schwartz, D.P.; Joyner, W.B.; Stein, R.S.; Brown, R.D.; McGarr, A.F.; Hickman, S.H.; Bakun, W.H.
1996-01-01
Summary -- The U.S. Geological Survey was requested by the U.S. Department of the Interior to review the design values and the issue of reservoir-induced seismicity for a concrete gravity dam near the site of the previously-proposed Auburn Dam in the western foothills of the Sierra Nevada, central California. The dam is being planned as a flood-control-only dam with the possibility of conversion to a permanent water-storage facility. As a basis for planning studies the U.S. Army Corps of Engineers is using the same design values approved by the Secretary of the Interior in 1979 for the original Auburn Dam. These values were a maximum displacement of 9 inches on a fault intersecting the dam foundation, a maximum earthquake at the site of magnitude 6.5, a peak horizontal acceleration of 0.64 g, and a peak vertical acceleration of 0.39 g. In light of geological and seismological investigations conducted in the western Sierran foothills since 1979 and advances in the understanding of how earthquakes are caused and how faults behave, we have developed the following conclusions and recommendations: Maximum Displacement. Neither the pre-1979 nor the recent observations of faults in the Sierran foothills precisely define the maximum displacement per event on a fault intersecting the dam foundation. Available field data and our current understanding of surface faulting indicate a range of values for the maximum displacement. This may require the consideration of a design value larger than 9 inches. We recommend reevaluation of the design displacement using current seismic hazard methods that incorporate uncertainty into the estimate of this design value. Maximum Earthquake Magnitude. There are no data to indicate that a significant change is necessary in the use of an M 6.5 maximum earthquake to estimate design ground motions at the dam site. However, there is a basis for estimating a range of maximum magnitudes using recent field information and new statistical fault relations. We recommend reevaluating the maximum earthquake magnitude using current seismic hazard methodology. Design Ground Motions. A large number of strong-motion records have been acquired and significant advances in understanding of ground motion have been achieved since the original evaluations. The design value for peak horizontal acceleration (0.64 g) is larger than the median of one recent study and smaller than the median value of another. The value for peak vertical acceleration (0.39 g) is somewhat smaller than median values of two recent studies. We recommend a reevaluation of the design ground motions that takes into account new ground motion data with particular attention to rock sites at small source distances. Reservoir-Induced Seismicity. The potential for reservoir-induced seismicity must be considered for the Auburn Dam project. A reservoir-induced earthquake is not expected to be larger than the maximum naturally occurring earthquake. However, the probability of an earthquake may be enhanced by reservoir impoundment. A flood-control-only project may involve a lower probability of significant induced seismicity than a multipurpose water-storage dam. There is a need to better understand and quantify the likelihood of this hazard. A methodology should be developed to quantify the potential for reservoir-induced seismicity using seismicity data from the Sierran foothills, new worldwide observations of induced and triggered seismicity, and current understanding of the earthquake process.
Reevaluation of Design Parameters. The reevaluation of the maximum displacement, maximum magnitude earthquake, and design ground motions can be made using available field observations from the Sierran foothills, updated statistical relations for faulting and ground motions, and current computational seismic hazard methodologies that incorporate uncertainty into the analysis. The reevaluation does not require significant new geological field studies.
Briggs, Adam D M; Scarborough, Peter; Wolstenholme, Jane
2018-01-01
Healthcare interventions, and particularly those in public health, may affect multiple diseases and significantly prolong life. No consensus currently exists for how to estimate comparable healthcare costs across multiple diseases for use in health and public health cost-effectiveness models. We aim to describe a method for estimating comparable disease-specific English healthcare costs as well as future healthcare costs from diseases unrelated to those modelled. We use routine national datasets including programme budgeting data and cost curves from NHS England to estimate annual per person costs for diseases included in the PRIMEtime model as well as age- and sex-specific costs due to unrelated diseases. The 2013/14 annual cost to NHS England per prevalent case varied between £3,074 for pancreatic cancer and £314 for liver disease. Costs due to unrelated diseases increase with age except for a secondary peak at 30-34 years for women reflecting maternity resource use. The methodology described allows health and public health economic modellers to estimate comparable English healthcare costs for multiple diseases. This facilitates the direct comparison of different health and public health interventions, enabling better decision making.
The rod-driven a-wave of the dark-adapted mammalian electroretinogram.
Robson, John G; Frishman, Laura J
2014-03-01
The a-wave of the electroretinogram (ERG) reflects the response of photoreceptors to light, but what determines the exact waveform of the recorded voltage is not entirely understood. We have now simulated the trans-retinal voltage generated by the photocurrent of dark-adapted mammalian rods, using an electrical model based on the in vitro measurements of Hagins et al. (1970) and Arden (1976) in rat retinas. Our simulations indicate that in addition to the voltage produced by extracellular flow of photocurrent from rod outer to inner segments, a substantial fraction of the recorded a-wave is generated by current that flows in the outer nuclear layer (ONL) to hyperpolarize the rod axon and synaptic terminal. This current includes a transient capacitive component that contributes an initial negative "nose" to the trans-retinal voltage when the stimulus is strong. Recordings of the a-wave in various species, including the peak and initial recovery towards the baseline, are consistent with simulations showing an initial transient primarily related to capacitive currents in the ONL. Existence of these capacitive currents can explain why there is always a substantial residual transient a-wave when post-receptoral responses are pharmacologically inactivated in rodents and nonhuman primates, or severely genetically compromised in humans (e.g. complete congenital stationary night blindness) and nob mice. Our simulations and analysis of ERGs indicate that the timing of the leading edge and peak of dark-adapted a-waves evoked by strong stimuli could be used in a simple way to estimate rod sensitivity. Copyright © 2013 Elsevier Ltd. All rights reserved.
An experimental system for controlled exposure of biological samples to electrostatic discharges.
Marjanovič, Igor; Kotnik, Tadej
2013-12-01
Electrostatic discharges occur naturally as lightning strokes, and artificially in light sources and in materials processing. When an electrostatic discharge interacts with living matter, the basic physical effects can be accompanied by biophysical and biochemical phenomena, including cell excitation, electroporation, and electrofusion. To study these phenomena, we developed an experimental system that provides easy sample insertion and removal, protection from airborne particles, observability during the experiment, accurate discharge origin positioning, discharge delivery into the sample either through an electric arc with adjustable air gap width or through direct contact, and reliable electrical insulation where required. We tested the system by assessing irreversible electroporation of Escherichia coli bacteria (15 mm discharge arc, 100 A peak current, 0.1 μs zero-to-peak time, 0.2 μs peak-to-halving time), and gene electrotransfer into CHO cells (7 mm discharge arc, 14 A peak current, 0.5 μs zero-to-peak time, 1.0 μs peak-to-halving time). Exposures to natural lightning stroke can also be studied with this system, as due to radial current dissipation, the conditions achieved by a stroke at a particular distance from its entry are also achieved by an artificial discharge with electric current downscaled in magnitude, but similar in time course, correspondingly closer to its entry. © 2013.
NASA Astrophysics Data System (ADS)
Teraoka, Iwao; Yao, Haibei; Huiyi Luo, Natalie
2017-06-01
We employed a recently developed whispering gallery mode (WGM) dip sensor made of silica to obtain spectra for many resonance peaks in water and solutions of sucrose at different concentrations and thus having different refractive indices (RI). The apparent Q factor was estimated by fitting each peak profile in the busy resonance spectrum by a Lorentzian or a sum of Lorentzians. A plot of the Q factor as a function of the peak height for all the peaks analyzed indicates a straight line with a negative slope as the upper limit, for water and for each of the solutions. A coupling model for a resonator and a pair of fiber tapers to feed and pick up light, developed here, supports the presence of the upper limit. We also found that the round-trip attenuation of WGM was greater than that estimated from light absorption by water, and the difference increased with the concentration of sucrose.
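Fitting each resonance with a Lorentzian and reading off the apparent Q factor, as described above, can be sketched in a few lines. The Python example below fits a synthetic dip; the wavelength axis, depth, and linewidth are illustrative assumptions, not measurements from the study.

```python
# Fit a single synthetic resonance dip with a Lorentzian and estimate Q = x0 / FWHM.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(x, x0, fwhm, depth, base):
    return base - depth * (fwhm / 2) ** 2 / ((x - x0) ** 2 + (fwhm / 2) ** 2)

x = np.linspace(1549.95, 1550.05, 400)                        # wavelength axis (nm), assumed
data = lorentzian_dip(x, 1550.0, 0.004, 0.8, 1.0)
data += 0.02 * np.random.default_rng(3).normal(size=x.size)   # measurement noise

popt, _ = curve_fit(lorentzian_dip, x, data, p0=[1550.0, 0.005, 0.5, 1.0])
x0, fwhm = popt[0], abs(popt[1])
print(f"Apparent Q = x0 / FWHM = {x0 / fwhm:.2e}")
```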
Hubbert's Peak -- A Physicist's View
NASA Astrophysics Data System (ADS)
McDonald, Richard
2011-04-01
Oil, as used in agriculture and transportation, is the lifeblood of modern society. It is finite in quantity and will someday be exhausted. In 1956, Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Bartlett extended this work in publications and lectures on the finite nature of oil and its production peak and depletion. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. Central to these analyses are estimates of total ``oil in place'' obtained from engineering studies of oil reservoirs, as this quantity determines the area under Hubbert's Peak. Knowing the production history and the total oil in place allows us to make estimates of reserves, and therefore future oil availability. We will then examine reserves data for various countries, in particular OPEC countries, and see if these data tell us anything about the future availability of oil. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
On Lateral Viscosity Contrast in the Mantle and the Rheology of Low-Frequency Geodynamics
NASA Technical Reports Server (NTRS)
Ivins, Erik R.; Sammis, Charles G.
1995-01-01
Mantle-wide heterogeneity is largely controlled by deeply penetrating thermal convective currents. These thermal currents are likely to produce significant lateral variation in rheology, and this can profoundly influence overall material behaviour. How thermally related lateral viscosity variations impact models of glacio-isostatic and tidal deformation is largely unknown. An important step towards model improvement is to quantify, or bound, the actual viscosity variations that characterize the mantle. Simple scaling of viscosity to shear-wave velocity fluctuations yields map-views of long-wavelength viscosity variation. These give a general quantitative description and aid in estimating the depth dependence of rheological heterogeneity throughout the mantle. The upper mantle is probably characterized by two to four orders of magnitude variation (peak-to-peak). Discrepant time-scales for rebounding Holocene shorelines of Hudson Bay and southern Iceland are consistent with this characterization. Results are given in terms of a local average viscosity ratio, Δη̄_i, of volumetric concentration, φ_i. For the upper mantle deeper than 340 km, the following reasonable limits are estimated for Δη̄ ≈ 10^-2: 0.01 ≤ φ ≤ 0.15. A spectrum of ratios Δη̄_i < 0.1 at concentration level φ_i ≈ 10^-6 to 10^-1 in the lower mantle implies a spectrum of shorter time-scale deformational response modes for second-degree spherical harmonic deformations of the Earth. Although highly uncertain, this spectrum of spatial variation allows a purely Maxwellian viscoelastic rheology simultaneously to explain all solid tidal dispersion phenomena and long-term rebound-related mantle viscosity. Composite theory of multiphase viscoelastic media is used to demonstrate this effect.
Estimating Equivalency of Explosives Through A Thermochemical Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maienschein, J L
2002-07-08
The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy for an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
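The equivalency calculation itself reduces to a ratio of energies. The Python sketch below shows the arithmetic with placeholder values; the energies are not Cheetah output and the candidate explosive is hypothetical.

```python
# TNT equivalency as an energy ratio: detonation energy for peak pressure,
# heat of combustion for quasi-static pressure. Values below are illustrative.
detonation_energy_kj_per_g = {"TNT": 4.2, "CandidateHE": 5.9}      # assumed
heat_of_combustion_kj_per_g = {"TNT": 14.5, "CandidateHE": 12.0}   # assumed

peak_pressure_equiv = (detonation_energy_kj_per_g["CandidateHE"]
                       / detonation_energy_kj_per_g["TNT"])
quasi_static_equiv = (heat_of_combustion_kj_per_g["CandidateHE"]
                      / heat_of_combustion_kj_per_g["TNT"])
print(f"TNT equivalency (peak pressure): {peak_pressure_equiv:.2f}")
print(f"TNT equivalency (quasi-static pressure): {quasi_static_equiv:.2f}")
```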
METHOD OF PEAK CURRENT MEASUREMENT
Baker, G.E.
1959-01-20
The measurement and recording of peak electrical currents are described, and a method for utilizing the magnetic field of the current to erase a portion of an alternating constant-frequency, constant-amplitude signal from a magnetic medium, such as a magnetic tape, is presented. A portion of the flux from the current-carrying conductor is concentrated into a magnetic path of defined area on the tape. After the current has been recorded, the tape is played back. The amplitude of the signal from the portion of the tape immediately adjacent to the defined flux area and the amplitude of the signal from the portion of the tape within the area are compared with the amplitude of the signal from an unerased portion of the tape to determine the percentage of signal erasure, and thereby obtain the peak value of currents flowing in the conductor.
NASA Astrophysics Data System (ADS)
Lei, J.; Geng, Y.; Liu, K.; Zhu, W.; Zheng, Z.; Hu, H.
2017-12-01
In this paper, a pulsating direct-current air-water plasma jet, which can increase the production of •OH and decrease the temperature, is studied. The results show that the discharge mode changes within one cycle from a corona discharge with steep Trichel current pulses to a glow-like discharge. It is unknown whether the different discharge modes and the water ratio affect the transient production of excited O and •OH or the mechanism of plasma propagation, so a series of experiments is performed in this paper. The results show that the excited-state O emission and the discharge current reach their two peak values synchronously, and the O maximum appears at the time of the first peak current value, in corona mode. However, the behavior of the excited-state •OH is different: it increases to its maximum at the time of the second peak current value, in glow-like mode. In addition, the intensified charge-coupled device photographs show that the luminous intensity of the discharge zone at the first peak current value (corona mode) is stronger than at the second peak current value (glow-like mode), and the discharge area of the former is larger than that of the latter. Nevertheless, with increasing water ratio, the discharge-area relationship reverses. Additionally, the air plasma plume propagation depends on the gas flow; the initial propagation velocity decreases as the water ratio increases.
NASA Astrophysics Data System (ADS)
Mailyan, B. G.; Nag, A.; Murphy, M. J.; Briggs, M. S.; Dwyer, J. R.; Cramer, E.; Stanbro, M.; Roberts, O. J.; Rassoul, H.
2017-12-01
Electric and magnetic field signals in the radio frequency range associated with Terrestrial Gamma-ray Flashes (TGFs) have become important measurements for studying this high-energy atmospheric phenomenon. These signals can be used to geolocate the source of TGFs, but they also provide insights into the TGF production mechanism, and the relationship between particle fluxes and lightning. In this study, we analyze 32 TGFs detected by the Fermi Gamma-ray Burst Monitor (GBM) occurring in 2014-2016 in conjunction with data from the U.S. National Lightning Detection Network (NLDN). We examine the characteristics of magnetic field waveforms measured by NLDN sensors for 48 pulses occurring within 5 ms of the peak-time of the gamma-ray photon flux. The -3 dB bandwidth of the NLDN sensors is from about 400 Hz to 400 kHz. For 15 (out of 32) TGFs, the associated NLDN pulse occurred almost simultaneously with (that is, within 300 μs of) the TGF. It is possible that these near-simultaneous low frequency magnetic field pulses were produced by relativistic electron beams. The median time interval between the beginning of these near-simultaneous NLDN pulses and the peak-times of the TGF flux is 38 μs. Three out of 16 (about 19%) of these pulses had negative initial polarity. The absolute value of NLDN-estimated peak currents, which can be viewed as a quantity proportional to the peak magnetic radiation field of these pulses, ranges from 17 kA to 166 kA, with the median being 32 kA. Twelve pulses had peak currents less than 50 kA. Additionally, we will compare the characteristics of GBM-reported gamma-ray signatures of the two categories of TGFs, those with a near-simultaneous NLDN-detected pulse and those with no such pulse (but with other pulses detected by the NLDN occurring within 5 ms of the TGF). Also, one of the TGFs occurred within the coverage region of the Kennedy Space Center Lightning Mapping Array (LMA). We will examine in detail the LMA, NLDN, and NEXRAD radar data for this TGF.
Burns, Douglas A.; Smith, Martyn J.; Freehafer, Douglas A.
2015-12-31
The application uses predictions of future annual precipitation from five climate models and two future greenhouse gas emissions scenarios and provides results that are averaged over three future periods—2025 to 2049, 2050 to 2074, and 2075 to 2099. Results are presented in ensemble form as the mean, median, maximum, and minimum values among the five climate models for each greenhouse gas emissions scenario and period. These predictions of future annual precipitation are substituted into either the precipitation variable or a water balance equation for runoff to calculate potential future peak flows. This application is intended to be used only as an exploratory tool because (1) the regression equations on which the application is based have not been adequately tested outside the range of the current climate and (2) forecasting future precipitation with climate models and downscaling these results to a fine spatial resolution have a high degree of uncertainty. This report includes a discussion of the assumptions, uncertainties, and appropriate use of this exploratory application.
Simultaneous voltammetric determination of prednisone and prednisolone in human body fluids.
Goyal, Rajendra N; Bishnoi, Sunita
2009-08-15
A sensitive, rapid and reliable electrochemical method based on voltammetry at a single-wall carbon nanotube (SWNT) modified edge-plane pyrolytic graphite electrode (EPPGE) is proposed for the simultaneous determination of prednisolone and prednisone in human body fluids and pharmaceutical preparations. The electrochemical response of both drugs was evaluated by Osteryoung square-wave voltammetry (OSWV) in a phosphate buffer medium of pH 7.2. The modified electrode exhibited good electrocatalytic properties towards prednisone and prednisolone reduction, with peak potentials of approximately -1230 and -1332 mV, respectively. The concentration versus peak current plots were linear for both analytes in the range 0.01-100 μM, and the detection limits (3σ/slope) observed for prednisone and prednisolone were 0.45 × 10^-8 and 0.90 × 10^-8 M, respectively. The results of the quantitative estimation of prednisone and prednisolone in biological fluids were also compared with HPLC, and the results were in good agreement.
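The detection-limit convention cited above (3σ/slope) follows directly from the calibration line. A hedged Python sketch with illustrative numbers:

```python
# Linear calibration of peak current vs. concentration and LOD = 3*sigma/slope.
# The calibration points and blank noise level are illustrative, not the paper's data.
import numpy as np

conc_uM = np.array([0.01, 0.1, 1.0, 10.0, 100.0])     # standard concentrations (uM)
peak_uA = np.array([0.012, 0.11, 1.05, 10.4, 101.0])  # peak currents (uA)

slope, intercept = np.polyfit(conc_uM, peak_uA, 1)
sigma_blank = 0.003                                   # assumed std dev of blank peak current (uA)
lod_uM = 3.0 * sigma_blank / slope
print(f"Slope = {slope:.3f} uA/uM, LOD = {lod_uM * 1e3:.1f} nM")
```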
Short gamma-ray bursts at the dawn of the gravitational wave era
NASA Astrophysics Data System (ADS)
Ghirlanda, G.; Salafia, O. S.; Pescalli, A.; Ghisellini, G.; Salvaterra, R.; Chassande-Mottin, E.; Colpi, M.; Nappo, F.; D'Avanzo, P.; Melandri, A.; Bernardini, M. G.; Branchesi, M.; Campana, S.; Ciolfi, R.; Covino, S.; Götz, D.; Vergani, S. D.; Zennaro, M.; Tagliaferri, G.
2016-10-01
We derive the luminosity function φ(L) and redshift distribution Ψ(z) of short gamma-ray bursts (SGRBs) using all the available observer-frame constraints (i.e. peak flux, fluence, peak energy and duration distributions) of the large population of Fermi SGRBs and the rest-frame properties of a complete sample of SGRBs detected by Swift. We show that a steep φ(L) ∝ L^-α with α ≥ 2.0 is excluded if the full set of constraints is considered. We implement a Markov chain Monte Carlo method to derive the φ(L) and Ψ(z) functions assuming intrinsic Ep-Liso and Ep-Eiso correlations to hold or, alternatively, that the distributions of intrinsic peak energy, luminosity, and duration are independent. To make our results independent from assumptions on the progenitor (NS-NS binary mergers or other channels) and from uncertainties on the star formation history, we assume a parametric form for the redshift distribution of the population of SGRBs. We find that a relatively flat luminosity function with slope ~0.5 below a characteristic break luminosity ~3 × 10^52 erg s^-1 and a redshift distribution of SGRBs peaking at z ~ 1.5-2 satisfy all our constraints. These results also hold if no Ep-Liso and Ep-Eiso correlations are assumed and they do not depend on the choice of the minimum luminosity of the SGRB population. We estimate, within ~200 Mpc (i.e. the design aLIGO range for the detection of gravitational waves produced by NS-NS merger events), that there should be 0.007-0.03 SGRBs yr^-1 detectable as γ-ray events. Assuming current estimates of NS-NS merger rates and that all NS-NS mergers lead to a SGRB event, we derive a conservative estimate of the average opening angle of SGRBs ⟨θ_jet⟩ ~ 3°-6°. The luminosity function implies a prompt emission average luminosity ⟨L⟩ ~ 1.5 × 10^52 erg s^-1, higher by nearly two orders of magnitude than previous findings in the literature, which greatly enhances the chance of observing SGRB "orphan" afterglows. Effort should go in the direction of finding and identifying such orphan afterglows as counterparts of GW events.
Methods for estimating properties of hydrocarbons comprising asphaltenes based on their solubility
Schabron, John F.; Rovani, Jr., Joseph F.
2016-10-04
Disclosed herein is a method of estimating a property of a hydrocarbon comprising the steps of: preparing a liquid sample of a hydrocarbon, the hydrocarbon having asphaltene fractions therein; precipitating at least some of the asphaltenes of a hydrocarbon from the liquid sample with one or more precipitants in a chromatographic column; dissolving at least two of the different asphaltene fractions from the precipitated asphaltenes during a successive dissolution protocol; eluting the at least two different dissolved asphaltene fractions from the chromatographic column; monitoring the amount of the fractions eluted from the chromatographic column; using detected signals to calculate a percentage of a peak area for a first of the asphaltene fractions and a peak area for a second of the asphaltene fractions relative to the total peak areas, to determine a parameter that relates to the property of the hydrocarbon; and estimating the property of the hydrocarbon.
Accuracy of visual estimates of joint angle and angular velocity using criterion movements.
Morrison, Craig S; Knudson, Duane; Clayburn, Colby; Haywood, Philip
2005-06-01
A descriptive study was performed to document how accurately undergraduate physical education majors (22.8 +/- 2.4 yr old) visually estimate sagittal-plane elbow angle and the angular velocity of elbow flexion. Forty-two subjects rated videotape replays of 30 movements organized into three speeds of movement and two criterion elbow angles. Video images of the movements were analyzed with Peak Motus to measure actual values of elbow angles and peak angular velocity. Of the subjects, 85.7% had speed ratings significantly correlated with true peak elbow angular velocity in all three angular velocity conditions. Few (16.7%) subjects' ratings of elbow angle correlated significantly with actual angles. Analysis of the subjects with good ratings showed the accuracy of visual ratings was significantly related to speed, with decreasing accuracy for slower speeds of movement. The use of criterion movements did not improve the small percentage of novice observers who could accurately estimate body angles during movement.
Small Microbial Three-Electrode Cell Based Biosensor for Online Detection of Acute Water Toxicity.
Yu, Dengbin; Zhai, Junfeng; Liu, Changyu; Zhang, Xueping; Bai, Lu; Wang, Yizhe; Dong, Shaojun
2017-11-22
The monitoring of water toxicity is very important for estimating the safety of drinking water and the level of water pollution. Herein, a small microbial three-electrode cell (M3C) biosensor filled with polystyrene particles was proposed for online monitoring of acute water toxicity. The peak current of the biosensor, related to the performance of the bioanode, was regarded as the toxicity indicator, and thus the acute water toxicity could be determined in terms of inhibition ratio by comparing the peak current obtained with a water sample to that obtained with nontoxic standard water. The incorporation of polystyrene particles in the electrochemical cell not only reduced the volume of the samples used, but also improved the sensitivity of the biosensor. Experimental conditions including washing time with PBS and the concentration of sodium acetate solution were optimized. The stability of the M3C biosensor under optimal conditions was also investigated. The M3C biosensor was further examined with formaldehyde at concentrations of 0.01%, 0.03%, and 0.05% (v/v), and the corresponding inhibition ratios were 14.6%, 21.6%, and 36.4%, respectively. This work provides a new insight into the development of an online toxicity detector based on an M3C biosensor.
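The toxicity indicator described above is an inhibition ratio comparing the peak current for a sample against that for nontoxic standard water. A minimal sketch with illustrative currents:

```python
# Inhibition ratio from biosensor peak currents; the current values are illustrative.
peak_current_control_mA = 0.85    # nontoxic standard water
peak_current_sample_mA = 0.54     # test sample

inhibition_pct = 100.0 * (1.0 - peak_current_sample_mA / peak_current_control_mA)
print(f"Inhibition ratio: {inhibition_pct:.1f}%")
```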
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhale, Atul Prakash; Rao, K. S. R. Koteswara, E-mail: ksrkrao@physics.iisc.ernet.in
2014-07-15
The nature of the signal due to light beam induced current (LBIC) at the remote contacts is verified as a lateral photovoltage for non-uniformly illuminated planar p-n junction devices; simulation and experimental results are presented. The limitations imposed by the ohmic contacts are successfully overcome by the introduction of capacitively coupled remote contacts, which yield similar results without any significant loss in the estimated material and device parameters. It is observed that the LBIC measurements introduce artefacts such as a shift in peak position with increasing laser power. Simulation of the LBIC signal as a function of the characteristic length L_c of photo-generated carriers and for different beam diameters has reproduced the observed peak shifts, which are thus attributed to the finite size of the beam. Further, the idea of capacitively coupled contacts has been extended to contactless measurements using pressure contacts with oxidized aluminium electrodes. This technique avoids contamination-prone sample processing steps, which may introduce unintentional defects and contaminants into the material and devices under observation. Thus, we present the remote contact LBIC as a practically non-destructive tool for the evaluation of device parameters and welcome its use during fabrication steps.
Non-Gaussianity from self-ordering scalar fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Daniel G.; Instituto de Fisica Teorica UAM/CSIC, Universidad Autonoma de Madrid, 28049 Madrid; Caldwell, Robert R.
The Universe may harbor relics of the post-inflationary epoch in the form of a network of self-ordered scalar fields. Such fossils, while consistent with current cosmological data at trace levels, may leave too weak an imprint on the cosmic microwave background and the large-scale distribution of matter to allow for direct detection. The non-Gaussian statistics of the density perturbations induced by these fields, however, permit a direct means to probe for these relics. Here we calculate the bispectrum that arises in models of self-ordered scalar fields. We find a compact analytic expression for the bispectrum, evaluate it numerically, and provide a simple approximation that may be useful for data analysis. The bispectrum is largest for triangles that are aligned (have edges k1 ≈ 2k2 ≈ 2k3) as opposed to the local-model bispectrum, which peaks for squeezed triangles (k1 ≈ k2 ≫ k3), and the equilateral bispectrum, which peaks at k1 ≈ k2 ≈ k3. We estimate that this non-Gaussianity should be detectable by the Planck satellite if the contribution from self-ordering scalar fields to primordial perturbations is near the current upper limit.
A nerve stimulation method to selectively recruit smaller motor-units in rat skeletal muscle.
van Bolhuis, A I; Holsheimer, J; Savelberg, H H
2001-05-30
Electrical stimulation of a peripheral nerve results in a motor-unit recruitment order opposite to that attained by natural neural control, i.e. from large, fast-fatiguing to progressively smaller, fatigue-resistant motor-units. Yet animal studies involving physiological exercise protocols of low intensity and long duration require minimal fatigue. The present study sought to apply a nerve stimulation method to selectively recruit smaller motor-units in rat skeletal muscle. Two pulse generators were used, independently supplying short supramaximal cathodal stimulating pulses (0.5 ms) and long subthreshold cathodal inactivating pulses (1.5 s) to the sciatic nerve. Propagation of action potentials was selectively blocked in nerve fibres of different diameter by adjusting the strength of the inactivating current. A tensile-testing machine was used to gauge isometric muscle force of the plantaris and both heads of the gastrocnemius muscle. The order of motor-unit recruitment was estimated from twitch characteristics, i.e. peak force and relaxation time. The results showed prolonged relaxation at lower twitch peak forces as the intensity of the inactivating current increased, indicating a reduced contribution of large motor-units to force production. It is shown that the nerve stimulation method described is effective in mimicking physiological muscle control.
Observational tests for Λ(t)CDM cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pigozzo, C.; Carneiro, S.; Dantas, M.A.
2011-08-01
We investigate the observational viability of a class of cosmological models in which the vacuum energy density decays linearly with the Hubble parameter, resulting in a production of cold dark matter particles at late times. Similarly to the flat ΛCDM case, there is only one free parameter to be adjusted by the data in this class of Λ(t)CDM scenarios, namely, the matter density parameter. To perform our analysis we use three of the most recent SNe Ia compilation sets (Union2, SDSS and Constitution) along with the current measurements of distance to the BAO peaks at z = 0.2 and z = 0.35 and the position of the first acoustic peak of the CMB power spectrum. We show that in terms of χ² statistics both models provide good fits to the data and similar results. A quantitative analysis discussing the differences in parameter estimation due to SNe light-curve fitting methods (SALT2 and MLCS2k2) is studied using the current SDSS and Constitution SNe Ia compilations. A matter power spectrum analysis using the 2dFGRS is also performed, providing a very good concordance with the constraints from the SDSS and Constitution MLCS2k2 data.
A novel technique for fetal heart rate estimation from Doppler ultrasound signal
2011-01-01
Background The currently used fetal monitoring instrumentation based on the Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. This is particularly noticeable as a significant decrease of a clinically important feature - the variability of the FHR signal. The aim of our work was to develop a novel, efficient technique for processing the ultrasound signal, which could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods We have proposed a new technique which provides the true beat-to-beat values of the FHR signal through multiple measurement of a given cardiac cycle in the ultrasound signal. The method consists of three steps: the dynamic adjustment of the autocorrelation window, adaptive autocorrelation peak detection, and determination of beat-to-beat intervals. The estimated fetal heart rate values and the calculated indices describing FHR variability were compared to reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables the calculation of reliable parameters describing the variability of FHR. Compared with the other method for FHR estimation, our approach rejected a much lower number of measured cardiac cycles as invalid. Conclusions The proposed method for fetal heart rate determination on a beat-to-beat basis offers high accuracy of the heart interval measurement, enabling reliable quantitative assessment of FHR variability while at the same time reducing the number of invalid cardiac cycle measurements. PMID:21999764
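The core of the approach is autocorrelation-based detection of the cardiac-cycle duration in the Doppler envelope. The following minimal Python sketch uses a fixed analysis window and a simple peak search in a physiologically plausible lag range; the paper's dynamic window adjustment and multiple-measurement refinement are not reproduced, and all signal parameters are illustrative.

```python
import numpy as np

def beat_interval_autocorr(envelope, fs, min_bpm=60, max_bpm=240):
    """Estimate one cardiac-cycle duration (s) from a Doppler envelope
    segment by locating the largest autocorrelation peak in a plausible
    lag range.  Simplified: fixed window, no adaptive adjustment."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo = int(fs * 60.0 / max_bpm)                        # shortest allowed lag
    hi = int(fs * 60.0 / min_bpm)                        # longest allowed lag
    lag = lo + np.argmax(ac[lo:hi])
    return lag / fs

# Hypothetical 2 s envelope sampled at 1 kHz with a 140 bpm periodicity.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
env = 1 + 0.5 * np.cos(2 * np.pi * (140 / 60.0) * t)
print(60.0 / beat_interval_autocorr(env, fs))  # ~140 bpm
```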
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2009-09-15
Measurement of the strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first order displacement derivative, whereas curvature and twist are determined by second order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as the argument is then generated, and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal, and the corresponding peak detection gives the twist estimate.
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)
Kohn, Michael S.; Stevens, Michael R.; Mommandi, Amanullah; Khan, Aziz R.
2017-12-14
The U.S. Geological Survey (USGS), in cooperation with the Colorado Department of Transportation, determined the peak discharge, annual exceedance probability (flood frequency), and peak stage of two floods that took place on Big Cottonwood Creek at U.S. Highway 50 near Coaldale, Colorado (hereafter referred to as “Big Cottonwood Creek site”), on August 23, 2016, and on Fountain Creek below U.S. Highway 24 in Colorado Springs, Colorado (hereafter referred to as “Fountain Creek site”), on August 29, 2016. A one-dimensional hydraulic model was used to estimate the peak discharge. To define the flood frequency of each flood, peak-streamflow regional-regression equations or statistical analyses of USGS streamgage records were used to estimate annual exceedance probability of the peak discharge. A survey of the high-water mark profile was used to determine the peak stage, and the limitations and accuracy of each component also are presented in this report. Collection and computation of flood data, such as peak discharge, annual exceedance probability, and peak stage at structures critical to Colorado’s infrastructure, are an important addition to the flood data collected annually by the USGS. The peak discharge of the August 23, 2016, flood at the Big Cottonwood Creek site was 917 cubic feet per second (ft³/s) with a measurement quality of poor (uncertainty plus or minus 25 percent or greater). The peak discharge of the August 29, 2016, flood at the Fountain Creek site was 5,970 ft³/s with a measurement quality of poor (uncertainty plus or minus 25 percent or greater). The August 23, 2016, flood at the Big Cottonwood Creek site had an annual exceedance probability of less than 0.01 (return period greater than the 100-year flood) and had an annual exceedance probability of greater than 0.005 (return period less than the 200-year flood). The August 23, 2016, flood event was caused by a precipitation event having an annual exceedance probability of 1.0 (return period of 1 year, or the 1-year storm), which is a statistically common (high probability) storm. The Big Cottonwood Creek site is downstream from the Hayden Pass Fire burn area, which dramatically altered the hydrology of the watershed and caused this statistically rare (low probability) flood from a statistically common (high probability) storm. The peak flood stage at the cross section closest to the U.S. Highway 50 culvert was 6,438.32 feet (ft) above the North American Vertical Datum of 1988 (NAVD 88). The August 29, 2016, flood at the Fountain Creek site had an estimated annual exceedance probability of 0.5505 (return period equal to the 1.8-year flood). The August 29, 2016, flood event was caused by a precipitation event having an annual exceedance probability of 1.0 (return period of 1 year, or the 1-year storm). The peak stage during this flood at the cross section closest to the U.S. Highway 24 bridge was 5,832.89 ft (NAVD 88). Slope-area indirect discharge measurements were carried out at the Big Cottonwood Creek and Fountain Creek sites to estimate peak discharge of the August 23, 2016, flood and August 29, 2016, flood, respectively. The USGS computer program Slope-Area Computation Graphical User Interface was used to compute the peak discharge by adding the surveyed cross sections with Manning roughness coefficient assignments to the high-water marks. The Manning roughness coefficients for each cross section were estimated in the field using the Cowan method.
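The slope-area computation ultimately rests on Manning's equation applied to surveyed cross sections. The sketch below shows only the core single-section relation in US customary units with an illustrative cross section and roughness value; the USGS program combines several cross sections with conveyance weighting and energy-balance corrections.

```python
def manning_discharge(area_ft2, wetted_perimeter_ft, slope, n):
    """Single-section Manning estimate of discharge (ft^3/s), US units.
    The USGS slope-area method uses several cross sections and
    energy-balance corrections; this is only the core relation."""
    r = area_ft2 / wetted_perimeter_ft              # hydraulic radius, ft
    return (1.49 / n) * area_ft2 * r ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical cross section: 300 ft^2 area, 110 ft wetted perimeter,
# water-surface slope 0.004, Cowan-style roughness n = 0.045.
print(round(manning_discharge(300.0, 110.0, 0.004, 0.045)))
```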
Imaizumi, Yoshitaka; Suzuki, Noriyuki; Shiraishi, Fujio; Nakajima, Daisuke; Serizawa, Shigeko; Sakurai, Takeo; Shiraishi, Hiroaki
2018-01-24
In pesticide risk management in Japan, predicted environmental concentrations are estimated by a tiered approach, and the Ministry of the Environment also performs field surveys to confirm the maximum concentrations of pesticides with risk concerns. To contribute to more efficient and effective field surveys, we developed the Pesticide Chemicals High Resolution Estimation Method (PeCHREM) for estimating spatially and temporally variable emissions of various paddy herbicides from paddy fields to the environment. We used PeCHREM and the G-CIEMS multimedia environmental fate model to predict day-to-day environmental concentration changes of 25 herbicides throughout Japan. To validate the PeCHREM/G-CIEMS model, we also conducted a field survey, in which river waters were sampled at least once every two weeks at seven sites in six prefectures from April to July 2009. In 20 of 139 sampling site-herbicide combinations in which herbicides were detected in at least three samples, all observed concentrations differed from the corresponding prediction by less than one order of magnitude. We also compared peak concentrations and the dates on which the concentrations reached peak values (peak dates) between predictions and observations. The peak concentration differences between predictions and observations were less than one order of magnitude in 66% of the 166 sampling site-herbicide combinations in which herbicide was detected in river water. The observed and predicted peak dates differed by less than two weeks in 79% of these 166 combinations. These results confirm that the PeCHREM/G-CIEMS model can improve the efficiency and effectiveness of surveys by predicting the peak concentrations and peak dates of various herbicides.
NASA Astrophysics Data System (ADS)
Aalstad, Kristoffer; Westermann, Sebastian; Vikhamar Schuler, Thomas; Boike, Julia; Bertino, Laurent
2018-01-01
With its high albedo, low thermal conductivity and large water storing capacity, snow strongly modulates the surface energy and water balance, which makes it a critical factor in mid- to high-latitude and mountain environments. However, estimating the snow water equivalent (SWE) is challenging in remote-sensing applications already at medium spatial resolutions of 1 km. We present an ensemble-based data assimilation framework that estimates the peak subgrid SWE distribution (SSD) at the 1 km scale by assimilating fractional snow-covered area (fSCA) satellite retrievals in a simple snow model forced by downscaled reanalysis data. The basic idea is to relate the timing of the snow cover depletion (accessible from satellite products) to the peak SSD. Peak subgrid SWE is assumed to be lognormally distributed, which can be translated to a modeled time series of fSCA through the snow model. Assimilation of satellite-derived fSCA facilitates the estimation of the peak SSD, while taking into account uncertainties in both the model and the assimilated data sets. As an extension to previous studies, our method makes use of the novel (to snow data assimilation) ensemble smoother with multiple data assimilation (ES-MDA) scheme combined with analytical Gaussian anamorphosis to assimilate time series of Moderate Resolution Imaging Spectroradiometer (MODIS) and Sentinel-2 fSCA retrievals. The scheme is applied to Arctic sites near Ny-Ålesund (79° N, Svalbard, Norway) where field measurements of fSCA and SWE distributions are available. The method is able to successfully recover accurate estimates of peak SSD on most of the occasions considered. Through the ES-MDA assimilation, the root-mean-square error (RMSE) for the fSCA, peak mean SWE and peak subgrid coefficient of variation is improved by around 75, 60 and 20 %, respectively, when compared to the prior, yielding RMSEs of 0.01, 0.09 m water equivalent (w.e.) and 0.13, respectively. The ES-MDA either outperforms or at least nearly matches the performance of other ensemble-based batch smoother schemes with regards to various evaluation metrics. Given the modularity of the method, it could prove valuable for a range of satellite-era hydrometeorological reanalyses.
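The ES-MDA scheme referenced above can be summarized generically: the ensemble-smoother update is repeated several times with the observation-error covariance inflated so that the reciprocals of the inflation coefficients sum to one. A minimal Python sketch on a toy linear problem follows; the snow model, the Gaussian anamorphosis, and all parameter values are placeholders, not those of the study.

```python
import numpy as np

def es_mda(prior_ens, forward, obs, obs_err_var, n_assim=4, seed=0):
    """Generic ES-MDA: repeat the ensemble-smoother update n_assim times
    with the observation-error variance inflated by alpha_i such that
    sum(1/alpha_i) = 1 (here alpha_i = n_assim for every iteration)."""
    rng = np.random.default_rng(seed)
    ens = prior_ens.copy()                           # shape (n_state, n_ens)
    alpha = n_assim
    for _ in range(n_assim):
        pred = forward(ens)                          # shape (n_obs, n_ens)
        d = obs[:, None] + np.sqrt(alpha * obs_err_var) * rng.standard_normal(pred.shape)
        a = ens - ens.mean(axis=1, keepdims=True)
        b = pred - pred.mean(axis=1, keepdims=True)
        n_ens = ens.shape[1]
        c_xy = a @ b.T / (n_ens - 1)                 # state-obs covariance
        c_yy = b @ b.T / (n_ens - 1) + alpha * obs_err_var * np.eye(len(obs))
        ens = ens + c_xy @ np.linalg.solve(c_yy, d - pred)
    return ens

# Toy example: recover a scalar "peak SWE" from 3 noisy linear observations.
forward = lambda e: np.vstack([e[0], 2 * e[0], 0.5 * e[0]])
obs = np.array([0.62, 1.18, 0.29])
prior = np.array([[0.3]]) + 0.2 * np.random.default_rng(1).standard_normal((1, 200))
post = es_mda(prior, forward, obs, obs_err_var=0.02 ** 2)
print(post.mean())  # close to 0.6
```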
Banana orchard inventory using IRS LISS sensors
NASA Astrophysics Data System (ADS)
Nishant, Nilay; Upadhayay, Gargi; Vyas, S. P.; Manjunath, K. R.
2016-04-01
Banana is one of the major crops of India, with increasing export potential, and it is important to estimate the production and acreage of the crop. Thus, the present study was carried out to develop a suitable methodology for estimating banana acreage. The area estimation methodology was devised around the fact that, unlike other crops, the time of plantation of banana differs among farmers as per their local practices or conditions. Thus, in order to capture the peak signatures, a biowindow of 6 months was examined, its NDVI pattern studied, and the optimum two months identified in which banana could be distinguished from other competing crops. The final area of banana for the particular growing cycle was computed by integrating the areas of these two months using LISS III data with a spatial resolution of 23 m. Estimated banana acreage in the three districts was 11,857 ha, 15,202 ha, and 11,373 ha for Bharuch, Anand, and Vadodara, respectively, with corresponding accuracies of 91.8%, 90%, and 88.16%. The study further compared the use of LISS IV data of 5.8 m spatial resolution for estimation of banana using object-based as well as per-pixel classification, and the results of both approaches were compared with statistical reports. In the current paper we depict the various methodologies to accurately estimate the banana acreage.
Guay, J.R.
1996-01-01
Urban areas in Perris Valley, California, have more than tripled during the last 20 years. To quantify the effects of increased urbanization on storm runoff volumes and peak discharges, rainfall-runoff models of the basin were developed to simulate runoff for 1970-75 and 1990-93 conditions. Hourly rainfall data for 1949-93 were used with the rainfall-runoff models to simulate a long-term record of storm runoff. The hydrologic effects of increased urbanization from 1970-75 to 1990-93 were analyzed by comparing the simulated annual peak discharges and volumes, and storm runoff peaks, frequency of annual peak discharges and runoff volumes, and duration of storm peak discharges for each study period. A Log-Pearson Type-III frequency analysis was calculated using the simulated annual peaks to estimate the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals. The estimated 2-year discharge at the outlet of the basin was 646 cubic feet per second for the 1970-75 conditions and 1,328 cubic feet per second for the 1990-93 conditions. The 100-year discharge at the outlet of the basin was about 14,000 cubic feet per second for the 1970-75 and 1990-93 conditions. The station duration analysis used 925 model-simulated storm peaks from each basin to estimate the percent chance a peak discharge is exceeded. At the outlet of the basin, the chances of exceeding 100 cubic feet per second were about 33 percent under 1970-75 conditions and about 59 percent under 1990-93 conditions. The chance of exceeding 2,500 cubic feet per second at the outlet of the basin was less than 1 percent higher under the 1990-93 conditions than under the 1970-75 conditions. The increase in urbanization from the early 1970's to the early 1990's more than doubled the peak discharges with a 2-year return period. However, peak discharges with return periods greater than 50 years were not significantly affected by the change in urbanization.
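The recurrence-interval discharges above come from a Log-Pearson Type III fit to the simulated annual peaks. A simplified Python sketch of the frequency-factor form of that fit is given below, using the Wilson-Hilferty approximation for the frequency factor and synthetic peak data; it omits the regional-skew weighting and outlier tests of the full Bulletin 17 procedure.

```python
import numpy as np
from scipy import stats

def lp3_quantile(log10_peaks, return_period_yr):
    """Log-Pearson Type III flood quantile via the frequency-factor form
    Q_T = 10**(m + K*s), with K from the Wilson-Hilferty approximation.
    Simplified sketch: no regional skew weighting or outlier screening."""
    m, s = np.mean(log10_peaks), np.std(log10_peaks, ddof=1)
    g = stats.skew(log10_peaks, bias=False)
    z = stats.norm.ppf(1.0 - 1.0 / return_period_yr)
    k = (2.0 / g) * ((1.0 + g * z / 6.0 - g ** 2 / 36.0) ** 3 - 1.0) if abs(g) > 1e-6 else z
    return 10.0 ** (m + k * s)

# Hypothetical simulated annual peaks (ft^3/s) for one land-use condition.
rng = np.random.default_rng(2)
peaks = 10 ** rng.normal(2.8, 0.3, size=44)
for t in (2, 10, 100):
    print(t, round(lp3_quantile(np.log10(peaks), t)))
```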
Pu and 137Cs in the Yangtze River estuary sediments: distribution and source identification.
Liu, Zhiyong; Zheng, Jian; Pan, Shaoming; Dong, Wei; Yamada, Masatoshi; Aono, Tatsuo; Guo, Qiuju
2011-03-01
Pu isotopes and (137)Cs were analyzed using sector field ICP-MS and γ spectrometry, respectively, in surface sediment and core sediment samples from the Yangtze River estuary. (239+240)Pu activity and (240)Pu/(239)Pu atom ratios (>0.18) show a generally increasing trend from land to sea and from north to south in the estuary. This spatial distribution pattern indicates that the Pacific Proving Grounds (PPG) source Pu transported by ocean currents was intensively scavenged into the suspended sediment under favorable conditions, and mixed with riverine sediment as the water circulated in the estuary. This process is the main control for the distribution of Pu in the estuary. Moreover, Pu is also an important indicator for monitoring the changes of environmental radioactivity in the estuary as the river basin is currently the site of extensive human activities and the sea level is rising because of global climate changes. For core sediment samples the maximum peak of (239+240)Pu activity was observed at a depth of 172 cm. The sedimentation rate was estimated on the basis of the Pu maximum deposition peak in 1963-1964 to be 4.1 cm/a. The contributions of the PPG close-in fallout Pu (44%) and the riverine Pu (45%) in Yangtze River estuary sediments are equally important for the total Pu deposition in the estuary, which challenges the current hypothesis that the riverine Pu input was the major source of the Pu budget in this area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Danny S; Sherwin, John R; Raustad, Richard
2014-04-10
The Florida Solar Energy Center (FSEC) conducted a research project to improve the best residential air conditioner condenser technology currently available on the market by retrofitting a commercially-available unit with both a high efficiency fan system and an evaporative pre-cooler. The objective was to integrate these two concepts to achieve an ultra-efficient residential air conditioner design. The project produced a working prototype that was 30% more efficient compared to the best currently-available technologies; the peak energy efficiency ratio (EER) was improved by 41%. Efficiency at the Air-Conditioning and Refrigeration Institute (ARI) standard B-condition, which is used to estimate the seasonal energy efficiency ratio (SEER), was raised from a nominal 21 Btu/Wh to 32 Btu/Wh.
Big data prediction of durations for online collective actions based on peak's timing
NASA Astrophysics Data System (ADS)
Nie, Shizhao; Wang, Zheng; Pujia, Wangmo; Nie, Yuan; Lu, Peng
2018-02-01
The Peak Model states that each collective action has a life cycle, which contains four periods of "prepare", "outbreak", "peak", and "vanish"; and the peak determines the maximum energy and the whole process. The peak model's re-simulation indicates that there seems to be a stable ratio between the peak's timing (TP) and the total span (T) or duration of collective actions, which needs further validation through empirical data on collective actions. Therefore, daily big data of online collective actions is applied to validate the model; the key is to check the ratio between the peak's timing and the total span. The big data is obtained from online data recording and mining of websites. It is verified by the empirical big data that there is a stable ratio between TP and T; furthermore, it appears to be normally distributed. This rule holds for both the general cases and the sub-types of collective actions. Given the distribution of the ratio, an estimated probability density function can be obtained, and therefore the span can be predicted from the peak's timing, as sketched below. Under the big data scenario, the instant span (how long the collective action lasts or when it ends) can be monitored and predicted in real time. With denser data (Big Data), the estimation of the ratio's distribution gets more robust, and the prediction of collective actions' spans or durations will be more accurate.
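Since the ratio r = TP/T is reported to be roughly normally distributed, a span prediction amounts to dividing an observed peak time by the historical ratio distribution. The Python sketch below illustrates this with a hypothetical ratio sample; the mean and spread are not taken from the paper.

```python
import numpy as np

def predict_span(t_peak, ratios):
    """Predict the total duration T of an online collective action from
    its observed peak timing TP, using the empirical distribution of the
    ratio r = TP / T from historical events.  Returns a point estimate
    and a crude 90% interval.  The ratio sample here is hypothetical."""
    r = np.asarray(ratios)
    t_hat = t_peak / r.mean()
    t_low = t_peak / np.percentile(r, 95)    # large ratio -> short span
    t_high = t_peak / np.percentile(r, 5)    # small ratio -> long span
    return t_hat, t_low, t_high

# Hypothetical ratios from previously observed events (dimensionless).
hist_ratios = np.random.default_rng(3).normal(0.4, 0.08, size=500)
print(predict_span(6.0, hist_ratios))        # peak observed on day 6
```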
NASA Astrophysics Data System (ADS)
Cetiner, S. O.; Stoltz, P.; Messmer, P.; Cambier, J.-L.
2008-01-01
The prebreakdown and breakdown phases of a pseudospark discharge are investigated using the two-dimensional kinetic plasma simulation code OOPIC™ PRO. Trends in the peak electron current at the anode are presented as function of the hollow cathode dimensions and mean seed injection velocities at the cavity back wall. The plasma generation process by ionizing collisions is examined, showing the effect on supplying the electrons that determine the density of the beam. The mean seed velocities used here are varied between the velocity corresponding to the energy of peak ionization cross section, 15 times this value and no mean velocity (i.e., electrons injected with a temperature of 2.5eV). The reliance of the discharge characteristics on the penetrating electric field is shown to decrease as the mean seed injection velocity increases because of its ability to generate a surplus plasma independent of the virtual anode. As a result, the peak current increases with the hollow cathode dimensions for the largest average injection velocity, while for the smallest value it increases with the area of penetration of the electric field in the hollow cathode interior. Additionally, for a given geometry an increase in the peak current with the surplus plasma generated is observed. For the largest seed injection velocity used a dependence of the magnitude of the peak current on the ratio of the hole thickness and hollow cathode depth to the hole height is demonstrated. This means similar trends of the peak current are generated when the geometry is resized. Although the present study uses argon only, the variation in the discharge dependencies with the seed injection energy relative to the ionization threshold is expected to apply independently of the gas type. Secondary electrons due to electron and ion impact are shown to be important only for the largest impact areas and discharge development times of the study.
Floods of March 1978, in the Maumee River basin, northeastern Indiana
Hoggatt, Richard Earl
1981-01-01
Floods in the Maumee River basin in northeastern Indiana in March 1978 resulted in heavy damage in Fort Wayne and surrounding areas. Flood damage in Fort Wayne was estimated by the Mayor to be 11 million dollars. Approximately 15 percent of the city was inundated, and 2,400 of its 190,000 residents were forced to leave their homes. The estimate of damage in Adams and Allen Counties by Civil Defense officials was 44 million dollars. The Maumee River at New Haven exceeded the peak stage of record, 21.4 feet, by 2.2 feet. The peak discharge at this stream-gaging station, 22,400 cubic feet per second, was about equal to that of a 75-year flood. Recurrence intervals of peak flows on streams tributary to the Maumee River ranged from 5 to 50 years. Records of peak and daily discharges and some precipitation data are given in this report.
Flood of June 11, 2010, in the Upper Little Missouri River watershed, Arkansas
Holmes, Robert R.; Wagner, Daniel M.
2011-01-01
Catastrophic flash flooding occurred in the early morning hours of June 11, 2010, in the upper Little Missouri River and tributary streams in southwest Arkansas. The flooding, which resulted in 20 fatalities and substantial property damage, was caused by as much as 4.7 inches of rain falling in the upper Little Missouri River watershed in 3 hours. The 4.7 inches of rain in 3 hours corresponds to an estimated annual exceedance probability of approximately 2 percent for a 3-hour duration storm. The maximum total estimated rainfall in the upper Little Missouri River watershed was 5.3 inches in 6 hours. Peak streamflows and other hydraulic properties were determined at five ungaged locations and one gaged location in the upper Little Missouri River watershed. The peak streamflow for the Little Missouri River at Albert Pike, Arkansas, was 40,100 cubic feet per second, estimated to have occurred between 4:00 AM and 4:30 AM the morning of June 11, 2010. The peak streamflow resulted in average water depths in the nearby floodplain (Area C of the Albert Pike Campground) of 7 feet flowing at velocities potentially as great as 11 feet per second. Peak streamflow 9.1 miles downstream on the Little Missouri River at the U.S. Geological Survey streamgage near Langley, Arkansas, was 70,800 cubic feet per second, which corresponds to an estimated annual exceedance probability of less than 1 percent.
NASA Astrophysics Data System (ADS)
Solanki, Rekha Garg; Rajaram, Poolla; Bajpai, P. K.
2018-05-01
This work covers the growth, characterization, and estimation of lattice strain and crystallite size in CdS nanoparticles by X-ray peak profile analysis. The CdS nanoparticles were synthesized by a non-aqueous solvothermal method and were characterized by powder X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman and UV-visible spectroscopy. XRD confirms that the CdS nanoparticles have the hexagonal structure. The Williamson-Hall (W-H) method was used for the X-ray peak profile analysis. The strain-size plot (SSP) was used to study the individual contributions of crystallite size and lattice strain from the X-ray peaks. The physical parameters such as strain, stress, and energy density were calculated using various models, namely the isotropic strain model, anisotropic strain model, and uniform deformation energy density model. The particle size was estimated from the TEM images to be in the range of 20-40 nm. The Raman spectrum shows the characteristic optical 1LO and 2LO vibrational modes of CdS. UV-visible absorption studies show that the band gap of the CdS nanoparticles is 2.48 eV. The results show that the crystallite sizes estimated from Scherrer's formula, the W-H plots, and the SSP, and the particle size determined from TEM images, are approximately similar.
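The Williamson-Hall step separates size and strain broadening by fitting β·cosθ = Kλ/D + 4ε·sinθ to the measured peak widths. A minimal Python sketch follows; the peak positions, FWHM values, the Cu Kα wavelength, and the assumption that instrumental broadening has already been removed are all illustrative.

```python
import numpy as np

def williamson_hall(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Williamson-Hall analysis: fit beta*cos(theta) = K*lambda/D + 4*eps*sin(theta)
    to XRD peak widths (instrumental broadening assumed already removed).
    Returns crystallite size D (nm) and microstrain eps."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    beta = np.radians(np.asarray(fwhm_deg))            # FWHM in radians
    x = 4.0 * np.sin(theta)
    y = beta * np.cos(theta)
    slope, intercept = np.polyfit(x, y, 1)             # slope = eps, intercept = K*lambda/D
    return k * wavelength_nm / intercept, slope

# Hypothetical hexagonal-CdS peak positions (2theta, deg) and FWHMs (deg).
tt = [24.8, 26.5, 28.2, 43.7, 47.8, 51.8]
fw = [0.42, 0.41, 0.43, 0.50, 0.52, 0.55]
size_nm, strain = williamson_hall(tt, fw)
print(round(size_nm, 1), f"{strain:.2e}")   # size of a few tens of nm
```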
Agreement between VO[subscript 2peak] Predicted from PACER and One-Mile Run Time-Equated Laps
ERIC Educational Resources Information Center
Saint-Maurice, Pedro F.; Anderson, Katelin; Bai, Yang; Welk, Gregory J.
2016-01-01
Purpose: This study examined the agreement between estimated peak oxygen consumption (VO[subscript 2peak]) obtained from the Progressive Aerobic Cardiovascular Endurance Run (PACER) fitness test and equated PACER laps derived from One-Mile Run time (MR). Methods: A sample of 680 participants (324 boys and 356 girls) in Grades 7 through 12…
Lombard, Pamela J.; Bent, Gardner C.
2015-01-01
The availability of the flood-inundation maps, combined with information regarding current (near real-time) stage from USGS streamgage Hoosic River near Williamstown, and forecasted flood stages from the National Weather Service Advanced Hydrologic Prediction Service will provide emergency management personnel and residents with information that is critical for flood response activities such as evacuations and road closures, and post-flood recovery efforts. The flood-inundation maps are nonregulatory, but provide Federal, State, and local agencies and the public with estimates of the potential extent of flooding during selected peak-flow events.
Lombard, Pamela J.; Bent, Gardner C.
2015-09-02
The availability of the flood-inundation maps at http://water.usgs.gov/osw/flood_inundation/, combined with information regarding current (near real-time) stage from the two U.S. Geological Survey streamgages in the study reach, can provide emergency management personnel and residents with information to aid in flood response activities, such as evacuations and road closures, and with postflood recovery efforts. The flood-inundation maps are nonregulatory, but provide Federal, State, and local agencies and the public with estimates of the potential extent of flooding during selected peak-flow events.
Timonen, Hilkka; Cubison, Mike; Aurela, Minna; ...
2016-07-25
The applicability, methods and limitations of constrained peak fitting on mass spectra of low mass resolving power ( m/Δ m 50~500) recorded with a time-of-flight aerosol chemical speciation monitor (ToF-ACSM) are explored. Calibration measurements as well as ambient data are used to exemplify the methods that should be applied to maximise data quality and assess confidence in peak-fitting results. Sensitivity analyses and basic peak fit metrics such as normalised ion separation are employed to demonstrate which peak-fitting analyses commonly performed in high-resolution aerosol mass spectrometry are appropriate to perform on spectra of this resolving power. Information on aerosol sulfate, nitrate,more » sodium chloride, methanesulfonic acid as well as semi-volatile metal species retrieved from these methods is evaluated. The constants in a commonly used formula for the estimation of the mass concentration of hydrocarbon-like organic aerosol may be refined based on peak-fitting results. Lastly, application of a recently published parameterisation for the estimation of carbon oxidation state to ToF-ACSM spectra is validated for a range of organic standards and its use demonstrated for ambient urban data.« less
Global Oil: Relax, the End Is Not Near
NASA Astrophysics Data System (ADS)
Fisher, W. L.
2004-12-01
Global oil production will peak within the next 25 to 30 years, but peaking will be a function of demand, not supply, as the methane economy comes into full play. Analysts who predict near-term peaking of global oil production generally use some variant of Hubbert's symmetrical life cycle method. The amount of ultimately recoverable oil is assumed to be known, and peaking will occur when half that amount is exhausted. In reality, ultimate recovery volumes are not known but estimated, and estimates vary by a factor of two; production profiles are not necessarily symmetrical. Further assumptions are that the resource base is inelastic, not significantly expandable through technology and new concepts. The historical experience is quite to the contrary. Projections of near-term peaking ignore or discount field reserve growth, the most dynamic element in reserve additions of the past 25 years and one with future potential equal to that of new field discovery. Future global demand for oil will likely amount to between 1.5 and 2.0 trillion barrels, well within the more realistic estimates of global recovery volumes. The real challenge is not oil, but natural gas, where future global demand will likely exceed 25,000 trillion cubic feet. But it too will be met in sufficient volumes to bring us into the hydrogen economy some 50 to 60 years from now.
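For context, the symmetric Hubbert life-cycle method referred to above treats cumulative production as a logistic curve, so annual production peaks when half of the ultimately recoverable resource has been produced. A Python sketch with purely illustrative parameter values:

```python
import numpy as np

def hubbert_production(years, urr, peak_year, steepness):
    """Symmetric Hubbert curve: cumulative production is logistic, so
    annual production P(t) = URR*k*exp(-k*(t-tm)) / (1 + exp(-k*(t-tm)))**2,
    peaking at t = tm when half of URR has been produced."""
    z = np.exp(-steepness * (years - peak_year))
    return urr * steepness * z / (1.0 + z) ** 2

# Purely illustrative parameters: 2.0 trillion bbl URR, 5%/yr steepness.
yrs = np.arange(1900, 2101)
p = hubbert_production(yrs, urr=2.0e12, peak_year=2030, steepness=0.05)
print(yrs[np.argmax(p)], f"{p.max() / 1e9:.1f} Gbbl/yr at peak")
```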
Measured close lightning leader-step electric-field-derivative waveforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Doug M.; Hill, Dustin; Biagi, Christopher J.
2010-12-01
We characterize the measured electric field-derivative (dE/dt) waveforms of lightning stepped-leader steps from three negative lightning flashes at distances of tens to hundreds of meters. Electromagnetic signatures of leader steps at such close distances have rarely been documented in previous literature. Individual leader-step three-dimensional locations are determined by a dE/dt TOA system. The leader-step field derivative is typically a bipolar pulse with a sharp initial half-cycle of the same polarity as that of the return stroke, followed by an opposite polarity overshoot that decays relatively slowly to background level. This overshoot increases in amplitude relative to the initial peak and becomes dominant as range decreases. The initial peak is often preceded by a 'slow front,' similar to the slow front that precedes the fast transition to peak in first return stroke dE/dt and E waveforms. The overall step-field waveform duration is typically less than 1 µs. The mean initial peak of dE/dt, range-normalized to 100 km, is 7.4 V m⁻¹ µs⁻¹ (standard deviation (S.D.), 3.7 V m⁻¹ µs⁻¹, N = 103), the mean half-peak width is 33.5 ns (S.D., 11.9 ns, N = 69), and the mean 10-to-90% risetime is 43.6 ns (S.D., 24.2 ns, N = 69). From modeling, we determine the properties of the leader step currents which produced two typical measured field derivatives, and we use one of these currents to calculate predicted leader step E and dE/dt as a function of source range and height, the results being in good agreement with our observations. The two modeled current waveforms had maximum rates of current rise-to-peak near 100 kA µs⁻¹, peak currents in the 5-7 kA range, current half-peak widths of about 300 ns, and charge transfers of ≈3 mC. As part of the modeling, those currents were propagated upward at 1.5 × 10⁸ m s⁻¹, with their amplitudes decaying exponentially with a decay height constant of 25 m.
NASA Astrophysics Data System (ADS)
Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.
2016-01-01
Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.
Ghost Images in Helioseismic Holography? Toy Models in a Uniform Medium
NASA Astrophysics Data System (ADS)
Yang, Dan
2018-02-01
Helioseismic holography is a powerful technique used to probe the solar interior based on estimations of the 3D wavefield. The Porter-Bojarski holography, which is a well-established method used in acoustics to recover sources and scatterers in 3D, is also an estimation of the wavefield, and hence it has the potential of being applied to helioseismology. Here we present a proof-of-concept study, where we compare helioseismic holography and Porter-Bojarski holography under the assumption that the waves propagate in a homogeneous medium. We consider the problem of locating a point source of wave excitation inside a sphere. Under these assumptions, we find that the two imaging methods have the same capability of locating the source, with the exception that helioseismic holography suffers from "ghost images" ( i.e. artificial peaks away from the source location). We conclude that Porter-Bojarski holography may improve the method currently used in helioseismology.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate return stroke peak currents. Downward leaders propagate vertically, and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest with the corresponding peak current distribution.
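A stripped-down version of such a Monte Carlo attachment calculation is sketched below in Python. The lognormal peak-current parameters and the electrogeometric striking-distance relation r = 10·I^0.65 are common textbook assumptions, not necessarily those used in the cited tool, and the geometry is reduced to vertical masts with an attractive-radius capture criterion.

```python
import numpy as np

def attractive_radius(strike_dist_m, height_m):
    """Electrogeometric attractive radius of a vertical mast: a downward
    leader attaches if its horizontal offset is within this radius."""
    r, h = strike_dist_m, min(height_m, strike_dist_m)
    return np.sqrt(2.0 * r * h - h * h)

def simulate_strikes(obj_xy_h, n_years, gfd_per_km2_yr, area_km2, seed=0):
    """Monte Carlo count of direct strikes to a set of masts (x, y, height).
    Leaders descend vertically from uniform origins over a square area;
    peak currents are lognormal (median 31 kA, sigma_ln 0.66 assumed);
    striking distance r = 10 * I**0.65 (m, I in kA) -- standard textbook
    assumptions, not necessarily those of the cited tool."""
    rng = np.random.default_rng(seed)
    n_flash = rng.poisson(gfd_per_km2_yr * area_km2 * n_years)
    side = 1000.0 * np.sqrt(area_km2)
    origins = rng.uniform(0.0, side, size=(n_flash, 2))
    i_peak = np.exp(rng.normal(np.log(31.0), 0.66, size=n_flash))
    r_strike = 10.0 * i_peak ** 0.65
    hits = np.zeros(len(obj_xy_h), dtype=int)
    for o, r in zip(origins, r_strike):
        d = [np.hypot(o[0] - x, o[1] - y) for x, y, _ in obj_xy_h]
        captured = [j for j, (x, y, h) in enumerate(obj_xy_h)
                    if d[j] <= attractive_radius(r, h)]
        if captured:                      # attach to the closest captured object
            hits[min(captured, key=lambda j: d[j])] += 1
    return hits, n_flash

# Two hypothetical masts, 80 m and 30 m tall, inside a 1 km^2 area,
# ground flash density 10 flashes/km^2/yr, 10,000 simulated years.
objs = [(400.0, 500.0, 80.0), (600.0, 500.0, 30.0)]
print(simulate_strikes(objs, 10000, 10.0, 1.0))
```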
Tortorelli, R.L.
1996-01-01
The flash flood in southwestern Oklahoma City, Oklahoma, May 8, 1993, was the result of an intense 3-hour rainfall on saturated ground or impervious surfaces. The total precipitation of 5.28 inches was close to the 3-hour, 100-year frequency and produced extensive flooding. The most serious flooding was on Twin, Brock, and Lightning Creeks. Four people died in this flood. Over 1,900 structures were damaged along the 3 creeks. There were about $3 million in damages to Oklahoma City public facilities, the majority of which were in the three basins. A study was conducted to determine the magnitude of the May 8, 1993, flood peak discharge in these three creeks in southwestern Oklahoma City and compare these peaks with published flood estimates. Flood peak-discharge estimates for these creeks were determined at 11 study sites using a step-backwater analysis to match the flood water-surface profiles defined by high-water marks. The unit discharges during peak runoff ranged from 881 cubic feet per second per square mile for Lightning Creek at SW 44th Street to 3,570 cubic feet per second per square mile for Brock Creek at SW 59th Street. The ratios of the 1993 flood peak discharges to the Federal Emergency Management Agency 100-year flood peak discharges ranged from 1.25 to 3.29. The water-surface elevations ranged from 0.2 foot to 5.9 feet above the Federal Emergency Management Agency 500-year flood water-surface elevations. The very large flood peaks in these 3 small urban basins were the result of very intense rainfall in a short period of time, close to 100 percent runoff due to ground surfaces being essentially impervious, and the city streets acting as efficient conveyances to the main channels. The unit discharges compare in magnitude to other extraordinary Oklahoma urban floods.
Estimation of magnitude and frequency of floods for streams in Puerto Rico : new empirical models
Ramos-Gines, Orlando
1999-01-01
Flood-peak discharges and frequencies are presented for 57 gaged sites in Puerto Rico for recurrence intervals ranging from 2 to 500 years. The log-Pearson Type III distribution, the methodology recommended by the United States Interagency Committee on Water Data, was used to determine the magnitude and frequency of floods at the gaged sites having 10 to 43 years of record. A technique is presented for estimating flood-peak discharges at recurrence intervals ranging from 2 to 500 years for unregulated streams in Puerto Rico with contributing drainage areas ranging from 0.83 to 208 square miles. Loglinear multiple regression analyses, using climatic and basin characteristics and peak-discharge data from the 57 gaged sites, were used to construct regression equations to transfer the magnitude and frequency information from gaged to ungaged sites. The equations use contributing drainage area, depth-to-rock, and mean annual rainfall as the basin and climatic characteristics for estimating flood-peak discharges. Examples are given to show a step-by-step procedure in calculating a 100-year flood at a gaged site, an ungaged site, a site near a gaged location, and a site between two gaged sites.
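A log-linear regression of the kind described can be fit by ordinary least squares on log-transformed variables, as in the following Python sketch with entirely hypothetical calibration data (USGS studies typically use generalized least squares rather than the plain OLS shown here).

```python
import numpy as np

def fit_loglinear(q_peak, drainage_mi2, depth_to_rock_ft, rainfall_in):
    """Fit log10(Q_T) = b0 + b1*log10(A) + b2*log10(D) + b3*log10(P)
    by ordinary least squares -- the general form used to transfer
    flood-frequency information from gaged to ungaged basins."""
    X = np.column_stack([np.ones(len(q_peak)),
                         np.log10(drainage_mi2),
                         np.log10(depth_to_rock_ft),
                         np.log10(rainfall_in)])
    y = np.log10(q_peak)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, a, d, p):
    """Apply the fitted equation at an ungaged site."""
    return 10 ** (coef[0] + coef[1] * np.log10(a) + coef[2] * np.log10(d) + coef[3] * np.log10(p))

# Hypothetical calibration data for a handful of gaged basins.
rng = np.random.default_rng(4)
A = rng.uniform(1, 200, 20)      # drainage area, mi^2
D = rng.uniform(1, 20, 20)       # depth to rock, ft
P = rng.uniform(60, 110, 20)     # mean annual rainfall, in
Q = 80 * A ** 0.8 * D ** -0.2 * P ** 0.5 * 10 ** rng.normal(0, 0.05, 20)
b = fit_loglinear(Q, A, D, P)
print(np.round(b, 2), round(predict(b, 50.0, 5.0, 80.0)))
```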
NASA Astrophysics Data System (ADS)
Admire, A. R.; Dengler, L.; Crawford, G. B.; uslu, B. U.; Montoya, J.
2012-12-01
A pilot project was initiated in 2009 in Humboldt Bay, about 370 kilometers (km) north of San Francisco, California, to measure the currents produced by tsunamis. Northern California is susceptible to both near- and far-field tsunamis and has a historic record of damaging events. Crescent City Harbor, located approximately 100 km north of Humboldt Bay, suffered US $20 million in damages from strong currents produced by the 2006 Kuril Islands tsunami and an additional US $20 million from the 2011 Japan tsunami. In order to better evaluate these currents in northern California, we deployed a Nortek Aquadopp 600 kHz 2D Acoustic Doppler Current Profiler (ADCP) with a one-minute sampling interval in Humboldt Bay, near the existing National Oceanic and Atmospheric Administration (NOAA) National Ocean Service (NOS) tide gauge station. The instrument recorded the tsunamis produced by the Mw 8.8 Chile earthquake on February 27, 2010 and the Mw 9.0 Japan earthquake on March 11, 2011. Currents from the 2010 tsunami persisted in Humboldt Bay for at least 30 hours with peak amplitudes of about 0.3 meters per second (m/s). The 2011 tsunami signal lasted for over 86 hours with a peak amplitude of 0.95 m/s. Strongest currents corresponded to the maximum change in water level as recorded on the NOAA NOS tide gauge, and occurred 90 minutes after the initial wave arrival. No damage was observed in Humboldt Bay for either event. In Crescent City, currents for the first three and a half hours of the 2011 Japan tsunami were estimated using security camera video footage from the Harbor Master building across from the entrance to the small boat basin, approximately 70 meters away from the NOAA NOS tide gauge station. The largest amplitude tide gauge water-level oscillations and most of the damage occurred within this time window. The currents reached a velocity of approximately 4.5 m/s and six cycles exceeded 3 m/s during this period. Measured current velocities both in Humboldt Bay and in Crescent City were compared to calculated velocities from the Method of Splitting Tsunamis (MOST) numerical model. For Humboldt Bay, the 2010 model tsunami frequencies matched the actual values for the first two hours after the initial arrival; however, the amplitudes were underestimated by approximately 65%. MOST replicated the first four hours of the 2011 tsunami signal in Humboldt Bay quite well although the peak flood currents were underestimated by about 50%. MOST predicted attenuation of the signal after four hours but the actual signal persisted at a nearly constant level for more than 48 hours. In Crescent City, the model prediction of the 2011 frequency agreed quite well with the observed signal for the first two and a half hours after the initial arrival with a 50% underestimation of the peak amplitude. The results from this project demonstrate that ADCPs can effectively record tsunami currents for small to moderate events and can be used to calibrate and validate models (i.e. MOST) in order to better predict hazardous tsunami conditions and improve planned responses to protect lives and property, especially within harbors. An ADCP will be installed in Crescent City Harbor and four additional ADCPs are being deployed in Humboldt Bay during the fall of 2012.
Body Size of Male Youth Soccer Players: 1978-2015.
Malina, Robert M; Figueiredo, António J; Coelho-E-Silva, Manuel J
2017-10-01
Studies of the body size and proportions of athletes have a long history. Comparisons of athletes within specific sports across time, though not extensive, indicate both positive and negative trends. To evaluate secular variation in heights and weights of male youth soccer players reported in studies between 1978 and 2015. Reported mean ages, heights, and weights of male soccer players 9-18 years of age were extracted from the literature and grouped into two intervals: 1978-99 and 2000-15. A third-order polynomial was fitted to the mean heights and weights across the age range for each interval, while the Preece-Baines model 1 was fitted to the grand means of mean heights and mean weights within each chronological year to estimate ages at peak height velocity and peak weight velocity for each time interval. Third-order polynomials applied to all data points and estimates based on the Preece-Baines model applied to grand means for each age group provided similar fits. Both indicated secular changes in body size between the two intervals. Secular increases in height and weight between 1978-99 and 2000-15 were especially apparent between 13 and 16 years of age, but estimated ages at peak height velocity (13.01 and 12.91 years) and peak weight velocity (13.86 and 13.77 years) did not differ between the time intervals. Although the body size of youth soccer players increased between 1978-99 and 2000-15, estimated ages at peak height velocity and peak weight velocity did not change. The increase in height and weight likely reflected improved health and nutritional conditions, in addition to the selectivity of soccer reflected in systematic selection and retention of players advanced in maturity status, and exclusion of late maturing players beginning at about 12-13 years of age. Enhanced training programs aimed at the development of strength and power are probably an additional factor contributing to secular increases in body weight.
Ultrasonic tracking of shear waves using a particle filter.
Ingle, Atul N; Ma, Chi; Varghese, Tomy
2015-11-01
This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques.
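A minimal bootstrap particle filter for smoothing a time-to-peak curve is sketched below in Python, with the hidden state modeled as a non-decreasing random walk and the shear wave velocity recovered from the slope of the smoothed curve; the noise levels and state model are illustrative, not those of the paper.

```python
import numpy as np

def smooth_ttp(ttp_noisy_s, n_particles=2000, process_std=1.5e-4,
               meas_std=2e-4, seed=0):
    """Bootstrap particle filter: the hidden state is the noiseless
    time-to-peak, modeled as a non-decreasing random walk over lateral
    position; measurements are the noisy TTP values.  Returns the
    filtered mean.  Noise levels are illustrative only."""
    rng = np.random.default_rng(seed)
    particles = ttp_noisy_s[0] + meas_std * rng.standard_normal(n_particles)
    est = [particles.mean()]
    for z in ttp_noisy_s[1:]:
        # propagate with non-negative increments (wave arrives later with distance)
        particles = particles + np.abs(process_std * rng.standard_normal(n_particles))
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)   # Gaussian likelihood
        w /= w.sum()
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]  # resample
        est.append(particles.mean())
    return np.asarray(est)

# Synthetic example: shear wave at 3 m/s crossing 15 mm of tissue.
lat = np.linspace(0.0, 15.0, 40)                             # mm
ttp = lat * 1e-3 / 3.0 + 2e-4 * np.random.default_rng(1).standard_normal(40)
swv = 1e-3 / np.polyfit(lat, smooth_ttp(ttp), 1)[0]          # slope -> m/s
print(round(swv, 2))   # should come out near the 3 m/s used above
```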
NASA Astrophysics Data System (ADS)
Rune Karlsen, Stein; Anderson, Helen B.; van der Wal, René; Bremset Hansen, Brage
2018-02-01
Efforts to estimate plant productivity using satellite data can be frustrated by the presence of cloud cover. We developed a new method to overcome this problem, focussing on the high-arctic archipelago of Svalbard where extensive cloud cover during the growing season can prevent plant productivity from being estimated over large areas. We used a field-based time-series (2000-2009) of live aboveground vascular plant biomass data and a recently processed cloud-free MODIS-Normalised Difference Vegetation Index (NDVI) data set (2000-2014) to estimate, on a pixel-by-pixel basis, the onset of plant growth. We then summed NDVI values from onset of spring to the average time of peak NDVI to give an estimate of annual plant productivity. This remotely sensed productivity measure was then compared, at two different spatial scales, with the peak plant biomass field data. At both the local scale, surrounding the field data site, and the larger regional scale, our NDVI measure was found to predict plant biomass (adjusted R² = 0.51 and 0.44, respectively). The commonly used ‘maximum NDVI’ plant productivity index showed no relationship with plant biomass, likely due to some years having very few cloud-free images available during the peak plant growing season. Thus, we propose this new summed NDVI from onset of spring to time of peak NDVI as a proxy of large-scale plant productivity for regions such as the Arctic where climatic conditions restrict the availability of cloud-free images.
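The productivity proxy itself is simply the sum of NDVI values from onset of growth to the average date of peak NDVI. The Python sketch below uses a plain threshold to define onset, which is a simplification of the paper's onset-detection method, and synthetic composite data.

```python
import numpy as np

def summed_ndvi(doy, ndvi, peak_doy, onset_threshold=0.3):
    """Plant-productivity proxy: sum of NDVI from onset of growth
    (first day NDVI exceeds a threshold -- a simplification of the
    paper's onset criterion) to the average day of peak NDVI."""
    doy, ndvi = np.asarray(doy), np.asarray(ndvi)
    above = np.nonzero(ndvi >= onset_threshold)[0]
    if above.size == 0:
        return 0.0
    onset = doy[above[0]]
    sel = (doy >= onset) & (doy <= peak_doy)
    return float(ndvi[sel].sum())

# Hypothetical 8-day composite series for one 1 km pixel.
doy = np.arange(153, 250, 8)                     # June to September
ndvi = np.minimum(np.clip(0.02 * (doy - 160), 0, None), 0.55)
print(summed_ndvi(doy, ndvi, peak_doy=215))
```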
George, Jason; Abdulla, Rami Khoury; Yeow, Raymond; Aggarwal, Anshul; Boura, Judith; Wegner, James; Franklin, Barry A
2017-02-15
Our increasingly sedentary lifestyle is associated with a heightened risk of obesity, diabetes, heart disease, and cardiovascular mortality. Using the recently developed heart rate index formula in 843 patients (mean ± SD age 62.3 ± 15.7 years) who underwent 24-hour ambulatory electrocardiographic (ECG) monitoring, we estimated average and peak daily energy expenditure, expressed as metabolic equivalents (METs), and related these data to subsequent hospital encounters and health care costs. In this cohort, estimated daily average and peak METs were 1.7 ± 0.7 and 5.5 ± 2.1, respectively. Patients who achieved daily bouts of peak energy expenditure ≥5 METs had fewer hospital encounters (p = 0.006) and median health care costs that were nearly 50% lower (p < 0.001) than their counterparts who attained <5 METs. In patients whose body mass index was ≥30 kg/m², there were significant differences in health care costs depending on whether they achieved <5 or ≥5 METs estimated by ambulatory ECG monitoring (p = 0.005). Interestingly, patients who achieved ≥5 METs had lower and no significant difference in their health care costs, regardless of their body mass index (p = 0.46). Patients with previous percutaneous coronary intervention who achieved ≥5 METs had lower health care costs (p = 0.044) and fewer hospital encounters (p = 0.004) than those who achieved <5 METs. In conclusion, average and peak daily energy expenditures estimated from ambulatory ECG monitoring may provide useful information regarding health care utilization in patients with and without previous percutaneous coronary intervention, irrespective of body habitus. Our findings are the first to link lower intensities of peak daily energy expenditure, estimated from ambulatory ECG monitoring, with increased health care utilization. Copyright © 2016 Elsevier Inc. All rights reserved.
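For reference, one published form of the heart rate index relation estimates METs as roughly six times the ratio of activity heart rate to resting heart rate, minus five (attributed to Wicks and colleagues); the abstract does not state the exact coefficients used, so the Python sketch below should be read as an assumption.

```python
def mets_from_heart_rate(hr_bpm, hr_rest_bpm):
    """Heart rate index estimate of energy expenditure in METs.
    Uses the form METs ~= 6 * (HR / HR_rest) - 5; the abstract does not
    confirm these exact coefficients, so treat them as an assumption."""
    return 6.0 * (hr_bpm / hr_rest_bpm) - 5.0

# Example: with a resting HR of 70 bpm, a peak ambulatory HR of 122 bpm
# implies a bout of roughly 5.5 METs (hypothetical heart rates).
print(round(mets_from_heart_rate(122, 70), 1))
```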
NASA Astrophysics Data System (ADS)
Das, M.; Nath, P.; Sarkar, D.
2016-02-01
In this article, the effect of etching current density (J) on the microstructural, optical, and electrical properties of a photoelectrochemically prepared heterostructure is reported. The prepared samples are characterized by FESEM, XRD, UV-visible, Raman and photoluminescence (PL) spectra and current-voltage (I-V) characteristics. FESEM shows the presence of a mixture of randomly distributed meso- and micro-pores. The porous layer thickness, determined from cross-sectional SEM views, is proportional to J. XRD shows a crystalline nature, but the extent of crystallinity gradually decreases with increasing J. Raman spectra show a large red-shift and asymmetric broadening with respect to crystalline silicon (c-Si). UV-visible reflectance and PL show a blue shift in peaks with increasing J. The I-V characteristics are analyzed by the conventional thermionic emission (TE) model and Cheung's model to estimate the barrier height (φb), ideality factor (n), and series resistance (Rs) for comparison between the two models. The latter model is found to fit better.
Kicker field simulation and measurement for the muon g-2 experiment at FNAL
NASA Astrophysics Data System (ADS)
Chang, Seung Pyo; Kim, Young Im; Choi, Jihoon; Semertzidis, Yannis; muon g-2 experiment Collaboration
2017-01-01
In the Muon g-2 experiment, the muon beam is injected into the storage ring on a slightly tilted orbit whose center is 77 mm away from the center of the ring. A kicker is needed to move the muon beam onto the central orbit. A magnetic kicker was designed for the experiment, requiring a field integral of about 0.1 T·m; a peak current pulse of 4200 A is needed to produce this field integral. This strong kicker pulse could induce unwanted eddy currents, which could spoil the main magnetic field of the storage ring and thus pose a critical threat to the precision of the experiment. A kicker field simulation was carried out using OPERA to estimate these effects. The kicker field is also to be measured using the Faraday effect, and the measurement was tested in the lab before installation in the experiment area. In this presentation, the simulation and measurement results will be discussed. This work was supported by IBS-R017-D1-2016-a00.
NASA Astrophysics Data System (ADS)
Wan, Xiaodong; Wang, Yuanxun; Zhao, Dawei; Huang, YongAn
2017-09-01
Our study aims to develop an effective quality monitoring system for small scale resistance spot welding of titanium alloy. The measured electrical signals were interpreted in combination with the nugget development. Features were extracted from the dynamic resistance and electrode voltage curves. A higher welding current generally indicated a lower overall dynamic resistance level. A larger electrode voltage peak and higher change rate of electrode voltage could be detected under a smaller electrode force or higher welding current condition. Variation of the extracted features and weld quality was found to be more sensitive to changes in welding current than in electrode force. Different neural network models were proposed for weld quality prediction. The back-propagation neural network was more suitable for failure load estimation, whereas the probabilistic neural network model was more appropriate for quality level classification. A real-time, on-line weld quality monitoring system may be developed by taking advantage of both methods.
NASA Astrophysics Data System (ADS)
Liu, Yang; Gao, Bo; Gong, Min; Shi, Ruiying
2017-06-01
The influence of a GaN layer as a sub-quantum well for an AlGaN/GaN/AlGaN double barrier resonant tunneling diode (RTD) on device performance has been investigated by means of numerical simulation. The introduction of the GaN layer as the sub-quantum well turns the dominant transport mechanism of RTD from the 3D-2D model to the 2D-2D model and increases the energy difference between tunneling energy levels. It can also lower the effective height of the emitter barrier. Consequently, the peak current and peak-to-valley current difference of RTD have been increased. The optimal GaN sub-quantum well parameters are found through analyzing the electrical performance, energy band, and transmission coefficient of RTD with different widths and depths of the GaN sub-quantum well. The most pronounced electrical parameters, a peak current density of 5800 kA/cm², a peak-to-valley current difference of 1.466 A, and a peak-to-valley current ratio of 6.35, could be achieved by designing RTD with the active region structure of GaN/Al0.2Ga0.8N/GaN/Al0.2Ga0.8N (3 nm/1.5 nm/1.5 nm/1.5 nm).
Simurda, J; Simurdová, M; Bravený, P; Sumbera, J
1992-01-01
1. The slow inward current component related to contraction (Isic) was studied in voltage clamp experiments on canine ventricular trabeculae at 30 degrees C with the aims of (a) estimating its relation to electrogenic Na(+)-Ca2+ exchange and (b) comparing it with similar currents as reported in cardiac myocytes. 2. Isic may be recorded under conditions of augmented contractility in response to depolarizing pulses below the threshold of the classic slow inward current (presumably mediated by L-type Ca2+ channels). In responses to identical depolarizing clamp pulses the peak value of Isic is directly related to the amplitude of contraction (Fmax). Isic peaks about 60 ms after the onset of depolarization and declines with a half-time of about 110 ms. 3. The voltage threshold of Isic activation is the same as the threshold of contraction. The positive inotropic clamp preconditions shift both thresholds to more negative values of membrane voltage, i.e. below the threshold of the classic slow inward current. 4. Isic may also be recorded as a slowly decaying inwardly directed current 'tail' after depolarizing pulses. In this representation the peak value of Isic changes with duration of the depolarizing pulses, again in parallel with Fmax. In response to pulses shorter than 100 ms both variables increase with depolarization time. If initial conditions remain constant, further prolongation of the pulse does not significantly influence either one (tail currents follow a common envelope). 5. Isic differs from the classic slow inward current by: (a) its direct relation to contraction, (b) the slower decay of the current tail on repolarization, (c) slower restitution corresponding to the mechanical restitution, (d) its relative insensitivity to Ca(2+)-blocking agents (the decrease of Isic is secondary to their negative inotropic effect) and (e) its disappearance after Sr2+ substitution for Ca2+. 6. The manifestations of Isic in multicellular preparations do not differ significantly from those reported in isolated myocytes (in contrast to calcium current). 7. The analysis of the correlation between Isic and Fmax transients during trains of identical test depolarizing pulses at variable extra- and intracellular ionic concentrations (changes of [Ca2+]o, 50% Li+ substitution for Na+, strophanthidin) indicates that the observed effects conform to the predictions based on a quantitative model of Na(+)-Ca2+ exchange. 8. It is concluded that Isic is activated by a transient increase of [Ca2+]i, in consequence of the release from the reticular stores. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:1293284
Estimation of Pharyngeal Collapsibility During Sleep by Peak Inspiratory Airflow.
Azarbarzin, Ali; Sands, Scott A; Taranto-Montemurro, Luigi; Oliveira Marques, Melania D; Genta, Pedro R; Edwards, Bradley A; Butler, James; White, David P; Wellman, Andrew
2017-01-01
Pharyngeal critical closing pressure (Pcrit) or collapsibility is a major determinant of obstructive sleep apnea (OSA) and may be used to predict the success/failure of non-continuous positive airway pressure (CPAP) therapies. Since its assessment involves overnight manipulation of CPAP, we sought to validate the peak inspiratory flow during natural sleep (without CPAP) as a simple surrogate measurement of collapsibility. Fourteen patients with OSA attended overnight polysomnography with pneumotachograph airflow. The middle third of the night (non-rapid eye movement sleep [NREM]) was dedicated to assessing Pcrit in passive and active states via abrupt and gradual CPAP pressure drops, respectively. Pcrit is the extrapolated CPAP pressure at which flow is zero. Peak and mid-inspiratory flow off CPAP was obtained from all breaths during sleep (excluding arousal) and compared with Pcrit. Active Pcrit, measured during NREM sleep, was strongly correlated with both peak and mid-inspiratory flow during NREM sleep (r = -0.71, p < .005 and r = -0.64, p < .05, respectively), indicating that active pharyngeal collapsibility can be reliably estimated from simple airflow measurements during polysomnography. However, there was no significant relationship between passive Pcrit, measured during NREM sleep, and peak or mid-inspiratory flow obtained from NREM sleep. Flow measurements during REM sleep were not significantly associated with active or passive Pcrit. Our study demonstrates the feasibility of estimating active Pcrit using flow measurements in patients with OSA. This method may enable clinicians to estimate pharyngeal collapsibility without sophisticated equipment and potentially aid in the selection of patients for non-positive airway pressure therapies. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
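A minimal sketch of the Pcrit idea named above: extrapolate a linear fit of peak inspiratory flow against CPAP mask pressure to the pressure at which flow reaches zero. The pressures and flows below are illustrative values, not data from the study.

import numpy as np

def pcrit_from_flow(cpap_cmH2O, peak_flow_L_s):
    """Extrapolate the CPAP pressure at which peak inspiratory flow reaches zero."""
    slope, intercept = np.polyfit(cpap_cmH2O, peak_flow_L_s, 1)
    return -intercept / slope              # pressure where the fitted flow is zero

# Example: flow-limited breaths recorded during stepwise CPAP pressure drops
pressure = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # cmH2O
peak_flow = np.array([0.05, 0.12, 0.21, 0.27, 0.36])      # L/s
print(f"Pcrit ~ {pcrit_from_flow(pressure, peak_flow):.1f} cmH2O")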
Cost estimate of electricity produced by TPV
NASA Astrophysics Data System (ADS)
Palfinger, Günther; Bitnar, Bernd; Durisch, Wilhelm; Mayor, Jean-Claude; Grützmacher, Detlev; Gobrecht, Jens
2003-05-01
A crucial parameter for the market penetration of TPV is its electricity production cost. In this work a detailed cost estimate is performed for a Si photocell based TPV system, which was developed for electrically self-powered operation of a domestic heating system. The results are compared to a rough estimate of the cost of electricity for a projected GaSb based system. For the calculation of the price of electricity, a lifetime of 20 years, an interest rate of 4.25% per year and maintenance costs of 1% of the investment are presumed. To determine the production cost of TPV systems with a power of 12-20 kW, the costs of the TPV components and 100 EUR per kW(el,peak) for assembly and miscellaneous were estimated. Alternatively, the system cost for the GaSb system was derived from the cost of the photocells and from the assumption that they account for 35% of the total system cost. The calculation was done for four different TPV scenarios: a Si based prototype system with existing technology (ηsys = 1.0%), leading to 3000 EUR per kW(el,peak); an optimized Si based system using conventional, available technology (ηsys = 1.5%), leading to 900 EUR per kW(el,peak); a further improved system with future technology (ηsys = 5%), leading to 340 EUR per kW(el,peak); and a GaSb based system (ηsys = 12.3% with recuperator), leading to 1900 EUR per kW(el,peak). Thus, prices of electricity from 6 to 25 EUR cents per kWh(el) (including gas at about 3.5 EUR cents per kWh) were calculated and compared with those of fuel cells (31 EUR cents per kWh) and gas engines (23 EUR cents per kWh).
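A sketch of the levelized electricity cost calculation implied by the cost assumptions above (20-year lifetime, 4.25 %/yr interest, maintenance at 1% of the investment per year, gas at about 3.5 EUR cents per kWh). The annual full-load operating hours are an illustrative assumption, so the printed numbers will not reproduce the paper's scenario results exactly.

def tpv_electricity_cost(system_cost_eur_per_kw, full_load_hours=5000.0,
                         lifetime_yr=20, interest=0.0425, maintenance_frac=0.01,
                         gas_cost_cents_per_kwh=3.5):
    # Capital recovery (annuity) factor spreads the investment over the lifetime
    annuity = interest / (1.0 - (1.0 + interest) ** -lifetime_yr)
    annual_cost_per_kw = system_cost_eur_per_kw * (annuity + maintenance_frac)
    capital_cents_per_kwh = 100.0 * annual_cost_per_kw / full_load_hours
    return capital_cents_per_kwh + gas_cost_cents_per_kwh

# Example: prototype (3000 EUR/kW) vs. improved Si system (340 EUR/kW)
for cost in (3000.0, 340.0):
    print(f"{cost:6.0f} EUR/kW -> ~{tpv_electricity_cost(cost):.1f} EUR cents/kWh")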
NASA Astrophysics Data System (ADS)
Fivet, V.; Quinet, P.; Bautista, M. A.
2016-01-01
Aims: Accurate and reliable atomic data for lowly ionized Fe-peak species (Sc, Ti, V, Cr, Mn, Fe, Co, and Ni) are of paramount importance for analyzing the high-resolution astrophysical spectra currently available. The third spectra of several iron group elements have been observed in different galactic sources, such as Herbig-Haro objects in the Orion Nebula and stars like Eta Carinae. However, forbidden M1 and E2 transitions between low-lying metastable levels of doubly charged iron-peak ions have been investigated very little so far, and radiative rates for those lines remain sparse or nonexistent. We attempt to fill that gap and provide transition probabilities for the most important forbidden lines of all doubly ionized iron-peak elements. Methods: We carried out a systematic study of the electronic structure of doubly ionized Fe-peak species. The magnetic dipole (M1) and electric quadrupole (E2) transition probabilities were computed using the pseudo-relativistic Hartree-Fock (HFR) code of Cowan and the central Thomas-Fermi-Dirac-Amaldi potential approximation implemented in AUTOSTRUCTURE. This multiplatform approach allowed for consistency checks and intercomparison and has proven very useful in many previous works for estimating the uncertainties affecting the radiative data. Results: We present transition probabilities for the M1 and E2 forbidden lines depopulating the metastable even levels belonging to the 3d^k and 3d^(k-1)4s configurations in Sc III (k = 1), Ti III (k = 2), V III (k = 3), Cr III (k = 4), Mn III (k = 5), Fe III (k = 6), Co III (k = 7), and Ni III (k = 8).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timonen, Hilkka; Cubison, Mike; Aurela, Minna
The applicability, methods and limitations of constrained peak fitting on mass spectra of low mass resolving power (m/Δm ≈ 50-500) recorded with a time-of-flight aerosol chemical speciation monitor (ToF-ACSM) are explored. Calibration measurements as well as ambient data are used to exemplify the methods that should be applied to maximise data quality and assess confidence in peak-fitting results. Sensitivity analyses and basic peak fit metrics such as normalised ion separation are employed to demonstrate which peak-fitting analyses commonly performed in high-resolution aerosol mass spectrometry are appropriate to perform on spectra of this resolving power. Information on aerosol sulfate, nitrate, sodium chloride, methanesulfonic acid as well as semi-volatile metal species retrieved from these methods is evaluated. The constants in a commonly used formula for the estimation of the mass concentration of hydrocarbon-like organic aerosol may be refined based on peak-fitting results. Lastly, application of a recently published parameterisation for the estimation of carbon oxidation state to ToF-ACSM spectra is validated for a range of organic standards and its use demonstrated for ambient urban data.
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the Rational Method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent.
The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for ea
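One of the traditional techniques evaluated in the report above, the Rational Method, reduces to the formula Q = C·i·A. A minimal sketch follows; with i in inches per hour and A in acres, Q is in cubic feet per second (the exact unit conversion factor is about 1.008 and is conventionally taken as 1). The example values are illustrative, not data from the report.

def rational_method_peak_q(runoff_coeff, rainfall_intensity_in_hr, area_acres):
    """Rational Method peak discharge, Q = C*i*A, in approximate cubic feet per second."""
    return runoff_coeff * rainfall_intensity_in_hr * area_acres

# Example: an urban watershed with C = 0.39 (the average calibrated urban value
# reported above), a 2 in/hr design intensity, and a 300-acre drainage area
print(rational_method_peak_q(0.39, 2.0, 300.0))   # ~234 cfs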
A New Estimate of North American Mountain Snow Accumulation From Regional Climate Model Simulations
NASA Astrophysics Data System (ADS)
Wrzesien, Melissa L.; Durand, Michael T.; Pavelsky, Tamlin M.; Kapnick, Sarah B.; Zhang, Yu; Guo, Junyi; Shum, C. K.
2018-02-01
Despite the importance of mountain snowpack to understanding the water and energy cycles in North America's montane regions, no reliable mountain snow climatology exists for the entire continent. We present a new estimate of mountain snow water equivalent (SWE) for North America from regional climate model simulations. Climatological peak SWE in North America mountains is 1,006 km³, 2.94 times larger than previous estimates from reanalyses. By combining this mountain SWE value with the best available global product in nonmountain areas, we estimate peak North America SWE of 1,684 km³, 55% greater than previous estimates. In our simulations, the date of maximum SWE varies widely by mountain range, from early March to mid-April. Though mountains comprise 24% of the continent's land area, we estimate that they contain 60% of North American SWE. This new estimate is a suitable benchmark for continental- and global-scale water and energy budget studies.
Peak-Seeking Control Using Gradient and Hessian Estimates
NASA Technical Reports Server (NTRS)
Ryan, John J.; Speyer, Jason L.
2010-01-01
A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
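A rough sketch of the peak-seeking idea described above, for a two-input one-output performance function. The flight method uses a linear time-varying Kalman filter; this sketch substitutes a batch least-squares fit of a local quadratic model to recent (coordinate, magnitude) measurements and then takes a Newton step toward the extremum, so the estimator itself is an assumption made for brevity.

import numpy as np

def fit_quadratic(X, f):
    """Least-squares fit f ~ c + g.x + 0.5 x'Hx; returns gradient g and Hessian H."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, 0.5 * x1**2, x1 * x2, 0.5 * x2**2])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    g = coef[1:3]
    H = np.array([[coef[3], coef[4]], [coef[4], coef[5]]])
    return g, H

# Example: seek the minimum of a quadratic performance function
def perf(x):                        # true minimum at (2, -1)
    return (x[0] - 2.0)**2 + 2.0 * (x[1] + 1.0)**2

rng = np.random.default_rng(1)
x = np.array([0.0, 0.0])
for _ in range(10):
    probes = x + 0.2 * rng.standard_normal((12, 2))       # dithered measurements near x
    values = np.array([perf(p) for p in probes])
    g, H = fit_quadratic(probes - x, values)               # local gradient and Hessian
    x = x - np.linalg.solve(H, g)                          # Newton step toward the extremum
print(x)                                                   # converges to ~ [2, -1]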
Advance Technology Satellites in the Commercial Environment. Volume 2: Final Report
NASA Technical Reports Server (NTRS)
1984-01-01
A forecast of transponder requirements was obtained. Certain assumptions about system configurations are implicit in this process. The factors included are interpolation of baseline year values to produce yearly figures, estimation of satellite capture, effects of peak hours and the time-zone staggering of peak hours, circuit requirements for an acceptable grade of service, capacity of satellite transponders (including various compression methods where applicable), and requirements for spare transponders in orbit. The geographical distribution of traffic requirements was estimated.
Glutathionylation-Dependence of Na+-K+-Pump Currents Can Mimic Reduced Subsarcolemmal Na+ Diffusion
Garcia, Alvaro; Liu, Chia-Chi; Cornelius, Flemming; Clarke, Ronald J.; Rasmussen, Helge H.
2016-01-01
The existence of a subsarcolemmal space with restricted diffusion for Na+ in cardiac myocytes has been inferred from a transient peak electrogenic Na+-K+ pump current beyond steady state on reexposure of myocytes to K+ after a period of exposure to K+-free extracellular solution. The transient peak current is attributed to enhanced electrogenic pumping of Na+ that accumulated in the diffusion-restricted space during pump inhibition in K+-free extracellular solution. However, there are no known physical barriers that account for such restricted Na+ diffusion, and we examined if changes of activity of the Na+-K+ pump itself cause the transient peak current. Reexposure to K+ reproduced a transient current beyond steady state in voltage-clamped ventricular myocytes as reported by others. Persistence of it when the Na+ concentration in patch pipette solutions perfusing the intracellular compartment was high and elimination of it with K+-free pipette solution could not be reconciled with restricted subsarcolemmal Na+ diffusion. The pattern of the transient current early after pump activation was dependent on transmembrane Na+- and K+ concentration gradients suggesting the currents were related to the conformational poise imposed on the pump. We examined if the currents might be accounted for by changes in glutathionylation of the β1 Na+-K+ pump subunit, a reversible oxidative modification that inhibits the pump. Susceptibility of the β1 subunit to glutathionylation depends on the conformational poise of the Na+-K+ pump, and glutathionylation with the pump stabilized in conformations equivalent to those expected to be imposed on voltage-clamped myocytes supported this hypothesis. So did elimination of the transient K+-induced peak Na+-K+ pump current when we included glutaredoxin 1 in patch pipette solutions to reverse glutathionylation. We conclude that transient K+-induced peak Na+-K+ pump current reflects the effect of conformation-dependent β1 pump subunit glutathionylation, not restricted subsarcolemmal diffusion of Na+. PMID:26958887
Garcia, Alvaro; Liu, Chia-Chi; Cornelius, Flemming; Clarke, Ronald J; Rasmussen, Helge H
2016-03-08
The existence of a subsarcolemmal space with restricted diffusion for Na(+) in cardiac myocytes has been inferred from a transient peak electrogenic Na(+)-K(+) pump current beyond steady state on reexposure of myocytes to K(+) after a period of exposure to K(+)-free extracellular solution. The transient peak current is attributed to enhanced electrogenic pumping of Na(+) that accumulated in the diffusion-restricted space during pump inhibition in K(+)-free extracellular solution. However, there are no known physical barriers that account for such restricted Na(+) diffusion, and we examined if changes of activity of the Na(+)-K(+) pump itself cause the transient peak current. Reexposure to K(+) reproduced a transient current beyond steady state in voltage-clamped ventricular myocytes as reported by others. Persistence of it when the Na(+) concentration in patch pipette solutions perfusing the intracellular compartment was high and elimination of it with K(+)-free pipette solution could not be reconciled with restricted subsarcolemmal Na(+) diffusion. The pattern of the transient current early after pump activation was dependent on transmembrane Na(+)- and K(+) concentration gradients suggesting the currents were related to the conformational poise imposed on the pump. We examined if the currents might be accounted for by changes in glutathionylation of the β1 Na(+)-K(+) pump subunit, a reversible oxidative modification that inhibits the pump. Susceptibility of the β1 subunit to glutathionylation depends on the conformational poise of the Na(+)-K(+) pump, and glutathionylation with the pump stabilized in conformations equivalent to those expected to be imposed on voltage-clamped myocytes supported this hypothesis. So did elimination of the transient K(+)-induced peak Na(+)-K(+) pump current when we included glutaredoxin 1 in patch pipette solutions to reverse glutathionylation. We conclude that transient K(+)-induced peak Na(+)-K(+) pump current reflects the effect of conformation-dependent β1 pump subunit glutathionylation, not restricted subsarcolemmal diffusion of Na(+). Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Demand Side Management: An approach to peak load smoothing
NASA Astrophysics Data System (ADS)
Gupta, Prachi
A preliminary national-level analysis was conducted to determine whether Demand Side Management (DSM) programs introduced by electric utilities since 1992 have made any progress towards their stated goal of reducing peak load demand. Estimates implied that DSM has a very small effect on peak load reduction and there is substantial regional and end-user variability. A limited scholarly literature on DSM also provides evidence in support of a positive effect of demand response programs. Yet, none of these studies examine the question of how DSM affects peak load at the micro-level by influencing end-users' response to prices. After nearly three decades of experience with DSM, controversy remains over how effective these programs have been. This dissertation considers regional analyses that explore both demand-side solutions and supply-side interventions. On the demand side, models are estimated to provide in-depth evidence of end-user consumption patterns for each North American Electric Reliability Corporation (NERC) region, helping to identify sectors in regions that have made a substantial contribution to peak load reduction. The empirical evidence supports the initial hypothesis that there is substantial regional and end-user variability of reductions in peak demand. These results are quite robust in rapidly-urbanizing regions, where air conditioning and lighting load is substantially higher, and regions where the summer peak is more pronounced than the winter peak. It is also evident from the regional experiences that active government involvement, as shaped by state regulations in the last few years, has been successful in promoting DSM programs, and perhaps for the same reason we witness an uptick in peak load reductions in the years 2008 and 2009. On the supply side, we estimate the effectiveness of DSM programs by analyzing the growth of capacity margin with the introduction of DSM programs. The results indicate that DSM has been successful in offsetting the need for additional production capacity by the means of demand response measures, but the success is limited to only a few regions. The rate of progress in the future will depend on a wide range of improved technologies and a continuous government monitoring for successful adoption of demand response programs to manage growing energy demand.
The target material influence on the current pulse during high power pulsed magnetron sputtering
NASA Astrophysics Data System (ADS)
Moens, Filip; Konstantinidis, Stéphanos; Depla, Diederik
2017-10-01
The current-time characteristic during high power pulsed magnetron sputtering is measured under identical conditions for seventeen different target materials. Based on physical processes such as gas rarefaction, ion-induced electron emission, and electron impact ionization, two test parameters were derived that correlate significantly with specific features of the current-time characteristic: (i) the peak current correlates with the momentum transfer between the sputtered material and the argon gas, while (ii) the current plateau observed after the peak is connected to the metal ionization rate.
Remote sensing of the ionospheric F layer by use of O I 6300-Å and O I 1356-Å observations
NASA Technical Reports Server (NTRS)
Chandra, S.; Reed, E. I.; Meier, R. R.; Opal, C. B.; Hicks, G. T.
1975-01-01
The possibility of using airglow techniques for estimating the electron density and height of the F layer is studied on the basis of a simple relationship between the height of the F2 peak and the column emission rates of the O I 6300 Å and O I 1356 Å lines. The feasibility of this approach is confirmed by a numerical calculation of F2 peak heights and electron densities from simultaneous measurements of O I 6300 Å and O I 1356 Å emissions obtained with earth-facing photometers carried by the Ogo 4 satellite. Good agreement is established with the F2 peak heights estimated from top-side and bottom-side ionospheric sounding.
Massive Statistics of VLF-Induced Ionospheric Disturbances
NASA Astrophysics Data System (ADS)
Pailoor, N.; Cohen, M.; Golkowski, M.
2017-12-01
The impact of lightning on the D-region of the ionosphere has been measured by Very Low Frequency (VLF) remote sensing, and can be seen through the observation of Early/Fast events. Previous research has indicated that several factors control the behavior and occurrence of these events, including the transmitter-receiver geometry, as well as the peak current and polarity of the strike. Unfortunately, since each event is unique due to the wide variety of impacting factors, it is difficult to make broad inferences about the interactions between the lightning and ionosphere. By investigating a large database of lightning-induced disturbances over a span of several years and over a continental-scale region, we seek to quantify the relationship between geometry, lightning parameters, and the apparent disturbance of the ionosphere as measured with VLF transmitters. We began with a set of 860,000 cases where an intense lightning stroke above 150 kA occurred within 300 km of a transmitter-receiver path. To then detect ionospheric disturbances from the large volume of VLF data and lightning incidents, we applied a number of classification methods to the actual VLF amplitude data, and find that the most accurate is a convolutional neural network, which yielded a detection efficiency of 95-98% and a false positive rate less than 25%. Using this model, we were able to assemble a database of more than 97,000 events, with each event stored with its corresponding time, date, receiver, transmitter, and lightning parameters. Estimates for the peak and slope of each disruption were also calculated. From this data, we were able to chart the relationships between geometry and lightning parameters (peak current and polarity) and the occurrence probability, perturbation intensity, and recovery time of the VLF perturbation. The results of this analysis are presented here.
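A minimal sketch of the kind of small one-dimensional convolutional classifier that could label fixed-length VLF amplitude windows as perturbed or unperturbed. The layer sizes, window length, and two-class output here are assumptions for illustration, not the authors' architecture or training setup.

import torch
import torch.nn as nn

class VLFEventClassifier(nn.Module):
    def __init__(self, window_len=600):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(16 * (window_len // 16), 2)   # event / no event

    def forward(self, x):                  # x: (batch, 1, window_len) amplitude windows
        z = self.features(x)
        return self.classifier(z.flatten(1))

# Example forward pass on a batch of 4 synthetic amplitude windows
model = VLFEventClassifier()
logits = model(torch.randn(4, 1, 600))
print(logits.shape)                        # torch.Size([4, 2])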
Cloud-to-ground lightning activity in Colombia: A 14-year study using lightning location system data
NASA Astrophysics Data System (ADS)
Herrera, J.; Younes, C.; Porras, L.
2018-05-01
This paper presents the analysis of 14 years of cloud-to-ground lightning activity observation in Colombia using lightning location system (LLS) data. The first Colombian LLS operated from 1997 to 2001. After a few years, this system was upgraded and a new LLS has been operating since 2007. Data obtained from these two systems were analyzed in order to obtain lightning parameters used in designing lightning protection systems. The flash detection efficiency was estimated using average peak current maps and some theoretical results previously published. Lightning flash multiplicity was evaluated using a stroke grouping algorithm, resulting in average values of about 1.0 and 1.6 for positive and negative flashes, respectively, for both LLS. The time variation of this parameter changes slightly over the years considered in this study. The first stroke peak current for negative and positive flashes shows median values close to 29 kA and 17 kA, respectively, for both networks, showing a great dependence on the flash detection efficiency. The average percentages of negative and positive flashes are 74.04% and 25.95%, respectively. The daily variation shows a peak between 23 and 02 h. The monthly variation of this parameter exhibits a bimodal behavior typical of regions located near the Equator. The lightning flash density was obtained by dividing the study area into 3 × 3 km cells, resulting in maximum average values of 25 and 35 flashes km⁻² yr⁻¹ for each network, respectively. A comparison of these results with global lightning activity hotspots was performed, showing good correlation. In addition, the lightning flash density variation with altitude shows an inverse relation between these two variables.
Knott, Jayne Fifield; Olimpio, Julio C.
1986-01-01
Estimation of the average annual rate of ground-water recharge to sand and gravel aquifers using elevated tritium concentrations in ground water is an alternative to traditional steady-state and water-balance recharge-rate methods. The concept of the tritium tracer method is that the average annual rate of ground-water recharge over a period of time can be calculated from the depth of the peak tritium concentration in the aquifer. Assuming that ground-water flow is vertically downward and that aquifer properties are reasonably homogeneous, and knowing the date of maximum tritium concentration in precipitation and the current depth to the tritium peak from the water table, the average recharge rate can be calculated. The method, which is a direct-measurement technique, was applied at two sites on Nantucket Island, Massachusetts. At site 1, the average annual recharge rate between 1964 and 1983 was 26.1 inches per year, or 68 percent of the average annual precipitation, and the estimated uncertainty is ±15 percent. At site 2, the multilevel water samplers were not constructed deep enough to determine the peak concentration of tritium in ground water. The tritium profile at site 2 resembles the upper part of the tritium profile at site 1 and indicates that the average recharge rate was at least 16.7 inches per year, or at least 44 percent of the average annual precipitation. The Nantucket tritium recharge rates clearly are higher than rates determined elsewhere in southeastern Massachusetts using the tritium, water-table-fluctuation, and water-balance (Thornthwaite) methods, regardless of the method or the area. Because the recharge potential on Nantucket is so high (runoff is only 2 percent of the total water balance), the tritium recharge rates probably represent the effective upper limit for ground-water recharge in this region. The recharge-rate values used by Guswa and LeBlanc (1985) and LeBlanc (1984) in their ground-water-flow computer models of Cape Cod are 20 to 30 percent lower than this upper limit. The accuracy of the tritium method is dependent on two key factors: the accuracy of the effective-porosity data, and the sampling interval used at the site. For some sites, the need for recharge-rate data may require a determination as statistically accurate as that which can be provided by the tritium method. However, the tritium method is more costly and more time consuming than the other methods because numerous wells must be drilled and installed and because many water samples must be analyzed for tritium, to a very small level of analytical detection. For many sites, a less accurate, less expensive, and faster method of recharge-rate determination might be more satisfactory. The factor that most seriously limits the usefulness of the tritium tracer method is the current depth of the tritium peak. Water with peak concentrations of tritium entered the ground more than 20 years ago, and, according to the Nantucket data, that water now is more than 100 feet below the land surface. This suggests that the tracer method will work only in sand and gravel aquifers that are exceedingly thick by New England standards. Conversely, the results suggest that the method may work in areas where saturated thicknesses are less than 100 feet and the rate of vertical ground-water movement is relatively slow, such as in till and in silt- and clay-rich sand and gravel deposits.
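A minimal sketch of the tritium tracer calculation described above: recharge is approximately the depth to the tritium peak below the water table times the effective porosity, divided by the years elapsed since the 1963-64 bomb-test peak in precipitation tritium. The depth, porosity, and sampling year below are illustrative, not the Nantucket site values.

def tritium_recharge_in_per_yr(depth_to_peak_ft, effective_porosity,
                               sampling_year, peak_year=1964):
    """Average annual recharge (inches/year) from the depth of the tritium peak."""
    depth_in = depth_to_peak_ft * 12.0
    years = sampling_year - peak_year
    return depth_in * effective_porosity / years

# Example: a 105-ft-deep tritium peak, effective porosity 0.35, sampled in 1983
print(f"{tritium_recharge_in_per_yr(105.0, 0.35, 1983):.1f} inches per year")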
Tidally induced turbulence in the Bermuda underwater cave-system
NASA Astrophysics Data System (ADS)
Molodtsov, S.; Anis, A.; Iliffe, T. M.
2016-02-01
This study presents results from field measurements of turbulence made in Bermuda's underwater cave-system. To the best of our knowledge, this is the first time that turbulence velocity measurements have been taken in an underwater cave-system. Water currents in caves are unaffected by surface waves and thus provide a unique opportunity to obtain clear signals of tidally induced turbulence. An acoustic Doppler velocimeter and acoustic Doppler current profiler were deployed in several cave locations during a period of six days. Power spectral density (PSD) of velocity fluctuations was estimated using the multitaper power spectral method. Turbulence kinetic energy dissipation rates, ε, were calculated based on the PSD and were found to exhibit a clear -5/3 slope within the inertial subrange. Measurement periods covered full diurnal cycles and estimates of ε showed a strong correlation with the tide phase, with values up to 10⁻³ W/kg during peak ebb and flood (horizontal velocities up to 0.35 m/s). Furthermore, ε was found to closely follow the wall boundary layer parametrization, ε = u*³/(κz), where u* is the friction velocity, κ is von Kármán's constant, and z is the height above the bed.
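A minimal sketch of the law-of-the-wall scaling quoted above, ε = u*³/(κz). The friction velocity here is estimated from a quadratic drag law with an assumed drag coefficient; both that coefficient and the example numbers are illustrative, not values from the study.

import numpy as np

KAPPA = 0.41                 # von Karman constant

def wall_layer_dissipation(u_star, z):
    """Dissipation rate eps = u*^3 / (kappa * z), in W/kg for SI inputs."""
    return u_star**3 / (KAPPA * z)

# Example: peak tidal flow of 0.35 m/s, assumed drag coefficient 2.5e-3, 1 m above the bed
u = 0.35
u_star = np.sqrt(2.5e-3) * u
print(f"eps ~ {wall_layer_dissipation(u_star, 1.0):.1e} W/kg")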
Discharging dynamics in an electrolytic cell
NASA Astrophysics Data System (ADS)
Feicht, Sarah E.; Frankel, Alexandra E.; Khair, Aditya S.
2016-07-01
We analyze the dynamics of a discharging electrolytic cell comprised of a binary symmetric electrolyte between two planar, parallel blocking electrodes. When a voltage is initially applied, ions in the electrolyte migrate towards the electrodes, forming electrical double layers. After the system reaches steady state and the external current decays to zero, the applied voltage is switched off and the cell discharges, with the ions eventually returning to a uniform spatial concentration. At voltages on the order of the thermal voltage VT = kBT/q ≈ 25 mV, where kB is Boltzmann's constant, T is temperature, and q is the charge of a proton, experiments on surfactant-doped nonpolar fluids observe that the temporal evolution of the external current during charging and discharging is not symmetric [V. Novotny and M. A. Hopper, J. Electrochem. Soc. 126, 925 (1979), 10.1149/1.2129195; P. Kornilovitch and Y. Jeon, J. Appl. Phys. 109, 064509 (2011), 10.1063/1.3554445]. In fact, at sufficiently large voltages (several VT), the current during discharging is no longer monotonic: it displays a "reverse peak" before decaying in magnitude to zero. We analyze the dynamics of discharging by solving the Poisson-Nernst-Planck equations governing ion transport via asymptotic and numerical techniques in three regimes. First, in the "linear regime" when the applied voltage V is formally much less than VT, the charging and discharging currents are antisymmetric in time; however, the potential and charge density profiles during charging and discharging are asymmetric. The current evolution is on the RC timescale of the cell, λD·L/D, where L is the width of the cell, D is the diffusivity of ions, and λD is the Debye length. Second, in the (experimentally relevant) thin-double-layer limit ε = λD/L ≪ 1, there is a "weakly nonlinear" regime defined by VT ≲ V ≲ VT·ln(1/ε), where the bulk salt concentration is uniform; thus the RC timescale of the evolution of the current magnitude persists. However, the nonlinear, voltage-dependent capacitance of the double layer is responsible for a break in temporal antisymmetry of the charging and discharging currents. Third, the reverse peak in the discharging current develops in a "strongly nonlinear" regime V ≳ VT·ln(1/ε), driven by neutral salt adsorption into the double layers and consequent bulk depletion during charging. The strongly nonlinear regime features current evolution over three timescales. The current decays in magnitude on the double-layer relaxation timescale, λD²/D; then grows exponentially in time towards the reverse peak on the diffusion timescale, L²/D, indicating that the reverse peak is the result of fast diffusion of ions from the double layer to the bulk. Following the reverse peak, the current decays exponentially to zero on the RC timescale. Notably, the current at the reverse peak and the time of the reverse peak saturate at large voltages V ≫ VT·ln(1/ε). We provide semi-analytic expressions for the saturated reverse-peak time and current, which can be used to infer charge carrier diffusivity and concentration from experiments.
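A quick numerical sketch of the three timescales named above, evaluated for illustrative parameter values (not those of the cited experiments), showing the ordering λD²/D ≪ λD·L/D ≪ L²/D in the thin-double-layer limit.

# Illustrative (assumed) cell parameters
lambda_D = 30e-9        # Debye length [m]
L = 10e-6               # cell width [m]
D = 1e-9                # ion diffusivity [m^2/s]

tau_RC = lambda_D * L / D          # RC (charging/discharging) timescale
tau_DL = lambda_D**2 / D           # double-layer relaxation timescale
tau_bulk = L**2 / D                # bulk diffusion timescale

print(f"eps = lambda_D/L = {lambda_D / L:.1e}")
print(f"tau_DL ~ {tau_DL:.2e} s, tau_RC ~ {tau_RC:.2e} s, tau_bulk ~ {tau_bulk:.2e} s")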
Kroncke, Brett M; Glazer, Andrew M; Smith, Derek K; Blume, Jeffrey D; Roden, Dan M
2018-05-01
Accurately predicting the impact of rare nonsynonymous variants on disease risk is an important goal in precision medicine. Variants in the cardiac sodium channel SCN5A (protein NaV1.5; voltage-dependent cardiac Na+ channel) are associated with multiple arrhythmia disorders, including Brugada syndrome and long QT syndrome. Rare SCN5A variants also occur in ≈1% of unaffected individuals. We hypothesized that in vitro electrophysiological functional parameters explain a statistically significant portion of the variability in disease penetrance. From a comprehensive literature review, we quantified the number of carriers presenting with and without disease for 1712 reported SCN5A variants. For 356 variants, data were also available for 5 NaV1.5 electrophysiological parameters: peak current, late/persistent current, steady-state V1/2 of activation and inactivation, and recovery from inactivation. We found that peak and late current significantly associate with Brugada syndrome (P < 0.001; ρ = −0.44; Spearman rank test) and long QT syndrome disease penetrance (P < 0.001; ρ = 0.37). Steady-state V1/2 of activation and recovery from inactivation associate significantly with Brugada syndrome and long QT syndrome penetrance, respectively. Continuous estimates of disease penetrance align with the current American College of Medical Genetics classification paradigm. NaV1.5 in vitro electrophysiological parameters are correlated with Brugada syndrome and long QT syndrome disease risk. Our data emphasize the value of in vitro electrophysiological characterization and of incorporating counts of affected and unaffected carriers to aid variant classification. This quantitative analysis of the electrophysiological literature should aid the interpretation of NaV1.5 variant electrophysiological abnormalities and help improve NaV1.5 variant classification. © 2018 American Heart Association, Inc.
Remote Sensing Characterization of Two-dimensional Wave Forcing in the Surf Zone
NASA Astrophysics Data System (ADS)
Carini, R. J.; Chickadel, C. C.; Jessup, A. T.
2016-02-01
In the surf zone, breaking waves drive longshore currents, transport sediment, shape bathymetry, and enhance air-sea gas and particle exchange. Furthermore, wave group forcing influences the generation and duration of rip currents. Wave breaking exhibits large gradients in space and time, making it challenging to measure in situ. Remote sensing technologies, specifically thermal infrared (IR) imagery, can provide detailed spatial and temporal measurements of wave breaking at the water surface. We construct two-dimensional maps of active wave breaking from IR imagery collected during the Surf Zone Optics Experiment in September 2010 at the US Army Corps of Engineers' Field Research Facility in Duck, NC. For each breaker identified in the camera's field of view, the crest-perpendicular length of the aerated breaking region (roller length) and wave direction are estimated and used to compute the wave energy dissipation rate. The resultant dissipation rate maps are analyzed over different time scales: peak wave period, infragravity wave period, and tidal wave period. For each time scale, spatial maps of wave breaking are used to characterize wave forcing in the surf zone for a variety of wave conditions. The following phenomena are examined: (1) wave dissipation rates over the bar (location of most intense breaking) have increased variance in infragravity wave frequencies, which are different from the peak frequency of the incoming wave field and different from the wave forcing variability at the shoreline, and (2) wave forcing has a wider spatial distribution during low tide than during high tide due to depth-limited breaking over the barred bathymetry. Future work will investigate the response of the variability in wave setup, longshore currents and rip currents, to the variability in wave forcing in the surf zone.
Weber, Martin; Motin, Leonid; Gaul, Simon; Beker, Friederike; Fink, Rainer H A; Adams, David J
2004-01-01
The effects of intravenous (i.v.) anaesthetics on nicotinic acetylcholine receptor (nAChR)-induced transients in intracellular free Ca2+ concentration ([Ca2+]i) and membrane currents were investigated in neonatal rat intracardiac neurons. In fura-2-loaded neurons, nAChR activation evoked a transient increase in [Ca2+]i, which was inhibited reversibly and selectively by clinically relevant concentrations of thiopental. The half-maximal concentration for thiopental inhibition of nAChR-induced [Ca2+]i transients was 28 μM, close to the estimated clinical EC50 (the clinically relevant half-maximal effective concentration) of thiopental. In fura-2-loaded neurons, voltage clamped at −60 mV to eliminate any contribution of voltage-gated Ca2+ channels, thiopental (25 μM) simultaneously inhibited nAChR-induced increases in [Ca2+]i and peak current amplitudes. Thiopental inhibited nAChR-induced peak current amplitudes in dialysed whole-cell recordings by ∼40% at −120, −80 and −40 mV holding potential, indicating that the inhibition is voltage independent. The barbiturate pentobarbital and the dissociative anaesthetic ketamine, used at their clinical EC50, were also shown to inhibit nAChR-induced increases in [Ca2+]i by ∼40%. Thiopental (25 μM) did not inhibit caffeine-, muscarine- or ATP-evoked increases in [Ca2+]i, indicating that inhibition of Ca2+ release from internal stores via either ryanodine receptor or inositol-1,4,5-trisphosphate receptor channels is unlikely. Depolarization-activated Ca2+ channel currents were unaffected in the presence of thiopental (25 μM), pentobarbital (50 μM) and ketamine (10 μM). In conclusion, i.v. anaesthetics inhibit nAChR-induced currents and [Ca2+]i transients in intracardiac neurons by binding to nAChRs and thereby may contribute to changes in heart rate and cardiac output under clinical conditions. PMID:15644873
Methodology and Implications of Maximum Paleodischarge Estimates for Mountain Channels
Pruess, J.; Wohl, E.E.; Jarrett, R.D.
1998-01-01
Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m³ s⁻¹ km⁻² around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash-flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent. © 1998 Regents of the University of Colorado.
Tritium as an indicator of ground-water age in Central Wisconsin
Bradbury, Kenneth R.
1991-01-01
In regions where ground water is generally younger than about 30 years, developing the tritium input history of an area for comparison with the current tritium content of ground water allows quantitative estimates of minimum ground-water age. The tritium input history for central Wisconsin has been constructed using precipitation tritium measured at Madison, Wisconsin and elsewhere. Weighted tritium inputs to ground water reached a peak of over 2,000 TU in 1964, and have declined since that time to about 20-30 TU at present. In the Buena Vista basin in central Wisconsin, most ground-water samples contained elevated levels of tritium, and estimated minimum ground-water ages in the basin ranged from less than one year to over 33 years. Ground water in mapped recharge areas was generally younger than ground water in discharge areas, and estimated ground-water ages were consistent with flow system interpretations based on other data. Estimated minimum ground-water ages increased with depth in areas of downward ground-water movement. However, water recharging through thick moraine sediments was older than water in other recharge areas, reflecting slower infiltration through the sandy till of the moraine.
Boudaghpour, Siamak; Bagheri, Majid; Bagheri, Zahra
2014-01-01
High flood occurrences with large environmental damages show a growing trend in Iran. Dynamic movements of water during a flood cause different environmental damages in geographical areas with different characteristics, such as topographic conditions. In general, the environmental effects and damages caused by a flood in an area can be investigated from different points of view. The current essay aims to detect the environmental effects of flood occurrences in the Halilrood catchment area of Kerman province in Iran using flood zone mapping techniques. The intended flood zone map was produced in four steps. Steps 1 to 3 pave the way to calculate and estimate the flood zone map in the study area, while step 4 estimates the environmental effects of flood occurrence. Based on our studies, a wide range of accuracy for estimating the environmental effects of flood occurrence was obtained by using flood zone mapping techniques. Moreover, it was identified that the existence of the Jiroft dam in the study area can decrease the flood zone from 260 hectares to 225 hectares and also decrease flood peak intensity by 20%. As a result, 14% of the flood zone in the study area can be saved environmentally.
Resolving Peak Ground Displacements in Real-Time GNSS PPP Solutions
NASA Astrophysics Data System (ADS)
Hodgkinson, K. M.; Mencin, D.; Mattioli, G. S.; Sievers, C.; Fox, O.
2017-12-01
The goal of early earthquake warning (EEW) systems is to provide warning of impending ground shaking to the public, infrastructure managers, and emergency responders. Shaking intensity can be estimated using Ground Motion Prediction Equations (GMPEs), but only if site characteristics, hypocentral distance and event magnitude are known. In recent years work has been done analyzing the first few seconds of the seismic P wave to derive event location and magnitude. While initial rupture locations seem to be sufficiently constrained, it has been shown that P-wave magnitude estimates tend to saturate at M>7. Regions where major and great earthquakes occur may therefore be vulnerable to an underestimation of shaking intensity if only P waves magnitudes are used. Crowell et al., (2013) first demonstrated that Peak Ground Displacement (PGD) from long-period surface waves recorded by GNSS receivers could provide a source-scaling relation that does not saturate with event magnitude. GNSS PGD derived magnitudes could improve the accuracy of EEW GMPE calculations. If such a source-scaling method were to be implemented in EEW algorithms it is critical that the noise levels in real-time GNSS processed time-series are low enough to resolve long-period surface waves. UNAVCO currently operates 770 real-time GNSS sites, most of which are located along the North American-Pacific Plate Boundary. In this study, we present an analysis of noise levels observed in the GNSS Precise Point Positioning (PPP) solutions generated and distributed in real-time by UNAVCO for periods from seconds to hours. The analysis is performed using the 770 sites in the real-time network and data collected through July 2017. We compare noise levels determined from various monument types and receiver-antenna configurations. This analysis gives a robust estimation of noise levels in PPP solutions because the solutions analyzed are those that were generated in real-time and thus contain all the problems observed in routine network operations e.g., data outages, high latencies and data from research-quality to less ideal monumentation. Using these noise estimates we can identify which sites are best able to resolve the PGDs for earthquakes over a range of focal distances and those that may not using their current configurations.
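A minimal sketch of a PGD source-scaling relation of the general form used by Crowell et al. (2013), log10(PGD_cm) = A + B·Mw + C·Mw·log10(R_km), inverted for magnitude given an observed peak ground displacement and hypocentral distance. The coefficient values below are placeholders chosen for illustration only; they are not the published regression coefficients.

import numpy as np

A, B, C = -5.0, 1.0, -0.13     # placeholder coefficients (assumed, not published values)

def pgd_magnitude(pgd_cm, r_km):
    """Closed-form solution of log10(PGD) = A + B*Mw + C*Mw*log10(R) for Mw."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(r_km))

# Example: 20 cm of peak ground displacement observed 100 km from the hypocenter
print(f"Mw ~ {pgd_magnitude(20.0, 100.0):.1f}")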
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Nihar; Wei, Max; Letschert, Virginie
2015-10-01
Hydrofluorocarbons (HFCs), emitted from uses such as refrigerants and thermal insulating foam, are now the fastest growing greenhouse gases (GHGs), with global warming potentials (GWP) thousands of times higher than carbon dioxide (CO2). Because of the short lifetime of these molecules in the atmosphere, mitigating the amount of these short-lived climate pollutants (SLCPs) provides a faster path to climate change mitigation than control of CO2 alone. This has led to proposals from Africa, Europe, India, Island States, and North America to amend the Montreal Protocol on Substances that Deplete the Ozone Layer (Montreal Protocol) to phase down high-GWP HFCs. Simultaneously, energy efficiency market transformation programs such as standards, labeling and incentive programs are endeavoring to improve the energy efficiency of refrigeration and air conditioning equipment to provide life cycle cost, energy, GHG, and peak load savings. In this paper we provide an estimate of the magnitude of such GHG and peak electric load savings potential, for room air conditioning, if the refrigerant transition and energy efficiency improvement policies are implemented either separately or in parallel. We find that implementing HFC refrigerant transition and energy efficiency improvement policies in parallel for room air conditioning roughly doubles the benefit of either policy implemented separately. We estimate that shifting the 2030 world stock of room air conditioners from the low efficiency technology using high-GWP refrigerants to higher efficiency technology and low-GWP refrigerants in parallel would save between 340-790 gigawatts (GW) of peak load globally, which is roughly equivalent to avoiding 680-1550 peak power plants of 500 MW each. This would save 0.85 GT/year annually in China, equivalent to over 8 Three Gorges dams, and over 0.32 GT/year annually in India, equivalent to roughly twice India’s 100 GW solar mission target. While there is some uncertainty associated with emissions and growth projections, moving to efficient room air conditioning (~30% more efficient than current technology) in parallel with low-GWP refrigerants in room air conditioning could avoid up to ~25 billion tonnes of CO2 in 2030, ~33 billion in 2040, and ~40 billion in 2050, i.e. cumulative savings of up to 98 billion tonnes of CO2 by 2050. Therefore, superefficient room ACs using low-GWP refrigerants merit serious consideration to maximize peak load reduction and GHG savings.
Allometric modelling of peak oxygen uptake in male soccer players of 8-18 years of age.
Valente-Dos-Santos, João; Coelho-E-Silva, Manuel J; Tavares, Óscar M; Brito, João; Seabra, André; Rebelo, António; Sherar, Lauren B; Elferink-Gemser, Marije T; Malina, Robert M
2015-03-01
Peak oxygen uptake (VO2peak) is routinely scaled as mL O2 per kilogram body mass despite theoretical and statistical limitations of using ratios. To examine the contribution of maturity status and body size descriptors to age-associated inter-individual variability in VO2peak and to present static allometric models to normalize VO2peak in male youth soccer players. Total body and estimates of total and regional lean mass were measured with dual energy X-ray absorptiometry in a cross-sectional sample of Portuguese male soccer players. The sample was divided into three age groups for analysis: 8-12 years, 13-15 years and 16-18 years. VO2peak was estimated using an incremental maximal exercise test on a motorized treadmill. Static allometric models were used to normalize VO2peak. The independent variables with the best statistical fit explained 72% of the variance in VO2peak in the younger group (lean body mass: k = 1.07), 52% in mid-adolescent players (lean body mass: k = 0.93) and 31% in the older group (body mass: k = 0.51). The inclusion of the exponential term for pubertal status marginally increased the explained variance in VO2peak (adjusted R² = 36-75%) and provided statistical adjustments to the size descriptor coefficients. The allometric coefficients and exponents evidenced the varying inter-relationship among size descriptors and maturity status with aerobic fitness from early to late adolescence. Lean body mass, lean lower limbs mass and body mass combined with pubertal status explain most of the inter-individual variability in VO2peak among youth soccer players.
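A minimal sketch of static allometric normalization as described above: fit VO2peak = a·(size descriptor)^k by linear regression on log-transformed data, then express VO2peak per (size descriptor)^k. The data below are synthetic, generated with an exponent of 1.07 (the value reported for the youngest group) plus noise, purely for illustration.

import numpy as np

rng = np.random.default_rng(3)
lean_mass = rng.uniform(25.0, 60.0, 80)                        # kg
vo2peak = 55.0 * lean_mass**1.07 * np.exp(0.08 * rng.standard_normal(80))   # mL/min

# Slope of the log-log regression is the allometric exponent k
k, log_a = np.polyfit(np.log(lean_mass), np.log(vo2peak), 1)
print(f"fitted exponent k ~ {k:.2f}, scaling constant a ~ {np.exp(log_a):.1f}")

# Size-independent expression of VO2peak
vo2_scaled = vo2peak / lean_mass**k
print(f"scaled VO2peak mean ~ {vo2_scaled.mean():.1f} mL/min per kg^{k:.2f}")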
GNSS-derived Geocenter Coordinates Viewed by Perturbation Theory
NASA Astrophysics Data System (ADS)
Meindl, Michael; Beutler, Gerhard; Thaller, Daniela; Dach, Rolf; Jäggi, Adrian; Rothacher, Markus
2013-04-01
Time series of geocenter coordinates were determined with data of the two global navigation satellite systems (GNSS) GPS and GLONASS. The data was recorded in the years 2008-2011 by a global network of 92 combined GPS/GLONASS receivers. Two types of solutions were generated for each system, one including the estimation of geocenter coordinates and one without these parameters. A fair agreement for GPS and GLONASS estimates was found in the x- and y-coordinate series of the geocenter. Artifacts do, however, clearly show up in the z-coordinate. Large periodic variations in the GLONASS geocenter z-coordinates of about 40 cm peak-to-peak are related to the maximum elevation angles of the Sun above/below the orbital planes of the satellite system. A detailed analysis revealed that these artifacts are almost uniquely governed by the differences of the estimates of direct solar radiation pressure (SRP) in the two solution series (with and without geocenter estimation). This effect can be explained by first-order perturbation theory of celestial mechanics. The relation between the geocenter z-coordinate and the corresponding SRP parameters will be presented. Our theory is applicable to all satellite observing techniques. In addition to GNSS, we applied it to satellite laser ranging (SLR) solutions based on LAGEOS observations. The correlation between geocenter and SRP parameters is not a critical issue for SLR, because these parameters do not have to be estimated. This basic difference between SLR and GNSS analyses explains why SLR is an excellent tool to determine geodetic datum parameters like the geocenter coordinates. The correlation between orbit parameters and the z-component of the geocenter is not limited to a particular orbit model, e.g., that of CODE. The issue should be studied for alternative (e.g., box-wing) models: As soon as non-zero mean values (over one revolution) of the out-of-plane force component exist, one has to expect biased geocenter estimates. The insights gained here should be seriously taken into account in the orbit modeling discussion currently taking place within the IGS.
Motion estimation in the frequency domain using fuzzy c-planes clustering.
Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E
2001-01-01
A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodograms and are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion component pairing task and hence eliminates the problems associated with that pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method.
Brown, Craig J.; Mullaney, John R.; Morrison, Jonathan; Martin, Joseph W.; Trombley, Thomas J.
2015-07-01
The addition of a lane mile in both directions on I–95 would result in an estimated increase of approximately 2 to 11 percent in Cl- input from deicers applied to I–95 and other roads maintained by the Connecticut Department of Transportation. The largest estimated increase in Cl- load was in the watersheds with the greatest number of miles of I–95 corridor relative to the total lane miles maintained by the Connecticut Department of Transportation. On the basis of these estimates and the estimated peak Cl- concentrations during the study period, it is unlikely that the increased use of deicers on the additional lanes would lead to Cl- concentrations that exceed the aquatic habitat criteria.
Estimating the magnitude and frequency of floods in urban basins in Missouri
Southard, Rodney E.
2010-01-01
Streamgage flood-frequency analyses were done for 35 streamgages on urban streams in and adjacent to Missouri for estimation of the magnitude and frequency of floods in urban areas of Missouri. A log-Pearson Type-III distribution was fitted to the annual series of peak flow data retrieved from the U.S. Geological Survey National Water Information System. For this report, the flood frequency estimates are expressed in terms of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, and 0.2 percent. Of the 35 streamgages, 30 are located in Missouri. The remaining five non-Missouri streamgages were added to the dataset to improve the range and applicability of the regression analyses from the streamgage frequency analyses. Ordinary least-squares regression was used to determine the best set of independent variables for the regression equations. Basin characteristics selected as candidate independent variables for the ordinary least-squares regression analyses were chosen on the basis of their theoretical relation to flood flows, a literature review of possible basin characteristics, and the ability to measure the basin characteristics using digital datasets and geographic information system technology. Results of the ordinary least-squares regressions were evaluated on the basis of Mallows' Cp statistic, the adjusted coefficient of determination, and the statistical significance of the independent variables. The independent variables of drainage area and percent impervious area were determined to be statistically significant and readily determined from existing digital datasets. The drainage area variable was computed using the best elevation data available, either from a statewide 10-meter grid or high-resolution elevation data from urban areas. The impervious area variable was computed from the National Land Cover Dataset 2001 impervious area dataset. The National Land Cover Dataset 2001 impervious area data for each basin were compared to historical imagery and 7.5-minute topographic maps to verify that the national dataset represented the urbanization of the basin at the time streamgage data were collected. Eight streamgages had less urbanization during the period of time streamflow data were collected than was shown on the 2001 dataset. The impervious area values for these eight urban basins were adjusted downward as much as 23 percent to account for the additional urbanization since the streamflow data were collected. Weighted least-squares regression techniques were used to determine the final regression equations for the statewide urban flood-frequency equations. Weighted least-squares techniques improve regression equations by adjusting for different and varying lengths in streamflow records. The final flood-frequency equations for the 50-, 20-, 10-, 4-, 2-, 1-, and 0.2-percent annual exceedance probability floods for Missouri provide a technique for estimating peak flows on urban streams at gaged and ungaged sites. The applicability of the equations is limited by the range in basin characteristics used to develop the regression equations. The range in drainage area is 0.28 to 189 square miles; the range in impervious area is 2.3 to 46.0 percent. Seven of the 35 selected streamgages were used to compare the results of the existing rural and urban equations to the urban equations presented in this report for the 1-percent annual exceedance probability. Results of the comparison indicate that the estimated peak flows for the urban equations in this report ranged from 3 to 52 percent higher than the results from the rural equations.
Comparing the estimated urban peak flows from this report to the existing urban equation developed in 1986 indicated the range was 255 percent lower to 10 percent higher. The overall comparison between the current (2010) and 1986 urban equations indicates a reduction in estimated peak flow values for the 1-percent annual exceedance probability flood.
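A minimal sketch of the streamgage-level step described above, fitting a log-Pearson Type III distribution to an annual peak-flow series by the method of moments and reading off annual exceedance-probability quantiles, is given below. It omits the Bulletin 17 refinements (regional skew weighting, low-outlier tests, historical adjustments) used in practice, and the peak-flow values are synthetic.

```python
import numpy as np
from scipy import stats

# Illustrative annual peak flows (cfs); a real analysis would use the systematic record.
peaks = np.array([1200, 950, 3100, 1800, 760, 2400, 1500, 4100, 980, 2050,
                  1320, 2900, 870, 1650, 2300, 3600, 1100, 1950, 2750, 1400])

logq = np.log10(peaks)
m, s, g = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

# Quantiles for selected annual exceedance probabilities (e.g., 1% = 100-year flood)
for aep in (0.50, 0.10, 0.01):
    q = 10 ** stats.pearson3.ppf(1.0 - aep, g, loc=m, scale=s)
    print(f"{aep:>5.0%} AEP flood: {q:,.0f} cfs")
```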
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltzfus-Dueck, T.; Scott, B.
An often-neglected portion of the radial E × B drift is shown to drive an outward flux of co-current momentum when free energy is transferred from the electrostatic potential to ion parallel flows. This symmetry breaking is fully nonlinear, not quasilinear, necessitated simply by free-energy balance in parameter regimes for which significant energy is dissipated via ion parallel flows. The resulting rotation peaking is counter-current and has a scaling and order of magnitude that are comparable with experimental observations. Finally, the residual stress becomes inactive when frequencies are much higher than the ion transit frequency, which may explain the observed relation of density peaking and counter-current rotation peaking in the core.
Quantifying fall migration of Ross's gulls (Rhodostethia rosea) past Point Barrow, Alaska
Uher-Koch, Brian D.; Davis, Shanti E.; Maftei, Mark; Gesmundo, Callie; Suydam, R.S.; Mallory, Mark L.
2014-01-01
The Ross's gull (Rhodostethia rosea) is a poorly known seabird of the circumpolar Arctic. The only place in the world where Ross's gulls are known to congregate is in the near-shore waters around Point Barrow, Alaska where they undertake an annual passage in late fall. Ross's gulls seen at Point Barrow are presumed to originate from nesting colonies in Siberia, but neither their origin nor their destination has been confirmed. Current estimates of the global population of Ross's gulls are based largely on expert opinion, and the only reliable population estimate is derived from extrapolations from previous counts conducted at Point Barrow, but these data are now over 25 years old. In order to update and clarify the status of this species in Alaska, our study quantified the timing, number, and flight direction of Ross's gulls passing Point Barrow in 2011. We recorded up to two-thirds of the estimated global population of Ross's gulls (≥ 27,000 individuals) over 39 days with numbers peaking on 16 October when we observed over 7,000 birds during a three-hour period.
Launch pad lightning protection effectiveness
NASA Technical Reports Server (NTRS)
Stahmann, James R.
1991-01-01
Using the striking distance theory that lightning leaders will strike the nearest grounded point on their last jump to earth corresponding to the striking distance, the probability of striking a point on a structure in the presence of other points can be estimated. The lightning strokes are divided into deciles having an average peak current and striking distance. The striking distances are used as radii from the points to generate windows of approach through which the leader must pass to reach a designated point. The projections of the windows on a horizontal plane as they are rotated through all possible angles of approach define an area that can be multiplied by the decile stroke density to arrive at the probability of strokes with the window average striking distance. The sum of all decile probabilities gives the cumulative probability for all strokes. The techniques can be applied to NASA-Kennedy launch pad structures to estimate the lightning protection effectiveness for the crane, gaseous oxygen vent arm, and other points. Streamers from sharp points on the structure provide protection for surfaces having large radii of curvature. The effects of nearby structures can also be estimated.
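A heavily simplified sketch of the decile bookkeeping described above is shown below. Here each decile's window of approach is crudely approximated by a circle of radius equal to its striking distance, whereas the paper projects windows that account for competing grounded points and all approach angles; the flash density and striking distances are illustrative values, not the paper's.

```python
import math

# Simplified decile method: each stroke-current decile has an average striking
# distance r; the exposure "window" is crudely approximated as a circle of radius r.
# Values are illustrative only.
ground_flash_density = 8.0e-6          # strokes per m^2 per year (illustrative)
decile_striking_distance_m = [25, 35, 45, 55, 65, 75, 90, 110, 140, 200]

p_total = 0.0
for r in decile_striking_distance_m:
    area = math.pi * r**2                          # exposure area for this decile (m^2)
    p_total += 0.1 * ground_flash_density * area   # each decile holds 10% of strokes

print(f"Estimated strikes to the point per year: {p_total:.4f}")
```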
Ryberg, Karen R.
2006-01-01
This report presents the results of a study by the U.S. Geological Survey, done in cooperation with the Bureau of Reclamation, U.S. Department of the Interior, to estimate water-quality constituent concentrations in the Red River of the North at Fargo, North Dakota. Regression analysis of water-quality data collected in 2003-05 was used to estimate concentrations and loads for alkalinity, dissolved solids, sulfate, chloride, total nitrite plus nitrate, total nitrogen, total phosphorus, and suspended sediment. The explanatory variables examined for the regression relations were continuously monitored physical properties of water: streamflow, specific conductance, pH, water temperature, turbidity, and dissolved oxygen. For the conditions observed in 2003-05, streamflow was a significant explanatory variable for all estimated constituents except dissolved solids. pH, water temperature, and dissolved oxygen were not statistically significant explanatory variables for any of the constituents in this study. Specific conductance was a significant explanatory variable for alkalinity, dissolved solids, sulfate, and chloride. Turbidity was a significant explanatory variable for total phosphorus and suspended sediment. For the nutrients, total nitrite plus nitrate, total nitrogen, and total phosphorus, cosine and sine functions of time also were used to explain the seasonality in constituent concentrations. The regression equations were evaluated using common measures of variability, including R2, or the proportion of variability in the estimated constituent explained by the regression equation. R2 values ranged from 0.703 for total nitrogen concentration to 0.990 for dissolved-solids concentration. The regression equations also were evaluated by calculating the median relative percentage difference (RPD) between measured constituent concentration and the constituent concentration estimated by the regression equations. Median RPDs ranged from 1.1 for dissolved solids to 35.2 for total nitrite plus nitrate. Regression equations also were used to estimate daily constituent loads. Load estimates can be used by water-quality managers for comparison of current water-quality conditions to water-quality standards expressed as total maximum daily loads (TMDLs). TMDLs are a measure of the maximum amount of chemical constituents that a water body can receive and still meet established water-quality standards. The peak loads generally occurred in June and July when streamflow also peaked.
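A minimal sketch of the kind of regression described above, with the log of constituent concentration regressed on the log of streamflow plus sine and cosine terms for seasonality, is shown below on synthetic data; variable names and coefficients are illustrative only.

```python
import numpy as np

# ln(concentration) regressed on ln(streamflow) plus seasonal sine/cosine terms.
# All data below are synthetic.
rng = np.random.default_rng(0)
t = np.linspace(0, 3, 300)                                           # time in years
q = np.clip(500 + 400 * np.sin(2 * np.pi * t) + rng.normal(0, 50, t.size), 50, None)
conc = 0.4 * np.log(q) + 0.3 * np.sin(2 * np.pi * t) + 0.1 * np.cos(2 * np.pi * t) \
       + rng.normal(0, 0.05, t.size)                                 # ln(concentration)

X = np.column_stack([np.ones_like(t), np.log(q),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, conc, rcond=None)
r2 = 1 - np.sum((conc - X @ beta) ** 2) / np.sum((conc - conc.mean()) ** 2)
print("coefficients:", np.round(beta, 3), " R^2 =", round(r2, 3))
```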
NASA Astrophysics Data System (ADS)
Trushnikov, D. N.; Mladenov, G. M.; Belenkiy, V. Ya.; Koleva, E. G.; Varushkin, S. V.
2014-04-01
Many papers have sought correlations between the parameters of secondary particles generated above the beam/work piece interaction zone, dynamics of processes in the keyhole, and technological processes. Low- and high-frequency oscillations of the current collected by the plasma have been observed above the welding zone during electron beam welding. Low-frequency oscillations of secondary signals are related to capillary instabilities of the keyhole; however, the physical mechanisms responsible for the high-frequency oscillations (>10 kHz) of the collected current are not fully understood. This paper shows that the peak frequencies in the spectra of the collected high-frequency signal depend on the reciprocal of the distance between the welding zone and the collector electrode. From the relationship between the current harmonics frequency and the collector/welding zone distance, it can be estimated that the drift velocity of electrons, or the phase velocity of the excited waves, is about 1600 m/s. The dispersion relation, together with the properties of ion-acoustic waves, corresponds to an electron temperature of 10 000 K, an ion temperature of 2 400 K and a plasma density of 10^16 m^-3, which is analogous to the parameters of potential-relaxation instabilities observed in similar conditions. The estimated critical density of the transported current for creating the anomalous resistance state of the plasma is of the order of 3 A·m^-2, i.e. 8 mA for a 3-10 cm^2 collector electrode. Thus, it is assumed that the observed high-frequency oscillations of the current collected by the positive collector electrode are caused by relaxation processes in the plasma plume above the welding zone, and are not a direct demonstration of oscillations in the keyhole.
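As a rough consistency check of the quoted numbers, the ion-acoustic phase velocity can be evaluated from the stated temperatures. The sketch below assumes singly charged iron-mass ions, which the abstract does not state, so the ion mass is an assumption.

```python
import math

# Ion-acoustic phase velocity v_s = sqrt(k_B*(T_e + 3*T_i)/m_i), using the electron and
# ion temperatures quoted above. The ion mass is assumed to be that of iron
# (singly charged metal-vapour ions); the abstract does not state the species.
k_B = 1.380649e-23         # J/K
m_i = 55.85 * 1.66054e-27  # kg, iron atomic mass (assumption)
T_e, T_i = 10_000.0, 2_400.0  # K

v_s = math.sqrt(k_B * (T_e + 3.0 * T_i) / m_i)
print(f"ion-acoustic speed ~ {v_s:.0f} m/s")   # on the order of the ~1600 m/s quoted
```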
Waythomas, C.F.; Walder, J.S.; McGimsey, R.G.; Neal, C.A.
1996-01-01
Aniakchak caldera, located on the Alaska Peninsula of southwest Alaska, formerly contained a large lake (estimated volume 3.7 ?? 109 m3) that rapidly drained as a result of failure of the caldera rim sometime after ca. 3400 yr B.P. The peak discharge of the resulting flood was estimated using three methods: (1) flow-competence equations, (2) step-backwater modeling, and (3) a dam-break model. The results of the dam-break model indicate that the peak discharge at the breach in the caldera rim was at least 7.7 ?? 104 m3 s-1, and the maximum possible discharge was ???1.1 ?? 106 m3 s-1. Flow-competence estimates of discharge, based on the largest boulders transported by the flood, indicate that the peak discharge values, which were a few kilometers downstream of the breach, ranged from 6.4 ?? 105 to 4.8 ?? 106 m3 s-1. Similar but less variable results were obtained by step-backwater modeling. Finally, discharge estimates based on regression equations relating peak discharge to the volume and depth of the impounded water, although limited by constraining assumptions, provide results within the range of values determined by the other methods. The discovery and documentation of a flood, caused by the failure of the caldera rim at Aniakchak caldera, underscore the significance and associated hydrologic hazards of potential large floods at other lake-filled calderas.
Instantaneous polarization analysis of ambient noise recordings in site response investigations
NASA Astrophysics Data System (ADS)
Del Gaudio, Vincenzo
2017-07-01
A new procedure is proposed for analyses of ambient noise aimed at investigating complex cases of site response to seismic shaking. Information on site response characterized by several resonance frequencies and by amplifications varying with direction can be obtained by analysing instantaneous polarization properties of ambient noise recordings. Through this kind of analysis, it is possible to identify Rayleigh wave packets emerging from incoherent background noise for very short intervals. Analysing noise recordings passed through narrow-band filters with different central frequencies, variations of Rayleigh wave properties depending on frequencies can be estimated. In particular, one can calculate: (i) the instantaneous ratios H/V between the amplitudes of horizontal and vertical components of the elliptical particle motion and (ii) the azimuthal direction of the vertical plane containing such a motion. These can be determined on a large number of recording samples, providing the basis for statistical estimates. A preferential concentration of H/V peak values at site-specific frequencies and directions can reveal directional resonance phenomena. Furthermore, peak amplitudes can be related to site amplification factors and provide constraints for subsurface velocity modelling. Some tests, carried out on data acquired at sites with known response properties, gave indications on how to select the parameters of the analysis that optimize its implementation. In particular, preliminary trials, conducted on a limited number of frequencies, allow the selection of the parameters that, while providing a large number of instantaneous H/V estimates for Rayleigh waves, minimize their scattering. The analysis can then be refined and an H/V curve as function of frequency can be obtained with a higher spectral resolution. First tests showed that cases of directional resonance can be more effectively recognized with this technique and more details can be revealed on its properties (e.g. secondary peaks) in comparison to Nakamura's method currently employed for ordinary noise analysis. For sites characterized by isotropic response or by differently oriented directional maxima, however, the presence of noise sources with an anisotropic spatial distribution, which excite signals with inhomogeneous distribution of energy through the examined spectral band, can make the correct interpretation of data more difficult.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaikovsky, S. A.; Datsko, I. M.; Labetskaya, N. A.
The paper presents the results of an experimental study of the skin explosion of cylindrical conductors of diameter 1–3 mm (copper, aluminum, titanium, steel 3, and stainless steel) at a peak magnetic field of 200–600 T. The experiments were carried out on the MIG pulsed power generator at a current of up to 2.5 MA and a current rise time of 100 ns. The surface explosion of a conductor was identified by the appearance of a flash of extreme ultraviolet radiation. A minimum magnetic induction has been determined below which no plasma is generated at the conductor surface. For copper, aluminum, steel 3, titanium, and stainless steel, the minimum magnetic induction has been estimated to be (to within 10%) 375, 270, 280, 220, and 245 T, respectively.
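For orientation, the surface magnetic induction of a straight conductor follows from B = μ0 I / (2π r); the sketch below evaluates it at the quoted peak current for the quoted conductor diameters, simply to show the scaling (it ignores the actual current waveform and skin-layer dynamics).

```python
import math

# Surface magnetic induction of a straight conductor, B = mu0 * I / (2*pi*r),
# evaluated at the quoted peak current for each quoted conductor diameter.
mu0 = 4.0e-7 * math.pi
I = 2.5e6  # A, maximum generator current

for d_mm in (1.0, 2.0, 3.0):
    r = d_mm / 2.0 * 1.0e-3
    B = mu0 * I / (2.0 * math.pi * r)
    print(f"d = {d_mm:.0f} mm  ->  B ~ {B:.0f} T")
```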
Terahertz radiation-induced sub-cycle field electron emission across a split-gap dipole antenna
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jingdi; Averitt, Richard D., E-mail: xinz@bu.edu, E-mail: raveritt@ucsd.edu; Department of Physics, Boston University, Boston, Massachusetts 02215
We use intense terahertz pulses to excite the resonant mode (0.6 THz) of a micro-fabricated dipole antenna with a vacuum gap. The dipole antenna structure enhances the peak amplitude of the in-gap THz electric field by a factor of ∼170. Above an in-gap E-field threshold amplitude of ∼10 MV cm^-1, THz-induced field electron emission is observed as indicated by the field-induced electric current across the dipole antenna gap. Field emission occurs within a fraction of the driving THz period. Our analysis of the current (I) and incident electric field (E) is in agreement with a Millikan-Lauritsen analysis where log(I) exhibits a linear dependence on 1/E. Numerical estimates indicate that the electrons are accelerated to a value of approximately one tenth of the speed of light.
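A minimal sketch of the Millikan-Lauritsen analysis mentioned above, fitting log10(I) linearly against 1/E, is given below on synthetic current-field data; the field range and prefactors are illustrative, not the measured values.

```python
import numpy as np

# Millikan-Lauritsen style analysis: field emission current obeys log10(I) ~ a - b/E,
# so log10(I) is fitted linearly against 1/E. Data are synthetic.
E = np.linspace(10, 20, 8)                     # MV/cm, in-gap field (illustrative)
I = 1e-9 * np.exp(-120.0 / E) * (1 + 0.05 * np.random.default_rng(1).normal(size=E.size))

slope, intercept = np.polyfit(1.0 / E, np.log10(I), 1)
print(f"log10(I) = {intercept:.2f} + ({slope:.1f}) * (1/E)")  # slope < 0 for field emission
```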
Kimura, Daiju; Kurisu, Yosuke; Nozaki, Dai; Yano, Keisuke; Imai, Youta; Kumakura, Sho; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki
2014-02-01
We are constructing a tandem-type ECRIS. The first stage is a large-bore source with a cylindrical comb-shaped magnet. We optimize the ion beam current and the ion saturation current using a mobile plate tuner. Both currents vary with the position of the plate tuner at 2.45 GHz, 11-13 GHz, and under multi-frequency operation. Their peaks occur close to the positions where the microwave mode forms a standing wave between the plate tuner and the extractor. The absorbed powers are estimated for each mode. We present a new guiding principle: the mode number of the effective microwave mode should be selected to match the multipole number of the comb-shaped magnets. We obtained excitation of the selected modes using the new mobile plate tuner to enhance the ECR efficiency.
Estimation of Confined Peak Strength of Crack-Damaged Rocks
NASA Astrophysics Data System (ADS)
Bahrani, Navid; Kaiser, Peter K.
2017-02-01
It is known that the unconfined compressive strength of rock decreases with increasing density of geological features such as micro-cracks, fractures, and veins both at the laboratory specimen and rock block scales. This article deals with the confined peak strength of laboratory-scale rock specimens containing grain-scale strength dominating features such as micro-cracks. A grain-based distinct element model, whereby the rock is simulated with grains that are allowed to deform and break, is used to investigate the influence of the density of cracks on the rock strength under unconfined and confined conditions. A grain-based specimen calibrated to the unconfined and confined strengths of intact and heat-treated Wombeyan marble is used to simulate rock specimens with varying crack densities. It is demonstrated how such cracks affect the peak strength, stress-strain curve and failure mode with increasing confinement. The results of numerical simulations in terms of unconfined and confined peak strengths are used to develop semi-empirical relations that relate the difference in strength between the intact and crack-damaged rocks to the confining pressure. It is shown how these relations can be used to estimate the confined peak strength of a rock with micro-cracks when the unconfined and confined strengths of the intact rock and the unconfined strength of the crack-damaged rock are known. This approach for estimating the confined strength of crack-damaged rock specimens, called strength degradation approach, is then verified by application to published laboratory triaxial test data.
Kohn, Michael S.; Stevens, Michael R.; Harden, Tessa M.; Godaire, Jeanne E.; Klinger, Ralph E.; Mommandi, Amanullah
2016-09-09
The U.S. Geological Survey (USGS), in cooperation with the Colorado Department of Transportation, developed regional-regression equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance-probability discharge (AEPD) for natural streamflow in eastern Colorado. A total of 188 streamgages, consisting of 6,536 years of record and a mean of approximately 35 years of record per streamgage, were used to develop the peak-streamflow regional-regression equations. The estimated AEPDs for each streamgage were computed using the USGS software program PeakFQ. The AEPDs were determined using systematic data through water year 2013. Based on previous studies conducted in Colorado and neighboring States and on the availability of data, 72 characteristics (57 basin and 15 climatic characteristics) were evaluated as candidate explanatory variables in the regression analysis. Paleoflood and non-exceedance bound ages were established based on reconnaissance-level methods. Multiple lines of evidence were used at each streamgage to arrive at a conclusion (age estimate) to add a higher degree of certainty to reconnaissance-level estimates. Paleoflood or non-exceedance bound evidence was documented at 41 streamgages, and 3 streamgages had previously collected paleoflood data. To determine the peak discharge of a paleoflood or non-exceedance bound, two different hydraulic models were used. The mean standard error of prediction (SEP) for all 8 AEPDs was reduced approximately 25 percent compared to the previous flood-frequency study. For paleoflood data to be effective in reducing the SEP in eastern Colorado, more than the current 44 of 188 streamgages (23 percent) would need paleoflood data, and that paleoflood data would need to increase the record length by more than 25 years for the 1-percent AEPD. The greatest reduction in SEP for the peak-streamflow regional-regression equations was observed when additional new basin characteristics were included in the peak-streamflow regional-regression equations and when eastern Colorado was divided into two separate hydrologic regions. To make further reductions in the uncertainties of the peak-streamflow regional-regression equations in the Foothills and Plains hydrologic regions, additional streamgages or crest-stage gages are needed to collect peak-streamflow data on natural streams in eastern Colorado. Generalized least-squares regression was used to compute the final peak-streamflow regional-regression equations. Dividing eastern Colorado into two new individual regions at –104° longitude resulted in peak-streamflow regional-regression equations with the smallest SEP. The new hydrologic region located between –104° longitude and the Kansas-Nebraska State line will be designated the Plains hydrologic region, and the hydrologic region comprising the rest of eastern Colorado (west of –104° longitude, east of the Rocky Mountains, and below 7,500 feet in the South Platte River Basin and below 9,000 feet in the Arkansas River Basin) will be designated the Foothills hydrologic region.
Li, Lee; Bao, Chaobing; Feng, Xibo; Liu, Yunlong; Fochan, Lin
2013-02-01
For a compact and reliable nanosecond-pulse high-voltage generator (NPHVG), the selection of specification parameters and the potential use of fast controllable solid-state switches have an important bearing on the optimal design. An NPHVG with a closed transformer core and a fast switching thyristor (FST) was studied in this paper. Based on the analysis of the T-type circuit, the expressions for the voltages and currents of the primary and secondary windings on the transformer core of the NPHVG were deduced, and a theoretical maximum analysis was performed. For the NPHVG, the rise rate of the turn-on current (di/dt) across an FST may exceed its transient rating. Both the mean and maximum values of di/dt are determined by the leakage inductances of the transformer, and they differ by a factor of 1.57. An optimum winding ratio helps obtain a higher output voltage with a lower-specification FST, especially when the primary and secondary capacitances have already been established. The oscillation period analysis can be effectively used to estimate the equivalent leakage inductance. When the core saturation effect is considered, the maximum di/dt estimated from the oscillation period of the primary current is more accurate than that estimated from the oscillation period of the secondary voltage. Although increasing the leakage inductance of the NPHVG can decrease di/dt across the FST, it may reduce the output peak voltage of the NPHVG.
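A minimal sketch of the oscillation-period estimate mentioned above: treating the discharge loop as a series LC circuit, T = 2π√(LC) gives L = T²/(4π²C). The period and capacitance values below are illustrative, not the paper's.

```python
import math

# Equivalent leakage inductance from the observed oscillation period of a series LC loop:
# T = 2*pi*sqrt(L*C)  =>  L = T^2 / (4*pi^2*C). Values are illustrative.
T = 2.0e-6   # s, measured oscillation period of the primary current (illustrative)
C = 0.5e-6   # F, primary capacitance (illustrative)

L_leak = T**2 / (4.0 * math.pi**2 * C)
print(f"equivalent leakage inductance ~ {L_leak*1e6:.2f} uH")
```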
Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation
NASA Astrophysics Data System (ADS)
Kempe, René; Huang, Yu; Parra, Lucas C.
2014-04-01
Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
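A minimal sketch of the superposition idea described above is shown below: each HD electrode's field (against a common return) is pre-computed once, and a pad is then approximated by distributing the total current over the HD electrodes lying under the pad and summing the scaled fields. The arrays are random placeholders, and the uniform current split is an assumption rather than the paper's weighting.

```python
import numpy as np

# Linear superposition of pre-computed HD-electrode fields to approximate a pad.
# Arrays below are synthetic placeholders; a real lead field would come from one FEM run.
n_hd, n_nodes = 336, 10_000
rng = np.random.default_rng(0)
lead_field = rng.normal(size=(n_hd, n_nodes))   # field per 1 mA injected at each HD electrode

under_pad = np.zeros(n_hd)
under_pad[:30] = 1.0                             # HD electrodes covered by the pad
currents = 2.0 * under_pad / under_pad.sum()     # 2 mA total, split uniformly (assumption)

E_pad = currents @ lead_field                    # superposed field of the simulated pad
print("peak |E| node index:", int(np.argmax(np.abs(E_pad))))
```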
Regional regression of flood characteristics employing historical information
Tasker, Gary D.; Stedinger, J.R.
1987-01-01
Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100-year peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model.
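A minimal sketch of the GLS estimator referred to above, beta = (X'Λ⁻¹X)⁻¹X'Λ⁻¹y with Λ the error covariance matrix reflecting record lengths and cross-correlation, is given below; the covariance and the single basin characteristic are synthetic placeholders, and the historical-information adjustment proposed in the paper is not included.

```python
import numpy as np

# Generalized least squares (GLS) regional regression:
#   beta_hat = (X' W X)^(-1) X' W y,  with W = Lambda^(-1).
# Lambda encodes unequal record lengths and cross-correlation; synthetic here.
rng = np.random.default_rng(0)
n_sites = 12
X = np.column_stack([np.ones(n_sites), rng.normal(2.0, 0.8, n_sites)])   # [1, log drainage area]
beta_true = np.array([1.5, 0.7])
Lam = 0.05 * (0.3 * np.ones((n_sites, n_sites)) + 0.7 * np.eye(n_sites)) # error covariance
y = X @ beta_true + rng.multivariate_normal(np.zeros(n_sites), Lam)      # log 100-yr peaks

W = np.linalg.inv(Lam)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("GLS coefficients:", np.round(beta_gls, 3))
```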
Magnus, Maria C.; Stigum, Hein; Håberg, Siri E.; Nafstad, Per; London, Stephanie J.; Nystad, Wenche
2015-01-01
Background The immediate postnatal period is the period of the fastest growth in the entire life span and a critical period for lung development. Therefore, it is interesting to examine the association between growth during this period and childhood respiratory disorders. Methods We examined the association of peak weight and height velocity to age 36 months with maternal report of current asthma at 36 months (n = 50,311), recurrent lower respiratory tract infections (LRTIs) by 36 months (n = 47,905) and current asthma at 7 years (n = 24,827) in the Norwegian Mother and Child Cohort Study. Peak weight and height velocities were calculated using the Reed1 model through multilevel mixed-effects linear regression. Multivariable log-binomial regression was used to calculate adjusted relative risks (adj.RR) and 95% confidence intervals (CI). We also conducted a sibling pair analysis using conditional logistic regression. Results Peak weight velocity was positively associated with current asthma at 36 months [adj.RR 1.22 (95%CI: 1.18, 1.26) per standard deviation (SD) increase], recurrent LRTIs by 36 months [adj.RR 1.14 (1.10, 1.19) per SD increase] and current asthma at 7 years [adj.RR 1.13 (95%CI: 1.07, 1.19) per SD increase]. Peak height velocity was not associated with any of the respiratory disorders. The positive association between peak weight velocity and asthma at 36 months remained in the sibling pair analysis. Conclusions Higher peak weight velocity, achieved during the immediate postnatal period, increased the risk of respiratory disorders. This might be explained by an influence on neonatal lung development, shared genetic/epigenetic mechanisms and/or environmental factors. PMID:25635872
Design and fabrication of low power GaAs/AlAs resonant tunneling diodes
NASA Astrophysics Data System (ADS)
Md Zawawi, Mohamad Adzhar; Missous, Mohamed
2017-12-01
A very low peak voltage GaAs/AlAs resonant tunneling diode (RTD) grown by molecular beam epitaxy (MBE) has been studied in detail. Excellent growth control with atomic-layer precision resulted in a peak voltage of merely 0.28 V (0.53 V) in the forward (reverse) direction. The peak current density in forward bias is around 15.4 kA/cm2 with a variation within 7%. In reverse bias, the peak current density is around 22.8 kA/cm2 with a 4% variation, which implies excellent scalability. In this work, we have successfully demonstrated the fabrication of a GaAs/AlAs RTD using conventional optical lithography and chemical wet etching, with a very low peak voltage suitable for application in low dc input power RTD-based sub-millimetre wave oscillators.
Engel, Aaron J; Bashford, Gregory R
2015-08-01
Ultrasound-based shear wave elastography (SWE) is a technique used for non-invasive characterization and imaging of soft tissue mechanical properties. Robust estimation of shear wave propagation speed is essential for imaging of soft tissue mechanical properties. In this study we propose to estimate shear wave speed by inversion of the first-order wave equation following directional filtering. This approach relies on estimation of first-order derivatives, which allows for accurate estimation using smaller smoothing filters than when estimating second-order derivatives. The performance was compared to three current methods used to estimate shear wave propagation speed: direct inversion of the wave equation (DIWE), time-to-peak (TTP) and cross-correlation (CC). The shear wave speed of three homogeneous phantoms of different elastic moduli (gelatin by weight of 5%, 7%, and 9%) was measured with each method. The proposed method was shown to produce shear speed estimates comparable to the conventional methods (standard deviation of measurements being 0.13 m/s, 0.05 m/s, and 0.12 m/s), but with simpler processing and usually less time (by a factor of 1, 13, and 20 for DIWE, CC, and TTP, respectively). The proposed method was able to produce a 2-D speed estimate from a single direction of wave propagation in about four seconds using an off-the-shelf PC, showing the feasibility of performing real-time or near real-time elasticity imaging with dedicated hardware.
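A minimal sketch of the core idea described above: after directional filtering leaves a single propagation direction, the first-order wave equation u_t + c u_x = 0 gives c = -u_t/u_x from first-order derivatives. The displacement field below is synthetic, and the masking threshold is an arbitrary choice.

```python
import numpy as np

# Shear wave speed from the first-order wave equation u_t + c*u_x = 0, i.e. c = -u_t/u_x,
# applied to a synthetic right-travelling displacement pulse.
c_true = 2.0                      # m/s
x = np.linspace(0, 0.04, 128)     # m, lateral positions
t = np.linspace(0, 0.03, 256)     # s
T, X = np.meshgrid(t, x, indexing="ij")
u = np.exp(-((X - c_true * T - 0.005) / 0.002) ** 2)   # right-travelling pulse

u_t = np.gradient(u, t, axis=0)
u_x = np.gradient(u, x, axis=1)
mask = np.abs(u_x) > 0.1 * np.abs(u_x).max()           # avoid near-zero spatial gradients
c_est = np.median(-u_t[mask] / u_x[mask])
print(f"estimated shear wave speed: {c_est:.2f} m/s")
```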
Ultrasonic tracking of shear waves using a particle filter
Ingle, Atul N.; Ma, Chi; Varghese, Tomy
2015-01-01
Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques. PMID:26520761
Improved test methods for determining lightning-induced voltages in aircraft
NASA Technical Reports Server (NTRS)
Crouch, K. E.; Plumer, J. A.
1980-01-01
A lumped parameter transmission line with a surge impedance matching that of the aircraft and its return lines was evaluated as a replacement for earlier current generators. Various test circuit parameters were evaluated using a 1/10 scale relative geometric model. Induced voltage response was evaluated by taking measurements on the NASA-Dryden Digital Fly by Wire F-8 aircraft. Return conductor arrangements as well as other circuit changes were also evaluated, with all induced voltage measurements being made on the same circuit for comparison purposes. The lumped parameter transmission line generates a concave front current wave with the peak di/dt near the peak of the current wave, which is more representative of lightning. However, the induced voltage measurements from both techniques, when scaled by appropriate scale factors (peak current or di/dt), yield comparable results.
Mapping apparent stress and energy radiation over fault zones of major earthquakes
McGarr, A.; Fletcher, Joe B.
2002-01-01
Using published slip models for five major earthquakes, 1979 Imperial Valley, 1989 Loma Prieta, 1992 Landers, 1994 Northridge, and 1995 Kobe, we produce maps of apparent stress and radiated seismic energy over their fault surfaces. The slip models, obtained by inverting seismic and geodetic data, entail the division of the fault surfaces into many subfaults for which the time histories of seismic slip are determined. To estimate the seismic energy radiated by each subfault, we measure the near-fault seismic-energy flux from the time-dependent slip there and then multiply by a function of rupture velocity to obtain the corresponding energy that propagates into the far-field. This function, the ratio of far-field to near-fault energy, is typically less than 1/3, inasmuch as most of the near-fault energy remains near the fault and is associated with permanent earthquake deformation. Adding the energy contributions from all of the subfaults yields an estimate of the total seismic energy, which can be compared with independent energy estimates based on seismic-energy flux measured in the far-field, often at teleseismic distances. Estimates of seismic energy based on slip models are robust, in that different models, for a given earthquake, yield energy estimates that are in close agreement. Moreover, the slip-model estimates of energy are generally in good accord with independent estimates by others, based on regional or teleseismic data. Apparent stress is estimated for each subfault by dividing the corresponding seismic moment into the radiated energy. Distributions of apparent stress over an earthquake fault zone show considerable heterogeneity, with peak values that are typically about double the whole-earthquake values (based on the ratio of seismic energy to seismic moment). The range of apparent stresses estimated for subfaults of the events studied here is similar to the range of apparent stresses for earthquakes in continental settings, with peak values of about 8 MPa in each case. For earthquakes in compressional tectonic settings, peak apparent stresses at a given depth are substantially greater than corresponding peak values from events in extensional settings; this suggests that crustal strength, inferred from laboratory measurements, may be a limiting factor. Lower bounds on shear stresses inferred from the apparent stress distribution of the 1995 Kobe earthquake are consistent with tectonic-stress estimates reported by Spudich et al. (1998), based partly on slip-vector rake changes.
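For concreteness, the subfault quantity described above (radiated energy divided by seismic moment, scaled by the shear modulus) can be written as σ_a = μ E_R / M0; the sketch below evaluates it with illustrative values, not numbers from the slip models discussed.

```python
# Apparent stress per subfault: sigma_a = mu * E_R / M0. Values are illustrative.
mu = 3.0e10          # Pa, crustal shear modulus (typical assumed value)
E_R = 2.0e13         # J, radiated energy attributed to one subfault (illustrative)
M0 = 8.0e16          # N*m, seismic moment of that subfault (illustrative)

sigma_a = mu * E_R / M0
print(f"apparent stress ~ {sigma_a/1e6:.1f} MPa")
```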
Asano, K; Masui, Y; Masuda, K; Fujinaga, T
2002-01-01
To evaluate the feasibility of noninvasive estimation of cardiac systolic function using transthoracic continuous-wave Doppler echocardiography in dogs with mitral regurgitation. Seven mongrel dogs with experimental mitral regurgitation were used. Left ventriculography and measurement of pulmonary capillary wedge pressure were performed under inhalational anaesthesia. A micromanometer-tipped catheter was placed into the left ventricle and transthoracic echocardiography was carried out. The peak rate of left ventricular pressure rise (peak dP/dt) was derived simultaneously by continuous-wave Doppler and manometer measurements. The Doppler-derived dP/dt was compared with the catheter-measured peak dP/dt in the dogs. Classification of the severity of mitral regurgitation in the dogs was as follows: 1+, 2 dogs; 2+, 1 dog; 3+, 2 dogs; 4+, 1 dog; and not examined, 1 dog. We were able to derive dP/dt from the transthoracic continuous-wave Doppler echocardiography in all dogs. Doppler-derived dP/dt had a significant correlation with the catheter-measured peak dP/dt (r = 0.90, P < 0.0001). It was demonstrated that transthoracic continuous-wave Doppler echocardiography is a feasible method of noninvasive estimation of cardiac systolic function in dogs with experimental mitral regurgitation and may have clinical usefulness in canine patients with spontaneous mitral regurgitation.
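One commonly used way to obtain dP/dt from a continuous-wave Doppler trace of a mitral regurgitant jet uses the simplified Bernoulli relation (ΔP = 4v², in mmHg): the pressure rises by 32 mmHg as the jet accelerates from 1 to 3 m/s, so dP/dt is 32 mmHg divided by the measured time interval. The abstract does not state that this exact procedure was used, and the timing value below is illustrative.

```python
# Doppler-derived dP/dt via the simplified Bernoulli relation (dP = 4*v^2, mmHg):
# 32 mmHg pressure rise as the regurgitant jet accelerates from 1 to 3 m/s.
v1, v2 = 1.0, 3.0                     # m/s
dp = 4.0 * v2**2 - 4.0 * v1**2        # = 32 mmHg
dt = 0.025                            # s, measured rise time on the trace (illustrative)

print(f"Doppler-derived dP/dt ~ {dp/dt:.0f} mmHg/s")   # ~1280 mmHg/s here
```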
NASA Astrophysics Data System (ADS)
Wakimoto, Hiroki; Nakazawa, Haruo; Matsumoto, Takashi; Nabetani, Yoichi
2018-04-01
For P-i-N diodes formed by implanting boron ions into a highly resistive n-type Si substrate and activating them, it is found that there is a large difference in the leakage current between relatively low temperature furnace annealing (FA) and high temperature laser annealing (LA) for activation of the p-layer. Since electron trap levels in the n-type Si substrate are supposed to be affected, we report on Deep Level Transient Spectroscopy (DLTS) measurement results investigating what kinds of trap levels are formed. As a result, three kinds of electron trap levels are confirmed in the region of 1-4 μm from the p-n junction. Each DLTS peak intensity of the LA sample is smaller than that of the FA sample. In particular, the trap level closest to the silicon band gap center, which most strongly affects the reverse leakage current, was not detected in the LA sample. It is considered that the electron trap levels are decreased due to the thermal energy of LA. On the other hand, four kinds of trap levels are confirmed in the region of 38-44 μm from the p-n junction, and the DLTS peak intensities of FA and LA are almost the same, which suggests that the thermal energy of LA has not reached this region. The large difference between the reverse leakage currents of FA and LA is considered to be caused by the deep trap level, which is estimated to be interstitial boron.
Berenstein, Carlo K; Vanpoucke, Filiep J; Mulder, Jef J S; Mens, Lucas H M
2010-12-01
Tripolar and other electrode configurations that use simultaneous stimulation inside the cochlea have been tested to reduce channel interactions compared to the monopolar stimulation conventionally used in cochlear implant systems. However, these "focused" configurations require increased current levels to achieve sufficient loudness. In this study, we investigate whether highly accurate recordings of the intracochlear electrical field set up by monopolar and tripolar configurations correlate with their effect on loudness. We related the intra-scalar potential distribution to behavioral loudness by introducing a free parameter (α) which parameterizes the degree to which the potential field peak set up inside the scala tympani is still present at the location of the targeted neural tissue. Loudness balancing was performed on four levels between behavioral threshold and the most comfortable loudness level in a group of 10 experienced Advanced Bionics cochlear implant users. The effect of the amount of focusing on loudness was well explained by α for each subject and location along the basilar membrane. We found that α was unaffected by presentation level. Moreover, the ratios between the monopolar and tripolar currents, balanced for equal loudness, were approximately the same for all presentation levels. This suggests a linear loudness growth with increasing current level and that the equal peak hypothesis may predict loudness at threshold as well as at supra-threshold levels. These results suggest that advanced electrical field imaging, complemented with limited psychophysical testing, more specifically at only one presentation level, enables estimation of the loudness growth of complex electrode configurations.
The Monitoring Of Thunderstorm In Sao Paulo's Urban Areas, Brazil
NASA Astrophysics Data System (ADS)
Gin, R. B.; Pereira, A.; Beneti, C.; Jusevicius, M.; Kawano, M.; Bianchi, R.; Bellodi, M.
2005-12-01
Thunderstorms in urban areas in the vicinity of Sao Bernardo do Campo, Sao Paulo, were monitored from November 2004 to March 2005. Eight thunderstorms were observed with local electric field measurements, a video camera, the Brazilian Lightning Location Network (RINDAT), and weather radar. Most of these thunderstorms were associated with local convection and cold fronts. Some of the events produced floods in the vicinity of Sao Bernardo and in the Metropolitan Area of Sao Paulo (MASP) and were associated with the local sea breeze circulation and the heat island effect. The convective cells exceeded 100 km x 100 km in area and were active for 2 to 3 hours. The local electric field measurements identified the electrification stage of the thunderstorms, strong lightning transients, and total lightning rates above 10 flashes per minute. About 29,500 cloud-to-ground (CG) lightning flashes were analyzed. Of the CG flashes analyzed, about 94 percent were negative strokes, with an average peak current above 25 kA, which is common for this region. Some lightning images were obtained by video camera and compared with the lightning transients and the lightning detection network data. Most of these lightning transients had continuing-current durations between 100 ms and 200 ms. A CG lightning flash that occurred on 25 February was visually observed 3.5 km from the FEI campus, Sao Bernardo do Campo. This flash had negative polarity and an estimated peak current above 30 kA. A spider lightning flash was visually observed over the FEI campus on 17 March; no lightning transients or lightning location network records were found for this event.
Simulation of air quality impacts from prescribed fires on an urban area.
Hu, Yongtao; Odman, M Talat; Chang, Michael E; Jackson, William; Lee, Sangil; Edgerton, Eric S; Baumann, Karsten; Russell, Armistead G
2008-05-15
On February 28, 2007, a severe smoke event caused by prescribed forest fires occurred in Atlanta, GA. Later smoke events in the southeastern metropolitan areas of the United States caused by the Georgia-Florida wild forest fires further magnified the significance of forest fire emissions and the benefits of being able to accurately predict such occurrences. By using preburning information, we utilize an operational forecasting system to simulate the potential air quality impacts from two large February 28th fires. Our "forecast" predicts that the scheduled prescribed fires would have resulted in over 1 million Atlanta residents being potentially exposed to fine particulate matter (PM2.5) levels of 35 μg m^-3 or higher from 4 p.m. to midnight. The simulated peak 1 h PM2.5 concentration is about 121 μg m^-3. Our study suggests that the current air quality forecasting technology can be a useful tool for helping the management of fire activities to protect public health. With postburning information, our "hindcast" predictions improved significantly on timing and location and slightly on peak values. "Hindcast" simulations also indicated that additional isoprenoid emissions from pine species temporarily triggered by the fire could induce rapid ozone and secondary organic aerosol formation during late winter. Results from this study suggest that fire-induced biogenic volatile organic compound emissions missing from current fire emissions estimates should be included in the future.
Tracking speech comprehension in space and time.
Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D
2006-07-01
A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words, presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli, peaked 100-150 ms after the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting standard from deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and deviant stimulus, showed the same significant correlation with word recognition processes.
DNA content analysis allows discrimination between Trypanosoma cruzi and Trypanosoma rangeli.
Naves, Lucila Langoni; da Silva, Marcos Vinícius; Fajardo, Emanuella Francisco; da Silva, Raíssa Bernardes; De Vito, Fernanda Bernadelli; Rodrigues, Virmondes; Lages-Silva, Eliane; Ramírez, Luis Eduardo; Pedrosa, André Luiz
2017-01-01
Trypanosoma cruzi, a human protozoan parasite, is the causative agent of Chagas disease. Currently the species is divided into six taxonomic groups. The genome of the CL Brener clone has been estimated to be 106.4-110.7 Mb, and DNA content analyses revealed that it is a diploid hybrid clone. Trypanosoma rangeli is a hemoflagellate that has the same reservoirs and vectors as T. cruzi; however, it is non-pathogenic to vertebrate hosts. The haploid genome of T. rangeli was previously estimated to be 24 Mb. The parasitic strains of T. rangeli are divided into KP1(+) and KP1(-). Thus, the objective of this study was to investigate the DNA content in different strains of T. cruzi and T. rangeli by flow cytometry. All T. cruzi and T. rangeli strains yielded cell cycle profiles with clearly identifiable G1-0 (2n) and G2-M (4n) peaks. T. cruzi and T. rangeli genome sizes were estimated using the clone CL Brener and the Leishmania major CC1 as reference cell lines because their genome sequences have been previously determined. The DNA content of T. cruzi strains ranged from 87.41 to 108.16 Mb, and the DNA content of T. rangeli strains ranged from 63.25 Mb to 68.66 Mb. No differences in DNA content were observed between KP1(+) and KP1(-) T. rangeli strains. Cultures containing mixtures of the epimastigote forms of T. cruzi and T. rangeli strains resulted in cell cycle profiles with distinct G1 peaks for strains of each species. These results demonstrate that DNA content analysis by flow cytometry is a reliable technique for discrimination between T. cruzi and T. rangeli isolated from different hosts.
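A minimal sketch of the ratio method implicit in the comparison above: an unknown strain's DNA content is estimated from the ratio of its G1-0 peak fluorescence to that of a reference of known genome size. The fluorescence values below are illustrative placeholders.

```python
# Genome-size estimation from flow cytometry: DNA content of an unknown strain is the
# reference genome size scaled by the ratio of G1-0 peak fluorescences. Values illustrative.
ref_genome_mb = 110.7          # CL Brener reference size quoted in the abstract
ref_g1_fluorescence = 520.0    # arbitrary units, G1-0 peak of the reference (illustrative)
sample_g1_fluorescence = 440.0 # arbitrary units, G1-0 peak of the test strain (illustrative)

sample_genome_mb = ref_genome_mb * sample_g1_fluorescence / ref_g1_fluorescence
print(f"estimated DNA content ~ {sample_genome_mb:.1f} Mb")
```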
AC losses in (Bi,Pb)2Sr2Ca2Cu3Ox tapes
NASA Astrophysics Data System (ADS)
D'Anna, G.; Indenbom, M. V.; André, M.-O.; Benoit, W.; Grivel, J.-C.; Hensel, B.; Flükiger, R.
1994-05-01
A double peak structure is observed in the AC losses of (Bi,Pb)2Sr2Ca2Cu3Ox silver-sheathed tapes using a torsion-pendulum oscillator. The low-temperature peak is associated with intragrain flux expulsion, while the high-temperature peak results from a macroscopic current path around the whole sample due to a well-coupled fraction of the grains. The flux pinning by the dislocations forming small-angle grain boundaries is suggested to control the transport current.
Digital processing with single electrons for arbitrary waveform generation of current
NASA Astrophysics Data System (ADS)
Okazaki, Yuma; Nakamura, Shuji; Onomitsu, Koji; Kaneko, Nobu-Hisa
2018-03-01
We demonstrate arbitrary waveform generation of current using a GaAs-based single-electron pump. In our experiment, a digital processing algorithm known as delta-sigma modulation is incorporated into single-electron pumping to generate a density-modulated single-electron stream, by which we demonstrate the generation of arbitrary waveforms of current including sinusoidal, square, and triangular waves with a peak-to-peak amplitude of approximately 10 pA and an output bandwidth ranging from dc to close to 1 MHz. The developed current generator can be used as the precise and calculable current reference required for measurements of current noise in low-temperature experiments.
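As a rough illustration of how a density-modulated single-electron stream can synthesize a waveform, the sketch below applies a first-order delta-sigma (pulse-density) modulator to a normalized target signal. The modulator order, clock frequency, and signal frequency are assumptions for illustration and are not taken from the experiment.

```python
import numpy as np

def delta_sigma_bitstream(target):
    """target: desired current normalized to the maximum pump rate, one value per clock tick."""
    bits = np.zeros(len(target), dtype=int)
    acc = 0.0                          # integrator (running quantization error)
    for i, x in enumerate(target):
        acc += x                       # integrate the target
        if acc >= 1.0:                 # 1-bit quantizer: fire one pump cycle
            bits[i] = 1
            acc -= 1.0                 # feedback: one electron was delivered
    return bits

# Example: one period of a sine wave, oversampled by a hypothetical 100 MHz pump clock.
f_clock, f_signal = 100e6, 1e3
n = int(f_clock / f_signal)
t = np.arange(n) / f_clock
target = 0.5 * (1 + np.sin(2 * np.pi * f_signal * t))   # normalized to [0, 1]
bits = delta_sigma_bitstream(target)

# Average current I = e * f_clock * duty cycle; at 50% duty this is ~8 pA,
# of the same order as the ~10 pA peak-to-peak amplitude quoted above.
e = 1.602e-19
print(f"mean current ~ {e * f_clock * bits.mean():.3e} A")
```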
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P.; Liu, G. Z.; Science and Technology on High Power Microwave Laboratory, Northwest Institute of Nuclear Technology, Xi'an 710024
The emission threshold of explosive emission cathodes (EECs) is an important factor for beam quality. It can affect the explosive emission delay time, the plasma expansion process on the cathode surface, and even the current amplitude when the current is not fully space-charge-limited. This paper investigates the influence of the emission threshold of an annular EEC on the current waveform in a foilless diode when the current is measured by a Rogowski coil. Particle-in-cell simulations, performed under some tolerable and necessary simplifications, show that the long explosive emission delay time of high-threshold cathodes may leave an apparent peak of displacement current on the rising edge of the current waveform; this occurs only when electron emission starts after the peak. Experiments performed at a diode voltage of 1 MV and a repetition rate of 20 Hz demonstrate that the graphite cathode has a lower emission threshold and a longer lifetime than the stainless steel cathode, as judged from the variation of the displacement-current peak on the rising edge of the current waveform.
Edge Currents and Stability in DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D M; Fenstermacher, M E; Finkenthal, D K
2004-12-01
Understanding the stability physics of the H-mode pedestal in tokamak devices requires an accurate measurement of plasma current in the pedestal region with good spatial resolution. Theoretically, the high pressure gradients achieved in the edge of H-mode plasmas should lead to generation of a significant edge current density peak through bootstrap and Pfirsch-Schlüter effects. This edge current is important for the achievement of second stability in the context of coupled magnetohydrodynamic (MHD) modes which are both pressure (ballooning) and current (peeling) driven. Many aspects of edge localized mode (ELM) behavior can be accounted for in terms of an edge current density peak, with the identification of Type 1 ELMs as intermediate-n toroidal mode number MHD modes being a natural feature of this model. The development of the edge localized instabilities in tokamak experiments code (ELITE) based on this model allows one to efficiently calculate the stability and growth of the relevant modes for a broad range of plasma parameters and thus provides a framework for understanding the limits on pedestal height. This, however, requires an accurate assessment of the edge current. While estimates of j_edge can be made based on specific bootstrap models, their validity may be limited in the edge (gradient scale lengths comparable to orbit size, large changes in collisionality, etc.). Therefore it is highly desirable to have an actual measurement. Such measurements have been made on the DIII-D tokamak using combined polarimetry and spectroscopy of an injected lithium beam. By analyzing one of the Zeeman-split 2S-2P lithium resonance line components, one can obtain direct information on the local magnetic field components. These values allow one to infer details of the edge current density. Because of the negligible Stark mixing of the relevant atomic levels in lithium, this method of determining j(r) is insensitive to the large local electric fields typically found in enhanced confinement (H-mode) edges, and thus avoids an ambiguity common to MSE measurements of B_pol.
Edge Currents and Stability in DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D M; Fenstermacher, M E; Finkenthal, D K
2005-05-05
Understanding the stability physics of the H-mode pedestal in tokamak devices requires an accurate measurement of plasma current in the pedestal region with good spatial resolution. Theoretically, the high pressure gradients achieved in the edge of H-mode plasmas should lead to generation of a significant edge current density peak through bootstrap and Pfirsch-Schlüter effects. This edge current is important for the achievement of second stability in the context of coupled magnetohydrodynamic (MHD) modes which are both pressure (ballooning) and current (peeling) driven [1]. Many aspects of edge localized mode (ELM) behavior can be accounted for in terms of an edge current density peak, with the identification of Type 1 ELMs as intermediate-n toroidal mode number MHD modes being a natural feature of this model [2]. The development of the edge localized instabilities in tokamak experiments code (ELITE) based on this model allows one to efficiently calculate the stability and growth of the relevant modes for a broad range of plasma parameters [3,4] and thus provides a framework for understanding the limits on pedestal height. This, however, requires an accurate assessment of the edge current. While estimates of j_edge can be made based on specific bootstrap models, their validity may be limited in the edge (gradient scale lengths comparable to orbit size, large changes in collisionality, etc.). Therefore it is highly desirable to have an actual measurement. Such measurements have been made on the DIII-D tokamak using combined polarimetry and spectroscopy of an injected lithium beam [5,6]. By analyzing one of the Zeeman-split 2S-2P lithium resonance line components, one can obtain direct information on the local magnetic field components. These values allow one to infer details of the edge current density. Because of the negligible Stark mixing of the relevant atomic levels in lithium, this method of determining j(r) is insensitive to the large local electric fields typically found in enhanced confinement (H-mode) edges, and thus avoids an ambiguity common to MSE measurements of B_pol.
Methods and apparatus for reducing peak wind turbine loads
Moroz, Emilian Mieczyslaw
2007-02-13
A method for reducing peak loads of wind turbines in a changing wind environment includes measuring or estimating an instantaneous wind speed and direction at the wind turbine and determining a yaw error of the wind turbine relative to the measured instantaneous wind direction. The method further includes comparing the yaw error to a yaw error trigger that has different values at different wind speeds and shutting down the wind turbine when the yaw error exceeds the yaw error trigger corresponding to the measured or estimated instantaneous wind speed.
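The shutdown logic described above reduces to comparing the instantaneous yaw error against a wind-speed-dependent trigger. A minimal sketch follows; the trigger table values and linear interpolation between them are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical trigger table: allowable yaw error (degrees) versus wind speed (m/s).
trigger_wind_speeds = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
trigger_yaw_errors  = np.array([45.0, 30.0, 20.0, 12.0, 8.0])

def should_shut_down(wind_speed, wind_direction, nacelle_heading):
    """Return True if the yaw error exceeds the trigger for the measured wind speed."""
    # Yaw error: smallest angular difference between nacelle heading and wind direction.
    yaw_error = abs((wind_direction - nacelle_heading + 180.0) % 360.0 - 180.0)
    # Wind-speed-dependent trigger, linearly interpolated between table points.
    trigger = np.interp(wind_speed, trigger_wind_speeds, trigger_yaw_errors)
    return yaw_error > trigger

# Example: an 18 m/s gust arriving 25 degrees off the current nacelle heading.
print(should_shut_down(wind_speed=18.0, wind_direction=115.0, nacelle_heading=90.0))
```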
Laser-photofield emission from needle cathodes for low-emittance electron beams.
Ganter, R; Bakker, R; Gough, C; Leemann, S C; Paraliev, M; Pedrozzi, M; Le Pimpec, F; Schlott, V; Rivkin, L; Wrulich, A
2008-02-15
Illumination of a ZrC needle with short laser pulses (16 ps, 266 nm), while high-voltage pulses (-60 kV, 2 ns, 30 Hz) are applied, produces photo-field-emitted electron bunches. The electric field is high and varies rapidly over the needle surface, so the quantum efficiency (QE) near the apex can be much higher than for a flat photocathode due to the Schottky effect. Up to 150 pC (2.9 A peak current) have been extracted by photo-field emission from a ZrC needle. The effective emitting area has an estimated radius below 50 µm, leading to a theoretical intrinsic emittance below 0.05 mm mrad.
Bottom-up determination of air-sea momentum exchange under a major tropical cyclone.
Jarosz, Ewa; Mitchell, Douglas A; Wang, David W; Teague, William J
2007-03-23
As a result of increasing frequency and intensity of tropical cyclones, an accurate forecasting of cyclone evolution and ocean response is becoming even more important to reduce threats to lives and property in coastal regions. To improve predictions, accurate evaluation of the air-sea momentum exchange is required. Using current observations recorded during a major tropical cyclone, we have estimated this momentum transfer from the ocean side of the air-sea interface, and we discuss it in terms of the drag coefficient. For winds between 20 and 48 meters per second, this coefficient initially increases and peaks at winds of about 32 meters per second before decreasing.
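A highly simplified numerical sketch of the "bottom-up" idea follows: the surface wind stress is inferred from the observed depth-integrated current response and then converted to a drag coefficient. The momentum balance shown here keeps only the acceleration term (neglecting pressure gradients, Coriolis, and bottom stress), and all numbers are illustrative rather than from the study.

```python
# Illustrative bottom-up drag-coefficient estimate under a crude momentum balance:
# wind stress ~ rho_water * depth * d(u)/dt, Cd = tau / (rho_air * U10^2).
rho_water = 1025.0    # kg m^-3, seawater density
rho_air   = 1.2       # kg m^-3, air density
depth     = 60.0      # m, hypothetical water-column depth at a mooring

du_dt      = 2.0e-5   # m s^-2, hypothetical depth-averaged along-wind acceleration
wind_speed = 32.0     # m s^-1, 10-m wind speed

tau = rho_water * depth * du_dt          # inferred surface wind stress, N m^-2
cd  = tau / (rho_air * wind_speed**2)    # drag coefficient (order 10^-3, as expected)
print(f"tau ~ {tau:.2f} N/m^2, Cd ~ {cd:.4f}")
```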
Fallout from the Chernobyl nuclear disaster and congenital malformations in Europe.
Hoffmann, W
2001-01-01
Investigators estimate that the population exposure that resulted from the Chernobyl fallout is in the range of natural background radiation for most European countries. Given current radiobiologic knowledge, health effects, if any, would not be measurable with epidemiologic tools. In several independent reports, however, researchers have described isolated peaks in the prevalence of congenital malformations in the cohort conceived immediately after onset of the fallout. The consistency of the time pattern and the specific types of malformation raise concern about their significance. In this study, the author summarizes findings from Turkey, Belarus, Croatia, Finland, Germany, and other countries, and discusses implications for radiation protection and public health.
Liu, Cheng; Li, Shiying; Gu, Yanjuan; Xiong, Huahua; Wong, Wing-Tak; Sun, Lei
2018-05-07
Tumor proteases have been recognized as significant regulators in the tumor microenvironment, but current strategies for in vivo protease imaging have tended to focus on probe design rather than on novel imaging strategies that leverage both the imaging technique and the probe. This is the first report to investigate the ability of multispectral photoacoustic imaging (PAI) to estimate the distribution of protease cleavage sites inside living tumor tissue by using an activatable photoacoustic (PA) probe. The protease MMP-2 is selected as the target. In this probe, gold nanocages (GNCs) with an absorption peak at ~800 nm and fluorescent dye molecules with an absorption peak at ~680 nm are conjugated via a specific enzymatic peptide substrate. Upon enzymatic activation by MMP-2, the peptide substrate is cleaved and the chromophores are released. Due to the different retention speeds of large GNCs and small dye molecules, the probe alters its intrinsic absorption profile and produces a distinct change in the PA signal. A multispectral PAI technique that can distinguish different chromophores based on intrinsic PA spectral signatures is applied to estimate the signal composition changes and indicate the cleavage interaction sites. Finally, the multispectral PAI technique with the activatable probe is tested in solution, in cultured cells, and in a subcutaneous tumor model in vivo. Experiments in solution (enzyme ± inhibitor), in cell culture (± inhibitor), and in the in vivo tumor model (probe administration ± inhibitor) demonstrated that the probe was cleaved by the targeted enzyme. In particular, the in vivo estimate of the cleavage site distribution was validated against ex vivo immunohistochemistry analysis. This novel synergy of the multispectral PAI technique and the activatable probe is a potential strategy for estimating the distribution of tumor protease activity in vivo.
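The core of resolving the GNC and dye contributions in multispectral PA data is linear spectral unmixing: per-pixel PA amplitudes at several wavelengths are decomposed against known chromophore absorption signatures. A minimal two-wavelength sketch, with illustrative (not measured) absorption coefficients and PA amplitudes:

```python
import numpy as np

# Rows: wavelengths (~680 nm, ~800 nm); columns: chromophores (dye, GNC).
# Entries are relative absorption coefficients per unit concentration (illustrative).
A = np.array([[1.00, 0.15],    # dye absorbs strongly near 680 nm
              [0.10, 1.00]])   # GNCs absorb strongly near 800 nm

# Measured PA amplitudes at the two wavelengths for one image pixel (illustrative).
p = np.array([0.62, 0.91])

# Solve A @ c = p for the relative chromophore signals c = (dye, GNC).
c = np.linalg.solve(A, p)
print(f"relative dye signal {c[0]:.2f}, relative GNC signal {c[1]:.2f}")
```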
NASA Astrophysics Data System (ADS)
Scaini, Anna; Hissler, Christophe; Fenicia, Fabrizio; Juilleret, Jérôme; Iffly, Jean François; Pfister, Laurent; Beven, Keith
2018-03-01
Subsurface flow is often recognized as a dominant runoff generation process. However, observing subsurface properties, and understanding how they control flow pathways, remains challenging. This paper investigates how surface slope and bedrock cleavage control subsurface flow pathways in a slate bedrock headwater catchment in Luxembourg, characterised by a double-peak streamflow response. We use a range of experimental techniques, including field observations of soil and bedrock characteristics, and a sprinkling experiment at a site located 40 m upslope from the stream channel. The sprinkling experiment uses Br- as a tracer, which is measured at a well downslope from the plot and at various locations along the stream, together with well and stream hydrometric responses. The sprinkling experiment is used to estimate velocities and celerities, which in turn are used to infer flow pathways. Our results indicate that the single or first peak of double-peak events is rainfall-driven (controlled by rainfall) while the second peak is storage-driven (controlled by storage). The comparison between velocity and celerity estimates suggests a fast flowpath component connecting the hillslope to the stream, but velocity information was too scarce to fully support such a hypothesis. In addition, different estimates of celerities suggest a seasonal influence of both rainfall intensity and residual water storage on the celerity responses at the hillslope scale. At the catchment outlet, the estimate of the total mass of Br- recovered in the stream was about 2.5% of the application. Further downstream, the estimated mass of Br- was about 4.0% of the application. This demonstrates that flowpaths do not appear to align with the slope gradient. In contrast, they appear to follow the strike of the bedrock cleavage. Our results have expanded our understanding of the importance of the subsurface, in particular the underlying bedrock systems, and the importance of cleavage orientation, as well as topography, in controlling subsurface flow direction in this catchment.
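The velocity/celerity contrast used above can be summarized in a few lines: velocity is the speed of the water (tracer) itself, while celerity is the speed of the pressure response. A minimal sketch with hypothetical travel times (the measured times from the experiment are not reproduced here):

```python
# Illustrative velocity vs celerity calculation for a 40 m hillslope-to-stream distance.
distance_m = 40.0               # sprinkling plot to stream channel

tracer_arrival_h   = 200.0      # hypothetical first arrival of Br- downslope
hydrometric_resp_h = 2.0        # hypothetical lag of the well/stream water-level rise

velocity = distance_m / (tracer_arrival_h * 3600)    # water particle speed, m/s
celerity = distance_m / (hydrometric_resp_h * 3600)  # pressure-wave (response) speed, m/s

# A celerity much larger than the velocity indicates that the streamflow response is
# carried by displaced pre-event water rather than by the sprinkled water itself.
print(f"velocity ~ {velocity:.2e} m/s, celerity ~ {celerity:.2e} m/s")
```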
Paynter, Stuart; Yakob, Laith; Simões, Eric A. F.; Lucero, Marilla G.; Tallo, Veronica; Nohynek, Hanna; Ware, Robert S.; Weinstein, Philip; Williams, Gail; Sly, Peter D.
2014-01-01
We used a mathematical transmission model to estimate when ecological drivers of respiratory syncytial virus (RSV) transmissibility would need to act in order to produce the observed seasonality of RSV in the Philippines. We estimated that a seasonal peak in transmissibility would need to occur approximately 51 days prior to the observed peak in RSV cases (range 49 to 67 days). We then compared this estimated seasonal pattern of transmissibility to the seasonal patterns of possible ecological drivers of transmissibility: rainfall, humidity and temperature patterns, nutritional status, and school holidays. The timing of the seasonal patterns of nutritional status and rainfall were both consistent with the estimated seasonal pattern of transmissibility and these are both plausible drivers of the seasonality of RSV in this setting. PMID:24587222
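The lag between a seasonal peak in transmissibility and the resulting peak in cases can be read off any seasonally forced transmission model. The sketch below uses a simple SIRS model with illustrative parameters (not the fitted values from the study) purely to show how such an offset is computed; the resulting lag depends on the chosen parameters.

```python
import numpy as np

def simulate_sirs(beta0=0.6, amp=0.2, gamma=1/9, omega=1/200, days=10*365, dt=0.25):
    """Seasonally forced SIRS model, forward-Euler integration (illustrative parameters)."""
    n = int(days / dt)
    t = np.arange(n) * dt
    beta = beta0 * (1 + amp * np.cos(2 * np.pi * t / 365.0))   # seasonal transmissibility
    S, I, R = 0.3, 1e-3, 0.699
    incidence = np.zeros(n)
    for k in range(n):
        new_inf = beta[k] * S * I
        dS = -new_inf + omega * R
        dI = new_inf - gamma * I
        dR = gamma * I - omega * R
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
        incidence[k] = new_inf * dt
    return t, beta, incidence

t, beta, inc = simulate_sirs()
last = t >= t[-1] - 365                      # analyze the final (settled) year
lag = (t[last][np.argmax(inc[last])] - t[last][np.argmax(beta[last])]) % 365
print(f"incidence peak lags transmissibility peak by ~{lag:.0f} days")
```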
A hydrodynamic treatment of the tilted cold dark matter cosmological scenario
NASA Technical Reports Server (NTRS)
Cen, Renyue; Ostriker, Jeremiah P.
1993-01-01
A standard hydrodynamic code coupled with a particle-mesh code is used to compute the evolution of a tilted cold dark matter (TCDM) model containing both baryonic matter and dark matter. Six baryonic species are followed, with allowance for both collisional and radiative ionization in every cell. The mean final Sunyaev-Zel'dovich y parameter is estimated to be (5.4 +/- 2.7) x 10^-7, below currently attainable observations, with an rms fluctuation of about (6.0 +/- 3.0) x 10^-7 on arcmin scales. The rate of galaxy formation peaks at a relatively late epoch (z is about 0.5). As for the mass function, the smallest objects are stabilized against collapse by thermal energy: the mass-weighted mass spectrum peaks in the vicinity of 10^9.1 solar masses, with a reasonable fit to the Schechter luminosity function if the baryon mass to blue light ratio is about 4. It is shown that a bias factor of 2, required for the model to be consistent with COBE DMR signals, is probably a natural outcome in the present multiple component simulations.
Meteoric Magnesium Ions in the Martian Atmosphere
NASA Technical Reports Server (NTRS)
Pesnell, William Dean; Grebowsky, Joseph
1999-01-01
From a thorough modeling of the altitude profile of meteoritic ionization in the Martian atmosphere we deduce that a persistent layer of magnesium ions should exist around an altitude of 70 km. Based on current estimates of the meteoroid mass flux density, a peak ion density of about 10^4 ions/cm^3 is predicted. Allowing for the uncertainties in all of the model parameters, this value is probably within an order of magnitude of the correct density. Of these parameters, the peak density is most sensitive to the meteoroid mass flux density, which directly determines the ablated line density that enters the source function for Mg. Unlike the terrestrial case, where the metallic ion production is dominated by charge exchange of the deposited neutral Mg with the ambient ions, Mg+ in the Martian atmosphere is produced predominantly by photoionization. The low ultraviolet absorption of the Martian atmosphere makes Mars an excellent laboratory in which to study meteoric ablation. Resonance lines not seen in the spectra of terrestrial meteors may be visible to a surface observatory in the Martian highlands.
Nazim, M; Ameen, Sadia; Seo, Hyung-Kee; Shin, Hyung Shik
2015-06-12
A novel organic π-conjugated chromophore (named RCNR), based on a fumaronitrile-core acceptor and terminal alkylated bithiophene, was designed, synthesized and utilized as an electron-donor material for the solution-processed fabrication of bulk-heterojunction (BHJ) small molecule organic solar cells (SMOSCs). The synthesized chromophore exhibited a broad absorption peak near the green region and a strong emission peak, owing to the strong electron-withdrawing nature of the two nitrile (-CN) groups of the fumaronitrile acceptor. A highest occupied molecular orbital (HOMO) energy level of -5.82 eV and a lowest unoccupied molecular orbital (LUMO) energy level of -3.54 eV were estimated for RCNR due to the strong electron-accepting tendency of the -CN groups. The fabricated SMOSC devices with an RCNR:PC60BM (1:3, w/w) active layer exhibited a reasonable power conversion efficiency (PCE) of ~2.69% with a high short-circuit current density (JSC) of ~9.68 mA/cm^2 and an open-circuit voltage (VOC) of ~0.79 V.
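The reported metrics can be tied together with the standard relation PCE = Jsc x Voc x FF / Pin. A quick consistency check, assuming standard AM1.5G illumination of 100 mW/cm^2 (the abstract does not state the input power explicitly):

```python
# Implied fill factor from the reported Jsc, Voc and PCE (AM1.5G 100 mW/cm^2 assumed).
jsc  = 9.68    # mA/cm^2
voc  = 0.79    # V
pce  = 2.69    # %
p_in = 100.0   # mW/cm^2

ff = (pce / 100 * p_in) / (jsc * voc)     # PCE = Jsc * Voc * FF / Pin
print(f"implied fill factor ~ {ff:.2f}")  # ~0.35
```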
Return period adjustment for runoff coefficients based on analysis in undeveloped Texas watersheds
Dhakal, Nirajan; Fang, Xing; Asquith, William H.; Cleveland, Theodore G.; Thompson, David B.
2013-01-01
The rational method for peak discharge (Qp) estimation was introduced in the 1880s. The runoff coefficient (C) is a key parameter for the rational method that has an implicit meaning of rate proportionality, and the C has been declared a function of the annual return period by various researchers. Rate-based runoff coefficients as a function of the return period, C(T), were determined for 36 undeveloped watersheds in Texas using peak discharge frequency from previously published regional regression equations and rainfall intensity frequency for return periods T of 2, 5, 10, 25, 50, and 100 years. The C(T) values and return period adjustments C(T)/C(T=10 year) determined in this study are most applicable to undeveloped watersheds. The return period adjustments determined for the Texas watersheds in this study and those extracted from prior studies of non-Texas data exceed values from well-known literature such as design manuals and textbooks. Most importantly, the return period adjustments exceed values currently recognized in Texas Department of Transportation design guidance when T>10 years.
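The rational method and its return-period adjustment are straightforward to state in code. A minimal sketch, where Qp = C(T) x i x A and C(T) is obtained by scaling the 10-year coefficient; the numeric adjustment and design inputs below are placeholders, not the values derived for the Texas watersheds:

```python
def rational_peak_discharge(c10, adjustment, intensity_in_hr, area_acres):
    """Qp = C(T) * i * A, approximately in cfs when i is in in/hr and A is in acres."""
    c_t = c10 * adjustment            # C(T) = C(10 yr) * [C(T)/C(10 yr)]
    return c_t * intensity_in_hr * area_acres

# Hypothetical inputs for a small undeveloped watershed and a 100-year storm.
c10 = 0.30                            # 10-year runoff coefficient (illustrative)
adjustment_100yr = 1.6                # C(100)/C(10) return-period adjustment (illustrative)
i_100yr = 4.5                         # in/hr, design rainfall intensity
area = 250.0                          # acres

print(f"Qp ~ {rational_peak_discharge(c10, adjustment_100yr, i_100yr, area):.0f} cfs")
```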
Lenzuni, Paolo
2015-07-01
The purpose of this article is to develop a method for the statistical inference of the maximum peak sound pressure level and of the associated uncertainty. Both quantities are requested by the EU directive 2003/10/EC for a complete and solid assessment of the noise exposure at the workplace. Based on the characteristics of the sound pressure waveform, it is hypothesized that the distribution of the measured peak sound pressure levels follows the extreme value distribution. The maximum peak level is estimated as the largest member of a finite population following this probability distribution. The associated uncertainty is also discussed, taking into account not only the contribution due to the incomplete sampling but also the contribution due to the finite precision of the instrumentation. The largest of the set of measured peak levels underestimates the maximum peak sound pressure level. The underestimate can be as large as 4 dB if the number of measurements is limited to 3-4, which is common practice in occupational noise assessment. The extended uncertainty is also quite large (~2.5 dB), with a weak dependence on the sampling details. Following the procedure outlined in this article, a reliable comparison between the peak sound pressure levels measured in a workplace and the EU directive action limits is possible. Non-compliance can occur even when the largest of the set of measured peak levels is several dB below such limits.
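The underlying idea, estimating the maximum of a finite population from a few measured extreme-value samples, can be sketched with a Gumbel fit: if each measured peak follows Gumbel(mu, beta), the maximum of N independent events follows Gumbel(mu + beta ln N, beta). The sample values and the number of noise events per shift below are illustrative, and the uncertainty treatment of the article is not reproduced here.

```python
import numpy as np
from scipy import stats

measured_peaks_db = np.array([134.2, 135.0, 133.1, 136.4])   # e.g. 4 measured peak levels
n_events_per_shift = 200                                      # hypothetical population size

mu, beta = stats.gumbel_r.fit(measured_peaks_db)              # fit the extreme value distribution

# Maximum of N iid Gumbel(mu, beta) draws is Gumbel(mu + beta*ln N, beta):
max_peak_mode = mu + beta * np.log(n_events_per_shift)        # most probable maximum
max_peak_mean = max_peak_mode + beta * np.euler_gamma          # expected maximum

print(f"largest measured peak:        {measured_peaks_db.max():.1f} dB")
print(f"estimated maximum peak (mode): {max_peak_mode:.1f} dB")
print(f"estimated maximum peak (mean): {max_peak_mean:.1f} dB")
```

As the abstract notes, the largest measured peak sits below the estimated population maximum, which is why comparing the raw maximum of a handful of measurements against an action limit can understate the exposure.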
NASA Astrophysics Data System (ADS)
Schultz, A.; Bonner, L. R., IV
2017-12-01
Current efforts to assess risk to the power grid from geomagnetic disturbances (GMDs) that result in geomagnetically induced currents (GICs) seek to identify potential "hotspots," based on statistical models of GMD storm scenarios and power distribution grounding models that assume that the electrical conductivity of the Earth's crust and mantle varies only with depth. The NSF-supported EarthScope Magnetotelluric (MT) Program operated by Oregon State University has mapped 3-D ground electrical conductivity structure across more than half of the continental US. MT data, the naturally occurring time variations in the Earth's vector electric and magnetic fields at ground level, are used to determine the MT impedance tensor for each site (the ratio of horizontal vector electric and magnetic fields at ground level, expressed as a complex-valued frequency domain quantity). The impedance provides information on the 3-D electrical conductivity structure of the Earth's crust and mantle. We demonstrate that use of 3-D ground conductivity information significantly improves the fidelity of GIC predictions over existing 1-D approaches. We project real-time magnetic field data streams from US Geological Survey magnetic observatories onto a set of linear filters that employ the impedance data and generate estimates of ground-level electric fields at the locations of MT stations. The resulting ground electric fields are projected to and integrated along the paths of power transmission lines. These serve as inputs to power flow models that represent the power transmission grid, yielding a time-varying set of quasi-real-time estimates of reactive power loss at the power transformers that are critical infrastructure for power distribution. We demonstrate that peak reactive power loss, and hence peak risk of transformer damage from GICs, does not necessarily occur during peak GMD storm times, but rather depends on the time evolution of the polarization of the GMD's inducing fields and the complex (3-D) ground electric field response, and on the resulting alignment of the ground electric fields with the power transmission line paths. This informs our efforts to provide a set of real-time tools for power grid operators to use in mitigating damage from space weather events.
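The filtering step at the heart of this approach is the magnetotelluric relation E(f) = Z(f) H(f), applied in the frequency domain. A minimal sketch, assuming a single frequency-independent 2x2 impedance tensor and synthetic white-noise magnetic input; a real pipeline interpolates a measured, frequency-dependent tensor onto the FFT bins and handles units, windowing, and causality carefully.

```python
import numpy as np

fs, n = 1.0, 4096                           # 1 Hz magnetometer sampling, 4096 samples
rng = np.random.default_rng(0)
bx = rng.standard_normal(n)                 # horizontal magnetic field components (nT)
by = rng.standard_normal(n)

# Illustrative impedance tensor (mV/km per nT); off-diagonal dominant, as for a layered Earth.
Z = np.array([[0.1 + 0.05j,  2.0 + 1.0j],
              [-1.8 - 0.9j, -0.1 - 0.05j]])

Bx, By = np.fft.rfft(bx), np.fft.rfft(by)
Ex = Z[0, 0] * Bx + Z[0, 1] * By            # E = Z * H, per frequency bin
Ey = Z[1, 0] * Bx + Z[1, 1] * By
ex, ey = np.fft.irfft(Ex, n), np.fft.irfft(Ey, n)   # ground electric field estimate (mV/km)

# The E-field time series would then be integrated along transmission line paths
# to drive the GIC / reactive-power-loss calculation described above.
print(f"peak |E| ~ {np.max(np.hypot(ex, ey)):.1f} mV/km")
```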
Momentum flux parasitic to free-energy transfer
Stoltzfus-Dueck, T.; Scott, B.
2017-05-11
An often-neglected portion of the radial E×B drift is shown to drive an outward flux of co-current momentum when free energy is transferred from the electrostatic potential to ion parallel flows. This symmetry breaking is fully nonlinear, not quasilinear, necessitated simply by free-energy balance in parameter regimes for which significant energy is dissipated via ion parallel flows. The resulting rotation peaking is counter-current and has a scaling and order of magnitude comparable with experimental observations. Finally, the residual stress becomes inactive when frequencies are much higher than the ion transit frequency, which may explain the observed relation of density peaking and counter-current rotation peaking in the core.
NASA Astrophysics Data System (ADS)
Solari, Sebastián; Egüen, Marta; Polo, María José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
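A simplified sketch of threshold selection in this spirit follows: scan candidate thresholds, fit a Generalized Pareto Distribution (GPD) to the excesses, and score each fit with the Anderson-Darling statistic. This is only illustrative; the statistic is used directly as a relative score, without the modified critical values, goodness-of-fit p-values, or bootstrap uncertainty quantification that the full methodology requires.

```python
import numpy as np
from scipy import stats

def anderson_darling(excesses, shape, loc, scale):
    """Anderson-Darling statistic of a GPD fit to the excesses."""
    x = np.sort(excesses)
    n = len(x)
    F = np.clip(stats.genpareto.cdf(x, shape, loc=loc, scale=scale), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

def select_threshold(data, candidates, min_exceedances=30):
    scores = []
    for u in candidates:
        exc = data[data > u] - u
        if len(exc) < min_exceedances:
            scores.append(np.inf)
            continue
        shape, loc, scale = stats.genpareto.fit(exc, floc=0)   # location fixed at 0
        scores.append(anderson_darling(exc, shape, loc, scale))
    return candidates[int(np.argmin(scores))], scores

# Example with synthetic data: an exponential "bulk" plus a heavier GPD tail above 5.
data = np.concatenate([
    stats.expon.rvs(scale=1.0, size=5000, random_state=1),
    5 + stats.genpareto.rvs(0.2, scale=1.0, size=500, random_state=2),
])
candidates = np.percentile(data, np.arange(80, 99, 1))
u_best, _ = select_threshold(data, candidates)
print(f"selected threshold ~ {u_best:.2f}")
```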
NASA Astrophysics Data System (ADS)
Shin, Sunhae; Rok Kim, Kyung
2016-04-01
We propose complementary double-peak negative differential resistance (NDR) devices with an ultrahigh peak-to-valley current ratio (PVCR) over 10^6, obtained by combining a tunnel diode with conventional CMOS, together with a compact five-state latch circuit built by introducing a standard ternary inverter (STI). At the "high" state of the STI, the n-type NDR device (tunnel diode with nMOS) exhibits the first NDR characteristic, with its first peak and valley set by band-to-band tunneling (BTBT) and trap-assisted tunneling (TAT), whereas the p-type NDR device (tunnel diode with pMOS) exhibits the second NDR characteristic through suppression of the diode current by the off-state MOSFET. The "intermediate" state of the STI permits the double-peak NDR device to operate as a five-state latch with only four transistors, giving a 33% area reduction compared with a binary inverter and a 57% bit-density reduction compared with a binary latch.