Tritium as an indicator of ground-water age in Central Wisconsin
Bradbury, Kenneth R.
1991-01-01
In regions where ground water is generally younger than about 30 years, developing the tritium input history of an area for comparison with the current tritium content of ground water allows quantitative estimates of minimum ground-water age. The tritium input history for central Wisconsin has been constructed using precipitation tritium measured at Madison, Wisconsin and elsewhere. Weighted tritium inputs to ground water reached a peak of over 2,000 TU in 1964, and have declined since that time to about 20-30 TU at present. In the Buena Vista basin in central Wisconsin, most ground-water samples contained elevated levels of tritium, and estimated minimum ground-water ages in the basin ranged from less than one year to over 33 years. Ground water in mapped recharge areas was generally younger than ground water in discharge areas, and estimated ground-water ages were consistent with flow system interpretations based on other data. Estimated minimum ground-water ages increased with depth in areas of downward ground-water movement. However, water recharging through thick moraine sediments was older than water in other recharge areas, reflecting slower infiltration through the sandy till of the moraine.
A revised burial dose estimation procedure for optical dating of youngand modern-age sediments
Arnold, L.J.; Roberts, R.G.; Galbraith, R.F.; DeLong, S.B.
2009-01-01
The presence of genuinely zero-age or near-zero-age grains in modern-age and very young samples poses a problem for many existing burial dose estimation procedures used in optical (optically stimulated luminescence, OSL) dating. This difficulty currently necessitates consideration of relatively simplistic and statistically inferior age models. In this study, we investigate the potential for using modified versions of the statistical age models of Galbraith et??al. [Galbraith, R.F., Roberts, R.G., Laslett, G.M., Yoshida, H., Olley, J.M., 1999. Optical dating of single and multiple grains of quartz from Jinmium rock shelter, northern Australia: Part I, experimental design and statistical models. Archaeometry 41, 339-364.] to provide reliable equivalent dose (De) estimates for young and modern-age samples that display negative, zero or near-zero De estimates. For this purpose, we have revised the original versions of the central and minimum age models, which are based on log-transformed De values, so that they can be applied to un-logged De estimates and their associated absolute standard errors. The suitability of these 'un-logged' age models is tested using a series of known-age fluvial samples deposited within two arroyo systems from the American Southwest. The un-logged age models provide accurate burial doses and final OSL ages for roughly three-quarters of the total number of samples considered in this study. Sensitivity tests reveal that the un-logged versions of the central and minimum age models are capable of producing accurate burial dose estimates for modern-age and very young (<350??yr) fluvial samples that contain (i) more than 20% of well-bleached grains in their De distributions, or (ii) smaller sub-populations of well-bleached grains for which the De values are known with high precision. Our results indicate that the original (log-transformed) versions of the central and minimum age models are still preferable for most routine dating applications, since these age models are better suited to the statistical properties of typical single-grain and multi-grain single-aliquot De datasets. However, the unique error properties of modern-age samples, combined with the problems of calculating natural logarithms of negative or zero-Gy De values, mean that the un-logged versions of the central and minimum age models currently offer the most suitable means of deriving accurate burial dose estimates for very young and modern-age samples. ?? 2009 Elsevier Ltd. All rights reserved.
Atmospheric Science Data Center
2013-04-18
article title: Hurricane Ida Cross-Track Winds ... (MISR) instrument on NASA's Terra satellite passed over Hurricane Ida while it was situated between western Cuba and the Yucatan Peninsula. According to the National Hurricane Center, at 15:00 UTC, the hurricane had an estimated minimum central ...
MacAllister, Jack; Sherwood, Jennifer; Galjour, Joshua; Robbins, Sarah; Zhao, Jinkou; Dam, Kim; Grosso, Ashley; Baral, Stefan D
2015-03-01
To identify gaps in epidemiologic and HIV service coverage data for key populations (KP), including men who have sex with men (MSM), female sex workers (FSW), people who inject drugs (PWID), and transgender persons, in 8 West and Central Africa countries: Cameroon, Chad, Cote d'Ivoire, Democratic Republic of Congo, Ghana, Guinea-Bissau, Niger, and Nigeria. A comprehensive search of peer-reviewed literature was conducted using PubMed and MEDLINE. This search was supplemented by an additional search of relevant non-peer-reviewed, or gray, literature. Available data on HIV prevalence, KP size estimates, HIV prevention service targets, and HIV prevention service coverage, including the availability of population-specific minimum packages of services, were included in the review. No data for transgender persons were found. HIV prevalence data and size estimates were more frequently available for FSW, followed by MSM. Only 2 countries (Ghana and Nigeria) had both KP size estimates and HIV prevalence data for PWID. The degree to which HIV prevention service targets were adopted was highly variable across the selected countries, and the collection of relevant HIV prevention service coverage data for those targets that were identified was inconsistent. Population-specific minimum packages of services were identified in 3 countries (Cote d'Ivoire, Ghana, and Nigeria), although only Ghana and Nigeria included services for PWID. Epidemiologic and HIV prevention service data for FSW, MSM, PWID, and transgender persons remain sparse, and these KP are inconsistently accounted for in-service delivery and nationally endorsed minimum packages of HIV services in West and Central Africa. The strengthening of data collection and reporting to consistently include KP and the inclusion of that data in national planning is imperative for effectively addressing the HIV epidemic.
Sakamoto, Ryo; Okada, Tomohisa; Kanagaki, Mitsunori; Yamamoto, Akira; Fushimi, Yasutaka; Kakigi, Takahide; Arakawa, Yoshiki; Takahashi, Jun C; Mikami, Yoshiki; Togashi, Kaori
2015-01-01
Central neurocytoma was initially believed to be benign tumor type, although atypical cases with more aggressive behavior have been reported. Preoperative estimation for proliferating activity of central neurocytoma is one of the most important considerations for determining tumor management. To investigate predictive values of image characteristics and quantitative measurements of minimum apparent diffusion coefficient (ADCmin) and maximum standardized uptake value (SUVmax) for proliferative activity of central neurocytoma measured by MIB-1 labeling index (LI). Twelve cases of central neurocytoma including one recurrence from January 2001 to December 2011 were included. Preoperative scans were conducted in 11, nine, and five patients for computed tomography (CT), diffusion-weighted imaging (DWI), and fluorine-18-fluorodeoxyglucose positron emission tomography (FDG-PET), respectively, and ADCmin and SUVmax of the tumors were measured. Image characteristics were investigated using CT, T2-weighted (T2W) imaging and contrast-enhanced T1-weighted (T1W) imaging, and their differences were examined using the Fisher's exact test between cases with MIB-1 LI below and above 2%, which is recognized as typical and atypical central neurocytoma, respectively. Correlational analysis was conducted for ADCmin and SUVmax with MIB-1 LI. A P value <0.05 was considered significant. Morphological appearances had large variety, and there was no significant correlation with MIB-1 LI except a tendency that strong enhancement was observed in central neurocytomas with higher MIB-1 LI (P = 0.061). High linearity with MIB-1 LI was observed in ADCmin and SUVmax (r = -0.91 and 0.74, respectively), but only ADCmin was statistically significant (P = 0.0006). Central neurocytoma had a wide variety of image appearance, and assessment of proliferative potential was considered difficult only by morphological aspects. ADCmin was recognized as a potential marker for differentiation of atypical central neurocytomas from the typical ones. © The Foundation Acta Radiologica 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoiber, R.E.; Jepsen, A.
The first extensive measurements by remote-sensing correlation spectrometry of the sulfur dioxide emitted by volcanic plumes indicate that on the order of 10/sup +3/ metric tons of sulfur dioxide gas enter the atmosphere daily from Central American volcanoes. Extrapolation gives a minimum estimate of the annual amount of sulfur dioxide emitted from the world's volcanoes of about 10/sup +7/ metric tons.
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.
2011-01-01
Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth.atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) by using MODIS-Terra data over two Eurasia regions: northern China and fUSSR. High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover types, but the minimum T(sub a) and nighttime T(sub s) have little dependence on the land cover types. The largest difference between maximum T(sub a) and daytime T(sub s) appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T(sub a) were estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T(sub a) were estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum Ta is about 3.0 C.
Impulsive noise suppression in color images based on the geodesic digital paths
NASA Astrophysics Data System (ADS)
Smolka, Bogdan; Cyganek, Boguslaw
2015-02-01
In the paper a novel filtering design based on the concept of exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in the hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from pixels of the window's boundary to its center is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image and in the second step the minimum cost of the central pixel is utilized for construction of the weights of a soft-switching scheme. The experiments performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to the state-of-the-art filtering techniques in terms of the objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied for real time image denoising and also for the enhancement of video streams.
Genepleio software for effective estimation of gene pleiotropy from protein sequences.
Chen, Wenhai; Chen, Dandan; Zhao, Ming; Zou, Yangyun; Zeng, Yanwu; Gu, Xun
2015-01-01
Though pleiotropy, which refers to the phenomenon of a gene affecting multiple traits, has long played a central role in genetics, development, and evolution, estimation of the number of pleiotropy components remains a hard mission to accomplish. In this paper, we report a newly developed software package, Genepleio, to estimate the effective gene pleiotropy from phylogenetic analysis of protein sequences. Since this estimate can be interpreted as the minimum pleiotropy of a gene, it is used to play a role of reference for many empirical pleiotropy measures. This work would facilitate our understanding of how gene pleiotropy affects the pattern of genotype-phenotype map and the consequence of organismal evolution.
USDA-ARS?s Scientific Manuscript database
Accurate estimation of soil organic carbon (SOC) is crucial to efforts to improve soil fertility and stabilize atmospheric CO2 concentrations by sequestering carbon (C) in soils. Soil organic C measurements are, however, often highly variable and management practices can take a long time to produce ...
Hybrid Weighted Minimum Norm Method A new method based LORETA to solve EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This Paper brings forward a new method to solve EEG inverse problem. Based on following physiological characteristic of neural electrical activity source: first, the neighboring neurons are prone to active synchronously; second, the distribution of source space is sparse; third, the active intensity of the sources are high centralized, we take these prior knowledge as prerequisite condition to develop the inverse solution of EEG, and not assume other characteristic of inverse solution to realize the most commonly 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA's low resolution method which emphasizes particularly on 'localization' and FOCUSS's high resolution method which emphasizes particularly on 'separability'. The method is still under the frame of the weighted minimum norm method. The keystone is to construct a weighted matrix which takes reference from the existing smoothness operator, competition mechanism and study algorithm. The basic processing is to obtain an initial solution's estimation firstly, then construct a new estimation using the initial solution's information, repeat this process until the solutions under last two estimate processing is keeping unchanged.
Bathymetry of Lake Manatee, Manatee County, Florida, 2009
Bellino, Jason C.; Pfeiffer, William R.
2010-01-01
Lake Manatee, located in central Manatee County, Florida, is the principal drinking-water source for Manatee and Sarasota Counties. The drainage basin of Lake Manatee encompasses about 120 square miles, and the reservoir covers a surface area of about 1,450 acres at an elevation of 38.8 feet above NAVD 88 or 39.7 feet above NGVD 29. The full pool water-surface elevation is 39.1 feet above NAVD 88 (40.0 feet above NGVD 29), and the estimated minimum usable elevation is 25.1 feet above NAVD 88 (26.0 feet above NGVD 29). The minimum usable elevation is based on the elevation of water intake structures. Manatee County has used the stage/volume relation that was developed from the original survey in the 1960s to estimate the volume of water available for consumption. Concerns about potential changes in storage capacity of the Lake Manatee reservoir, coupled with a recent drought, led to this bathymetry mapping effort.
Systemic Amyloidosis in England: an epidemiological study
Pinney, Jennifer H; Smith, Colette J; Taube, Jessi B; Lachmann, Helen J; Venner, Christopher P; Gibbs, Simon D J; Dungu, Jason; Banypersad, Sanjay M; Wechalekar, Ashutosh D; Whelan, Carol J; Hawkins, Philip N; Gillmore, Julian D
2013-01-01
Epidemiological studies of systemic amyloidosis are scarce and the burden of disease in England has not previously been estimated. In 1999, the National Health Service commissioned the National Amyloidosis Centre (NAC) to provide a national clinical service for all patients with amyloidosis. Data for all individuals referred to the NAC is held on a comprehensive central database, and these were compared with English death certificate data for amyloidosis from 2000 to 2008, obtained from the Office of National Statistics. Amyloidosis was stated on death certificates of 2543 individuals, representing 0·58/1000 recorded deaths. During the same period, 1143 amyloidosis patients followed at the NAC died, 903 (79%) of whom had amyloidosis recorded on their death certificates. The estimated minimum incidence of systemic amyloidosis in the English population in 2008, based on new referrals to the NAC, was 0·4/100 000 population. The incidence peaked at age 60–79 years. Systemic AL amyloidosis was the most common type with an estimated minimum incidence of 0·3/100 000 population. Although there are various limitations to this study, the available data suggest the incidence of systemic amyloidosis in England exceeds 0·8/100 000 of the population. PMID:23480608
Liu, Ying; Geng, Kun; Chu, Yanhao; Xu, Mindi; Zha, Lagabaiyila
2018-03-03
The purpose of this study is to provide a forensic reference data about estimating chronologic age by evaluating the third molar mineralization of Han in central southern China. The mineralization degree of third molars was assessed by Demirjian's classification with modification for 2519 digital orthopantomograms (1190 males, 1329 females; age 8-23 years). The mean ages of the initial mineralization and the crown completion of third molars were around 9.66 and 13.88 years old in males and 9.52 and 14.09 years old in females. The minimum ages of apical closure were around 16 years in both sexes. Twenty-eight at stage C and stage G and 38 and 48 at stage F occurred earlier in males than in females. There was no significant difference between maxillary and mandibular teeth in males and females except that stage C in males. Two formulas were devised to estimate age based on mineralization stages and sexes. In Hunan Province, the person will probably be over age 14, when a third molar reaches the stage G. The results of the study could provide reference for age estimation in forensic cases and clinical dentistry.
Minimum Period of Rotation of Millisecond Pulsars and Pulsar Matter Equations of State
NASA Astrophysics Data System (ADS)
Mikheev, Sergey; Tsvetkov, Victor
2018-02-01
Based on the findings of our previous studies of fast-rotating Newtonian polytropes, we found the relation between the minimum pulsar rotation period, the value of pulsar central density, and the polytropic index. From this relation we come to the conclusion that the value of minimum central density of a pulsar with a peak period is 2.5088 • 1014 g/cm3.
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
Belkahia, Hanène; Ben Said, Mourad; El Mabrouk, Narjesse; Saidani, Mariem; Cherni, Chayma; Ben Hassen, Mariem; Bouattour, Ali; Messadi, Lilia
2017-09-01
In cattle, anaplasmosis is a tick-borne rickettsial disease caused by Anaplasma marginale, A. centrale, A. phagocytophilum, and A. bovis. To date, no information concerning the seasonal dynamics of single and/or mixed infections by different Anaplasma species in bovines are available in Tunisia. In this work, a total of 1035 blood bovine samples were collected in spring (n=367), summer (n=248), autumn (n=244) and winter (n=176) from five different governorates belonging to three bioclimatic zones from the North of Tunisia. Molecular survey of A. marginale, A. centrale and A. bovis in cattle showed that average prevalence rates were 4.7% (minimum 4.1% in autumn and maximum 5.6% in summer), 7% (minimum 3.9% in winter and maximum 10.7% in autumn) and 4.9% (minimum 2.7% in spring and maximum 7.3% in summer), respectively. A. phagocytophilum was not detected in all investigated cattle. Seasonal variations of Anaplasma spp. infection and co-infection rates in overall and/or according to each bioclimatic area were recorded. Molecular characterization of A. marginale msp4 gene indicated a high sequence homology of revealed strains with A. marginale sequences from African countries. Alignment of 16S rRNA A. centrale sequences showed that Tunisian strains were identical to the vaccine strain from several sub-Saharan African and European countries. The comparison of the 16S rRNA sequences of A. bovis variants showed a perfect homology between Tunisian variants isolated from cattle, goats and sheep. These present data are essential to estimate the risk of bovine anaplasmosis in order to develop integrated control policies against multi-species pathogen communities, infecting humans and different animal species, in the country. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Haering, E. A., Jr.; Burcham, F. W., Jr.
1984-01-01
A simulation study was conducted to optimize minimum time and fuel consumption paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum time and fuel trajectory determined from the F-15 flight manual and previous experience. The minimum time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum fuel trajectory used 1 percent less fuel than the pilot's estimate for the minimum fuel trajectory. The F-15 airplane with EMD engines and uptrim, was 23 percent faster than the pilot's estimate. The minimum fuel used was 5 percent less than the estimate.
NASA Astrophysics Data System (ADS)
Pitcher, Bradley W.; Kent, Adam J. R.; Grunder, Anita L.; Duncan, Robert A.
2017-06-01
The late Neogene Deschutes Formation of central Oregon preserves a remarkable volcanic and sedimentary record of the initial stages of High Cascades activity following an eastward shift in the locus of volcanism at 7.5 Ma. Numerous ignimbrite and tephra-fall units are contained within the formation, and since equivalent deposits are relatively rare for the Quaternary Cascades, the eruptions of the earliest High Cascade volcanoes were likely more explosive than those of the Quaternary arc. In this study, the timing and frequency of eruptions which produced 14 laterally extensive marker ignimbrites within the Deschutes Formation are established using 40Ar/39Ar geochronology. Plagioclase 40Ar/39Ar ages for the lowermost (6.25 ± 0.07 Ma) and uppermost (5.45 ± 0.04 Ma) marker ignimbrites indicate that all major explosive eruptions within the Deschutes Formation occurred within a period of 800 ± 54 k.y. (95% confidence interval). Minimum estimates for the volumes of the 14 ignimbrites, using an ArcGIS-based method, range from 1.0 to 9.4 km3 and have a total volume of 62.5 km3. Taken over the 50 km of arc length, the explosive volcanic production rate of the central Oregon High Cascades during Deschutes Formation time was a minimum of 1.8 km3/m.y./km of arc length. By including estimates of the volumes of tephra-fall components, as well as ignimbrites that may have traveled west, we estimate a total volume range, for these 14 eruptions alone, of 188 to 363 km3 ( 121 to 227 km3 DRE), a rate of 4.7-9.1 km3/m.y./km arc length. This explosive volcanic production rate is much higher than the average Quaternary eruption rates, of all compositions, estimated for the entire Cascade arc (1.5-2.5), Alaska Peninsula segment of the Aleutian arc (0.6-1.0), and the Andean southern volcanic zone (1.1-2.0). We suggest that this atypical explosive pulse may result from the onset of regional extension and migration of the magmatic arc, which had the combined effect of increasing magmatic flux and temporarily enhancing melting of more fusible crust.
Sustainable-yield estimation for the Sparta Aquifer in Union County, Arkansas
Hays, Phillip D.
2000-01-01
Options for utilizing alternative sources of water to alleviate overdraft from the Sparta aquifer and ensure that the aquifer can continue to provide abundant water of excellent quality for the future are being evaluated by water managers in Union County. Sustainable yield is a critical element in identifying and designing viable water supply alternatives. With sustainable yield defined and a knowledge of total water demand in an area, any unmet demand can be calculated. The ground-water flow model of the Sparta aquifer was used to estimate sustainable yield using an iterative approach. The Sparta aquifer is a confined aquifer of regional importance that comprises a sequence of unconsolidated sand units that are contained within the Sparta Sand. Currently, the rate of withdrawal in some areas greatly exceeds the rate of recharge to the aquifer and considerable water-level declines have occurred. Ground-water flow model results indicate that the aquifer cannot continue to meet growing water-use demands indefinitely and that water levels will drop below the top of the primary producing sand unit in Union County (locally termed the El Dorado sand) by 2008 if current water-use trends continue. Declines of that magnitude will initiate dewatering of the El Dorado sand. The sustainable yield of the aquifer was calculated by targeting a specified minimum acceptable water level within Union County and varying Union County pumpage within the model to achieve the target water level. Selection of the minimum target water level for sustainable-yield estimation was an important criterion for the modeling effort. In keeping with the State Critical Ground-Water Area designation criteria and the desire of water managers in Union County to improve aquifer conditions and bring the area out of the Critical Ground-Water Area designation, the approximate altitude of the top of the Sparta Sand in central Union County was used as the minimum water level target for estimation of sustainable yield in the county. A specific category of sustainable yield? stabilization yield, reflecting the amount of water that the aquifer can provide while maintaining current water levels? also was determined and provides information for short-term management. The top of the primary producing sand unit (the El Dorado sand) was used as the minimum water-level target for estimating stabilization yield in the county because current minimum water levels in central Union County are near the top of the El Dorado sand. Model results show that withdrawals from the Sparta aquifer in Union County must be reduced to 28 percent of 1997 values to achieve sustainable yield and maintain water levels at the top of the Sparta Sand if future pumpage outside of Union County is assumed to increase at the rate observed from 1985-1997. Results of the simulation define a very large current unmet demand and represent a substantial reduction in the county?s current dependence upon the aquifer. If future pumpage outside of Union County is assumed to increase at double the rate observed from 1985-1997, withdrawals from the Sparta aquifer in Union County must be reduced to 25 percent of 1997 values to achieve sustainable yield. Withdrawals from the Sparta aquifer in Union County must be reduced to about 88 to 91 percent (depending on pumpage growth outside of the county) of 1997 values to stabilize water levels at the top of the El Dorado sand. This result shows that 1997 rate of withdrawal in the county is considerably greater than the rate needed to halt the rapid decline in water levels.
Vercoutere, T.L.; Mullins, H.T.; McDougall, K.; Thompson, J.B.
1987-01-01
Distribution, abundance, and diversity of terrigenous, authigenous, and biogenous material provide evidence of the effect of bottom currents and oxygen minimum zone (OMZ) on continental slope sedimentation offshore central California. Three major OMZ facies are identified, along the upper and lower edges of OMZ and one at its core.-from Authors
Convex central configurations for the n-body problem
NASA Astrophysics Data System (ADS)
Xia, Zhihong
We give a simple proof of a classical result of MacMillan and Bartky (Trans. Amer. Math. Soc. 34 (1932) 838) which states that, for any four positive masses and any assigned order, there is a convex planar central configuration. Moreover, we show that the central configurations we find correspond to local minima of the potential function with fixed moment of inertia. This allows us to show that there are at least six local minimum central configurations for the planar four-body problem. We also show that for any assigned order of five masses, there is at least one convex spatial central configuration of local minimum type. Our method also applies to some other cases.
Sousa, F A; da Silva, J A
2000-04-01
The purpose of this study was to verify the relationship between professional prestige scaled through estimations and the professional prestige scaled through estimation of the number of minimum salaries attributed to professions in function of their prestige in society. Results showed: 1--the relationship between the estimation of magnitudes and the estimation of the number of minimum salaries attributed to the professions in function of their prestige is characterized by a function of potence with an exponent lower than 1,0,2--the orders of degrees of prestige of the professions resultant from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001), considering the modality used as a number (estimation of magnitudes of minimum salaries).
Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates
Curtis, Caroline A.; Bradley, Bethany A.
2016-01-01
Background Although increasingly sophisticated environmental measures are being applied to species distributions models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. Methods We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the ‘plant characteristics’ information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity and Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Results Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7° C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN while grasses and trees had larger ΔCN for minimum precipitation. Conclusion Our results show that distribution data are consistently broader than USDA PLANTS experts’ knowledge and likely provide more robust estimates of climatic tolerance, especially for widespread forbs and grasses. These findings suggest that widely available expert-based climatic tolerance estimates underrepresent species’ fundamental niche and likely fail to capture the realized niche. PMID:27870859
Will Elephants Soon Disappear from West African Savannahs?
Bouché, Philippe; Douglas-Hamilton, Iain; Wittemyer, George; Nianogo, Aimé J.; Doucet, Jean-Louis; Lejeune, Philippe; Vermeulen, Cédric
2011-01-01
Precipitous declines in Africa's native fauna and flora are recognized, but few comprehensive records of these changes have been compiled. Here, we present population trends for African elephants in the 6,213,000 km2 Sudano-Sahelian range of West and Central Africa assessed through the analysis of aerial and ground surveys conducted over the past 4 decades. These surveys are focused on the best protected areas in the region, and therefore represent the best case scenario for the northern savanna elephants. A minimum of 7,745 elephants currently inhabit the entire region, representing a minimum decline of 50% from estimates four decades ago for these protected areas. Most of the historic range is now devoid of elephants and, therefore, was not surveyed. Of the 23 surveyed elephant populations, half are estimated to number less than 200 individuals. Historically, most populations numbering less than 200 individuals in the region were extirpated within a few decades. Declines differed by region, with Central African populations experiencing much higher declines (−76%) than those in West Africa (−33%). As a result, elephants in West Africa now account for 86% of the total surveyed. Range wide, two refuge zones retain elephants, one in West and the other in Central Africa. These zones are separated by a large distance (∼900 km) of high density human land use, suggesting connectivity between the regions is permanently cut. Within each zone, however, sporadic contacts between populations remain. Retaining such connectivity should be a high priority for conservation of elephants in this region. Specific corridors designed to reduce the isolation of the surveyed populations are proposed. The strong commitment of governments, effective law enforcement to control the illegal ivory trade and the involvement of local communities and private partners are all critical to securing the future of elephants inhabiting Africa's northern savannas. PMID:21731620
Age, extent and carbon storage of the central Congo Basin peatland complex.
Dargie, Greta C; Lewis, Simon L; Lawson, Ian T; Mitchard, Edward T A; Page, Susan E; Bocko, Yannick E; Ifo, Suspense A
2017-02-02
Peatlands are carbon-rich ecosystems that cover just three per cent of Earth's land surface, but store one-third of soil carbon. Peat soils are formed by the build-up of partially decomposed organic matter under waterlogged anoxic conditions. Most peat is found in cool climatic regions where unimpeded decomposition is slower, but deposits are also found under some tropical swamp forests. Here we present field measurements from one of the world's most extensive regions of swamp forest, the Cuvette Centrale depression in the central Congo Basin. We find extensive peat deposits beneath the swamp forest vegetation (peat defined as material with an organic matter content of at least 65 per cent to a depth of at least 0.3 metres). Radiocarbon dates indicate that peat began accumulating from about 10,600 years ago, coincident with the onset of more humid conditions in central Africa at the beginning of the Holocene. The peatlands occupy large interfluvial basins, and seem to be largely rain-fed and ombrotrophic-like (of low nutrient status) systems. Although the peat layer is relatively shallow (with a maximum depth of 5.9 metres and a median depth of 2.0 metres), by combining in situ and remotely sensed data, we estimate the area of peat to be approximately 145,500 square kilometres (95 per cent confidence interval of 131,900-156,400 square kilometres), making the Cuvette Centrale the most extensive peatland complex in the tropics. This area is more than five times the maximum possible area reported for the Congo Basin in a recent synthesis of pantropical peat extent. We estimate that the peatlands store approximately 30.6 petagrams (30.6 × 10 15 grams) of carbon belowground (95 per cent confidence interval of 6.3-46.8 petagrams of carbon)-a quantity that is similar to the above-ground carbon stocks of the tropical forests of the entire Congo Basin. Our result for the Cuvette Centrale increases the best estimate of global tropical peatland carbon stocks by 36 per cent, to 104.7 petagrams of carbon (minimum estimate of 69.6 petagrams of carbon; maximum estimate of 129.8 petagrams of carbon). This stored carbon is vulnerable to land-use change and any future reduction in precipitation.
Carroll, Carlos; McRae, Brad H; Brookes, Allen
2012-02-01
Centrality metrics evaluate paths between all possible pairwise combinations of sites on a landscape to rank the contribution of each site to facilitating ecological flows across the network of sites. Computational advances now allow application of centrality metrics to landscapes represented as continuous gradients of habitat quality. This avoids the binary classification of landscapes into patch and matrix required by patch-based graph analyses of connectivity. It also avoids the focus on delineating paths between individual pairs of core areas characteristic of most corridor- or linkage-mapping methods of connectivity analysis. Conservation of regional habitat connectivity has the potential to facilitate recovery of the gray wolf (Canis lupus), a species currently recolonizing portions of its historic range in the western United States. We applied 3 contrasting linkage-mapping methods (shortest path, current flow, and minimum-cost-maximum-flow) to spatial data representing wolf habitat to analyze connectivity between wolf populations in central Idaho and Yellowstone National Park (Wyoming). We then applied 3 analogous betweenness centrality metrics to analyze connectivity of wolf habitat throughout the northwestern United States and southwestern Canada to determine where it might be possible to facilitate range expansion and interpopulation dispersal. We developed software to facilitate application of centrality metrics. Shortest-path betweenness centrality identified a minimal network of linkages analogous to those identified by least-cost-path corridor mapping. Current flow and minimum-cost-maximum-flow betweenness centrality identified diffuse networks that included alternative linkages, which will allow greater flexibility in planning. Minimum-cost-maximum-flow betweenness centrality, by integrating both land cost and habitat capacity, allows connectivity to be considered within planning processes that seek to maximize species protection at minimum cost. Centrality analysis is relevant to conservation and landscape genetics at a range of spatial extents, but it may be most broadly applicable within single- and multispecies planning efforts to conserve regional habitat connectivity. ©2011 Society for Conservation Biology.
Evaluation of solar thermal power plants using economic and performance simulations
NASA Technical Reports Server (NTRS)
El-Gabawali, N.
1980-01-01
An energy cost analysis is presented for central receiver power plants with thermal storage and point focusing power plants with electrical storage. The present approach is based on optimizing the size of the plant to give the minimum energy cost (in mills/kWe hr) of an annual plant energy production. The optimization is done by considering the trade-off between the collector field size and the storage capacity for a given engine size. The energy cost is determined by the plant cost and performance. The performance is estimated by simulating the behavior of the plant under typical weather conditions. Plant capital and operational costs are estimated based on the size and performance of different components. This methodology is translated into computer programs for automatic and consistent evaluation.
Wartmann, Flurina M; Purves, Ross S; van Schaik, Carel P
2010-04-01
Quantification of the spatial needs of individuals and populations is vitally important for management and conservation. Geographic information systems (GIS) have recently become important analytical tools in wildlife biology, improving our ability to understand animal movement patterns, especially when very large data sets are collected. This study aims at combining the field of GIS with primatology to model and analyse space-use patterns of wild orang-utans. Home ranges of female orang-utans in the Tuanan Mawas forest reserve in Central Kalimantan, Indonesia were modelled with kernel density estimation methods. Kernel results were compared with minimum convex polygon estimates, and were found to perform better, because they were less sensitive to sample size and produced more reliable estimates. Furthermore, daily travel paths were calculated from 970 complete follow days. Annual ranges for the resident females were approximately 200 ha and remained stable over several years; total home range size was estimated to be 275 ha. On average, each female shared a third of her home range with each neighbouring female. Orang-utan females in Tuanan built their night nest on average 414 m away from the morning nest, whereas average daily travel path length was 777 m. A significant effect of fruit availability on day path length was found. Sexually active females covered longer distances per day and may also temporarily expand their ranges.
Pattanayak, Sujata; Mohanty, U C; Osuri, Krishna K
2012-01-01
The present study is carried out to investigate the performance of different cumulus convection, planetary boundary layer, land surface processes, and microphysics parameterization schemes in the simulation of a very severe cyclonic storm (VSCS) Nargis (2008), developed in the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of weather research and forecasting (WRF) system is used. Model-simulated track positions and intensity in terms of minimum central mean sea level pressure (MSLP), maximum surface wind (10 m), and precipitation are verified with observations as provided by the India Meteorological Department (IMD) and Tropical Rainfall Measurement Mission (TRMM). The estimated optimum combination is reinvestigated with six different initial conditions of the same case to have better conclusion on the performance of WRF-NMM. A few more diagnostic fields like vertical velocity, vorticity, and heat fluxes are also evaluated. The results indicate that cumulus convection play an important role in the movement of the cyclone, and PBL has a crucial role in the intensification of the storm. The combination of Simplified Arakawa Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM give better track and intensity forecast with minimum vector displacement error.
J. N. Kochenderfer; G. W. Wendel; H. Clay Smith
1984-01-01
A "minimum-standard" forest truck road that provides efficient and environmentally acceptable access for several forest activities is described. Cost data are presented for eight of these roads constructed in the central Appalachians. The average cost per mile excluding gravel was $8,119. The range was $5,048 to $14,424. Soil loss was measured from several...
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole`s work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole`s extension of their work provides a representation for minimum-norm estimates of a class of generalized transformsmore » in terms of general correlation data (not just DFT`s of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.« less
NASA Astrophysics Data System (ADS)
Wyss, B. M.; Wyss, M.
2007-12-01
We estimate that the city of Rangoon and adjacent provinces (Rangoon, Rakhine, Ayeryarwady, Bago) represent an earthquake risk similar in severity to that of Istanbul and the Marmara Sea region. After the M9.3 Sumatra earthquake of December 2004 that ruptured to a point north of the Andaman Islands, the likelihood of additional ruptures in the direction of Myanmar and within Myanmar is increased. This assumption is especially plausible since M8.2 and M7.9 earthquakes in September 2007 extended the 2005 ruptures to the south. Given the dense population of the aforementioned provinces, and the fact that historically earthquakes of M7.5 class have occurred there (in 1858, 1895 and three in 1930), it would not be surprising, if similar sized earthquakes would occur in the coming decades. Considering that we predicted the extent of human losses in the M7.6 Kashmir earthquake of October 2005 approximately correctly six month before it occurred, it seems reasonable to attempt to estimate losses in future large to great earthquakes in central Myanmar and along its coast of the Bay of Bengal. We have calculated the expected number of fatalities for two classes of events: (1) M8 ruptures offshore (between the Andaman Islands and the Myanmar coast, and along Myanmar's coast of the Bay of Bengal. (2) M7.5 repeats of the historic earthquakes that occurred in the aforementioned years. These calculations are only order of magnitude estimates because all necessary input parameters are poorly known. The population numbers, the condition of the building stock, the regional attenuation law, the local site amplification and of course the parameters of future earthquakes can only be estimated within wide ranges. For this reason, we give minimum and maximum estimates, both within approximate error limits. We conclude that the M8 earthquakes located offshore are expected to be less harmful than the M7.5 events on land: For M8 events offshore, the minimum number of fatalities is estimated as 700 ± 200 and the maximum is estimated as 13,000 ± 6,000. For repeats of the historic M7.5 or similar earthquakes, the minimum is 4,000 ± 2,000 and the maximum is 63,000 ± 27,000. An exception is a repeat of the M7.5 earthquake of 1895 beneath the capital Rangoon that is estimated to have a population of about 4.7 million. In the case of a repeat of the 1895 event, a minimum of 100,000 and a maximum of 1 106 fatalities would have to be expected. The number of injured can in all cases be assumed to equal about double the number of fatalities. Although it is not very likely that the 1895 event would be repeated in the same location, it is clear that any medium to large earthquake in the vicinity of Rangoon (at a distance similar to the M7.2 earthquake of May 1930) could cause a major disaster with more than 10,000 fatalities. In spite of the uncertainties in these estimates, it is clear that the capital of Myanmar, and the provinces surrounding it, will likely experience major earthquake disasters in the future and the probability that these could occur during the next decades is increased. We conclude that major efforts of mitigation, using earthquake engineering techniques, and preparation for seismological early-warning capabilities should be undertaken in and near Rangoon, as well as in other cities with more than 100,000 inhabitants (e.g., Phatein, Bago and Henzada).
Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.
2009-01-01
With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values for depth levels with geometrically increasing layer thickness with depth which resulted in mean-resistivity values biased towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units. The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
Cosmic Ray Hits in the Central Nervous System at Solar Maximum
NASA Technical Reports Server (NTRS)
Curtis, S. B.; Vazquez, M. E.; Wilson, J. W.; Kim, M.-H. Y.
1997-01-01
It has been suggested that a manned mission to Mars be launched at solar maximum rather than at solar minimum to minimize the radiation exposure to galactic cosmic rays. It is true that the number of hits from highly ionizing particles to critical regions in the brain will be less at solar maximum, and it is of some interest to estimate how much less. We present here calculations for several sites within the brain from iron ions (z = 26) and from particles with charge, z, greater than or equal to 15. The same shielding configurations and sites in the brain used in an earlier paper for solar minimum are employed so that direct comparison of results between the two solar activity conditions can be made. A simple pressure-vessel wall and an equipment room onboard a spacecraft are chosen as shielding examples. In the equipment room, typical results for the thalamus (100 mm2 area) are that the probability of any given cell nucleus being hit decreases from 10 percent at solar minimum to 6 percent at solar maximum for particles with z greater than or equal to 15 and from 2.3 percent to 1.3 percent for iron ions. We conclude that this modest decrease in hit frequency (less than a factor of two) is not a compelling reason to avoid solar minimum for a manned mission to Mars.
12 CFR Appendix M1 to Part 1026 - Repayment Disclosures
Code of Federal Regulations, 2012 CFR
2012-01-01
... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...
12 CFR Appendix M1 to Part 1026 - Repayment Disclosures
Code of Federal Regulations, 2013 CFR
2013-01-01
... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes in the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature compared to daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values. For daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference in estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna where no data is available due to frequent disruptions caused by civil unrests and hostilities in the region during the period, 1984 to 2000.
A financial network perspective of financial institutions' systemic risk contributions
NASA Astrophysics Data System (ADS)
Huang, Wei-Qiang; Zhuang, Xin-Tian; Yao, Shuang; Uryasev, Stan
2016-08-01
This study considers the effects of the financial institutions' local topology structure in the financial network on their systemic risk contribution using data from the Chinese stock market. We first measure the systemic risk contribution with the Conditional Value-at-Risk (CoVaR) which is estimated by applying dynamic conditional correlation multivariate GARCH model (DCC-MVGARCH). Financial networks are constructed from dynamic conditional correlations (DCC) with graph filtering method of minimum spanning trees (MSTs). Then we investigate dynamics of systemic risk contributions of financial institution. Also we study dynamics of financial institution's local topology structure in the financial network. Finally, we analyze the quantitative relationships between the local topology structure and systemic risk contribution with panel data regression analysis. We find that financial institutions with greater node strength, larger node betweenness centrality, larger node closeness centrality and larger node clustering coefficient tend to be associated with larger systemic risk contributions.
Model of human dynamic orientation. Ph.D. Thesis; [associated with vestibular stimuli
NASA Technical Reports Server (NTRS)
Ormsby, C. C.
1974-01-01
The dynamics associated with the perception of orientation were modelled for near-threshold and suprathreshold vestibular stimuli. A model of the information available at the peripheral sensors which was consistent with available neurophysiologic data was developed and served as the basis for the models of the perceptual responses. The central processor was assumed to utilize the information from the peripheral sensors in an optimal (minimum mean square error) manner to produce the perceptual estimates of dynamic orientation. This assumption, coupled with the models of sensory information, determined the form of the model for the central processor. The problem of integrating information from the semi-circular canals and the otoliths to predict the perceptual response to motions which stimulated both organs was studied. A model was developed which was shown to be useful in predicting the perceptual response to multi-sensory stimuli.
NASA Astrophysics Data System (ADS)
Schnick, M.; Füssel, U.; Hertel, M.; Spille-Kohoff, A.; Murphy, A. B.
2010-01-01
A computational model of the argon arc plasma in gas-metal arc welding (GMAW) that includes the influence of metal vapour from the electrode is presented. The occurrence of a central minimum in the radial distributions of temperature and current density is demonstrated. This is in agreement with some recent measurements of arc temperatures in GMAW, but contradicts other measurements and also the predictions of previous models, which do not take metal vapour into account. It is shown that the central minimum is a consequence of the strong radiative emission from the metal vapour. Other effects of the metal vapour, such as the flux of relatively cold vapour from the electrode and the increased electrical conductivity, are found to be less significant. The different effects of metal vapour in gas-tungsten arc welding and GMAW are explained.
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load, and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS are presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
Walling, Bendangtola; Chaudhary, Shushobhit; Dhanya, C T; Kumar, Arun
2017-05-01
Environmental flows (Eflow, hereafter) are the flows to be maintained in the river for its healthy functioning and the sustenance and protection of aquatic ecosystems. Estimation of Eflow in any river stretch demands consideration of various factors such as flow regime, ecosystem, and health of river. However, most of the Eflow estimation studies have neglected the water quality factor. This study urges the need to consider water quality criterion in the estimation of Eflow and proposes a framework for estimating Eflow incorporating water quality variations under present and hypothetical future scenarios of climate change and pollution load. The proposed framework is applied on the polluted stretch of Yamuna River passing through Delhi, India. Required Eflow at various locations along the stretch are determined by considering possible variations in future water quantity and quality. Eflow values satisfying minimum quality requirements for different river water usage classes (classes A, B, C, and D as specified by the Central Pollution Control Board, India) are found to be between 700 and 800 m 3 /s. The estimated Eflow values may aid policymakers to derive upstream storage-release policies or effluent restrictions. Generalized nature of this framework will help its implementation on any river systems.
Two Surface Temperature Retrieval Methods Compared Over Agricultural Lands
NASA Technical Reports Server (NTRS)
French, Andrew N.; Schmugge, Thomas J.; Jacob, Frederic; Ogawa, Kenta; Houser, Paul R. (Technical Monitor)
2002-01-01
Accurate, spatially distributed surface temperatures are required for modeling evapotranspiration (ET) over agricultural fields under wide ranging conditions, including stressed and unstressed vegetation. Modeling approaches that use surface temperature observations, however, have the burden of estimating surface emissivities. Emissivity estimation, the subject of much recent research, is facilitated by observations in multiple thermal infrared bands. But it is nevertheless a difficult task. Using observations from a multiband thermal sensor, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), estimated surface emissivities and temperatures are retrieved in two different ways: the temperature emissivity separation approach (TES) and the normalized emissivity approach (NEM). Both rely upon empirical relationships, but the assumed relationships are different. TES relies upon a relationship between the minimum spectral emissivity and the range of observed emissivities. NEM relies upon an assumption that at least one thermal band has a pre-determined emissivity (close to 1.0). The benefits and consequences of each approach will be demonstrated for two different landscapes: one in central Oklahoma, USA and another in southern New Mexico.
A Simple Criterion to Estimate Performance of Pulse Jet Mixed Vessels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pease, Leonard F.; Bamberger, Judith A.; Mahoney, Lenna A.
Pulse jet mixed process vessels comprise a key element of the U.S. Department of Energy’s strategy to process millions of gallons of legacy nuclear waste slurries. Slurry suctioned into a pulse jet mixer (PJM) tube at the end of one pulse is pneumatically driven from the PJM toward the bottom of the vessel at the beginning of the next pulse, forming a jet. The jet front traverses the distance from nozzle outlet to the bottom of the vessel and spreads out radially. Varying numbers of PJMs are typically arranged in a ring configuration within the vessel at a selected radiusmore » and operated concurrently. Centrally directed radial flows from neighboring jets collide to create a central upwell that elevates the solids in the center of the vessel when the PJM tubes expel their contents. An essential goal of PJM operation is to elevate solids to the liquid surface to minimize stratification. Solids stratification may adversely affect throughput of the waste processing plant. Unacceptably high slurry densities at the base of the vessel may plug the pipeline through which the slurry exits the vessel. Additionally, chemical reactions required for processing may not achieve complete conversion. To avoid these conditions, a means of predicting the elevation to which the solids rise in the central upwell that can be used during vessel design remains essential. In this paper we present a simple criterion to evaluate the extent of solids elevation achieved by a turbulent upwell jet. The criterion asserts that at any location in the central upwell the local velocity must be in excess of a cutoff velocity to remain turbulent. We find that local velocities in excess of 0.6 m/s are necessary for turbulent jet flow through both Newtonian and yield stress slurries. By coupling this criterion with the free jet velocity equation relating the local velocity to elevation in the central upwell, we estimate the elevation at which turbulence fails, and consequently the elevation at which the upwell fails to further lift the slurry. Comparing this elevation to the vessel fill level predicts whether the jet flow will achieve the full vertical extent of the vessel at the center. This simple local-velocity criterion determines a minimum PJM nozzle velocity at which the full vertical extent of the central upwell in PJM vessels will be turbulent. The criterion determines a minimum because flow in regions peripheral to the central upwelling jet may not be turbulent, even when the center of the vessel in the upwell is turbulent, if the jet pulse duration is too short. The local-velocity criterion ensures only that there is sufficient wherewithal for the turbulent jet flow to drive solids to the surface in the center of the vessel in the central upwell.« less
Yaslioglu, Erkan; Simsek, Ercan; Kilic, Ilker
2007-04-15
In the study, 10 different dairy cattle barns with natural ventilation system were investigated in terms of structural aspects. VENTGRAPH software package was used to estimate minimum ventilation requirements for three different outdoor design temperatures (-3, 0 and 1.7 degrees C). Variation in indoor temperatures was also determined according to the above-mentioned conditions. In the investigated dairy cattle barns, on condition that minimum ventilation requirement to be achieved for -3, 0 and 1.7 degrees C outdoor design temperature and 70, 80% Indoor Relative Humidity (IRH), estimated indoor temperature were ranged from 2.2 to 12.2 degrees C for 70% IRH, 4.3 to 15.0 degrees C for 80% IRH. Barn type, outdoor design temperature and indoor relative humidity significantly (p < 0.01) affect the indoor temperature. The highest ventilation requirement was calculated for straw yard (13879 m3 h(-1)) while the lowest was estimated for tie-stall (6169.20 m3 h(-1)). Estimated minimum ventilation requirements per animal were significantly (p < 0.01) different according to the barn types. Effect of outdoor esign temperatures on minimum ventilation requirements and minimum ventilation requirements per animal was found to be significant (p < 0.05, p < 0.01). Estimated indoor temperatures were in thermoneutral zone (-2 to 20 degrees C). Therefore, one can be said that use of naturally ventilated cold dairy barns in the region will not lead to problems associated with animal comfort in winter.
Pattanayak, Sujata; Mohanty, U. C.; Osuri, Krishna K.
2012-01-01
The present study is carried out to investigate the performance of different cumulus convection, planetary boundary layer, land surface processes, and microphysics parameterization schemes in the simulation of a very severe cyclonic storm (VSCS) Nargis (2008), developed in the central Bay of Bengal on 27 April 2008. For this purpose, the nonhydrostatic mesoscale model (NMM) dynamic core of weather research and forecasting (WRF) system is used. Model-simulated track positions and intensity in terms of minimum central mean sea level pressure (MSLP), maximum surface wind (10 m), and precipitation are verified with observations as provided by the India Meteorological Department (IMD) and Tropical Rainfall Measurement Mission (TRMM). The estimated optimum combination is reinvestigated with six different initial conditions of the same case to have better conclusion on the performance of WRF-NMM. A few more diagnostic fields like vertical velocity, vorticity, and heat fluxes are also evaluated. The results indicate that cumulus convection play an important role in the movement of the cyclone, and PBL has a crucial role in the intensification of the storm. The combination of Simplified Arakawa Schubert (SAS) convection, Yonsei University (YSU) PBL, NMM land surface, and Ferrier microphysics parameterization schemes in WRF-NMM give better track and intensity forecast with minimum vector displacement error. PMID:22701366
Annual Estimated Minimum School Program of Utah School Districts, 1984-85.
ERIC Educational Resources Information Center
Utah State Office of Education, Salt Lake City. School Finance and Business Section.
This bulletin presents both the statistical and financial data of the Estimated Annual State-Supported Minimum School Program for the 40 school districts of the State of Utah for the 1984-85 school year. It is published for the benefit of those interested in research into the minimum school programs of the various Utah school districts. A brief…
Strauch, Kellan R.; Linard, Joshua I.
2009-01-01
The U.S. Geological Survey, in cooperation with the Upper Elkhorn, Lower Elkhorn, Upper Loup, Lower Loup, Middle Niobrara, Lower Niobrara, Lewis and Clark, and Lower Platte North Natural Resources Districts, used the Soil and Water Assessment Tool to simulate streamflow and estimate percolation in north-central Nebraska to aid development of long-term strategies for management of hydrologically connected ground and surface water. Although groundwater models adequately simulate subsurface hydrologic processes, they often are not designed to simulate the hydrologically complex processes occurring at or near the land surface. The use of watershed models such as the Soil and Water Assessment Tool, which are designed specifically to simulate surface and near-subsurface processes, can provide helpful insight into the effects of surface-water hydrology on the groundwater system. The Soil and Water Assessment Tool was calibrated for five stream basins in the Elkhorn-Loup Groundwater Model study area in north-central Nebraska to obtain spatially variable estimates of percolation. Six watershed models were calibrated to recorded streamflow in each subbasin by modifying the adjustment parameters. The calibrated parameter sets were then used to simulate a validation period; the validation period was half of the total streamflow period of record with a minimum requirement of 10 years. If the statistical and water-balance results for the validation period were similar to those for the calibration period, a model was considered satisfactory. Statistical measures of each watershed model's performance were variable. These objective measures included the Nash-Sutcliffe measure of efficiency, the ratio of the root-mean-square error to the standard deviation of the measured data, and an estimate of bias. The model met performance criteria for the bias statistic, but failed to meet statistical adequacy criteria for the other two performance measures when evaluated at a monthly time step. A primary cause of the poor model validation results was the inability of the model to reproduce the sustained base flow and streamflow response to precipitation that was observed in the Sand Hills region. The watershed models also were evaluated based on how well they conformed to the annual mass balance (precipitation equals the sum of evapotranspiration, streamflow/runoff, and deep percolation). The model was able to adequately simulate annual values of evapotranspiration, runoff, and precipitation in comparison to reported values, which indicates the model may provide reasonable estimates of annual percolation. Mean annual percolation estimated by the model as basin averages varied within the study area from a maximum of 12.9 inches in the Loup River Basin to a minimum of 1.5 inches in the Shell Creek Basin. Percolation also varied within the studied basins; basin headwaters tended to have greater percolation rates than downstream areas. This variance in percolation rates was mainly was because of the predominance of sandy, highly permeable soils in the upstream areas of the modeled basins.
Elastohydrodynamic lubrication of point contacts. Ph.D. Thesis - Leeds Univ.
NASA Technical Reports Server (NTRS)
Hamrock, B. J.
1976-01-01
A procedure for the numerical solution of the complete, isothermal, elastohydrodynamic lubrication problem for point contacts is given. This procedure calls for the simultaneous solution of the elasticity and Reynolds equations. By using this theory the influence of the ellipticity parameter and the dimensionless speed, load, and material parameters on the minimum and central film thicknesses was investigated. Thirty-four different cases were used in obtaining the fully flooded minimum- and central-film-thickness formulas. Lubricant starvation was also studied. From the results it was possible to express the minimum film thickness for a starved condition in terms of the minimum film thickness for a fully flooded condition, the speed parameter, and the inlet distance. Fifteen additional cases plus three fully flooded cases were used in obtaining this formula. Contour plots of pressure and film thickness in and around the contact have been presented for both fully flooded and starved lubrication conditions.
A Contrastive Study of Determiner Usage in EST Research Articles
ERIC Educational Resources Information Center
Master, Peter
1993-01-01
The determiners in English include three categories: predeterminers, central determiners, and postdeterminers. The focus of the present study is the central determiners because they comprise the largest group and because a minimum of one central determiner is required in the generation of any noun phrase. Furthermore, the central determiners have…
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends indeed on three source parameters, the seismic moment (through the low-frequency spectral level), the corner frequency (that is a proxy of the fault length) and the high-frequency decay parameter. These parameters are strongly correlated each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimations.In this work, the uncertainties are characterized adopting a probabilistic approach for the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function and then we explore the cost-function associated joint a-posteriori probability density function around such a minimum, to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost-function. The numerical integration of the pdf finally provides mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
Time trends in minimum mortality temperatures in Castile-La Mancha (Central Spain): 1975-2003
NASA Astrophysics Data System (ADS)
Miron, Isidro J.; Criado-Alvarez, Juan José; Diaz, Julio; Linares, Cristina; Mayoral, Sheila; Montero, Juan Carlos
2008-03-01
The relationship between air temperature and human mortality is described as non-linear, with mortality tending to rise in response to increasingly hot or cold ambient temperatures from a given minimum mortality or optimal comfort temperature, which varies from some areas to others according to their climatic and socio-demographic characteristics. Changes in these characteristics within any specific region could modify this relationship. This study sought to examine the time trend in the maximum temperature of minimum organic-cause mortality in Castile-La Mancha, from 1975 to 2003. The analysis was performed by using daily series of maximum temperatures and organic-cause mortality rates grouped into three decades (1975-1984, 1985-1994, 1995-2003) to compare confidence intervals ( p < 0.05) obtained by estimating the 10-yearly mortality rates corresponding to the maximum temperatures of minimum mortality calculated for each decade. Temporal variations in the effects of cold and heat on mortality were ascertained by means of ARIMA models (Box-Jenkins) and cross-correlation functions (CCF) at seven lags. We observed a significant decrease in comfort temperature (from 34.2°C to 27.8°C) between the first two decades in the Province of Toledo, along with a growing number of significant lags in the summer CFF (1, 3 and 5, respectively). The fall in comfort temperature is attributable to the increase in the effects of heat on mortality, due, in all likelihood, to the percentage increase in the elderly population.
da Silva, Cleyton Martins; da Silva, Luane Lima; Corrêa, Sergio Machado; Arbilla, Graciela
2016-12-01
Volatile organic compounds (VOCs) play a central role in atmospheric chemistry. In this work, the kinetic and mechanistic reactivities of VOCs are analyzed, and the contribution of the organic compounds emitted by anthropogenic and natural sources is estimated. VOCs react with hydroxyl radicals and other photochemical oxidants, such as ozone and nitrate radicals, which cause the conversion of NO to NO 2 in various potential reaction paths, including photolysis, to form oxygen atoms, which generate ozone. The kinetic reactivity was evaluated based on the reaction coefficients for hydroxyl radicals with VOCs. The mechanistic reactivity was estimated using a detailed mechanism and the incremental reactivity scale that Carter proposed. Different scenarios were proposed and discussed, and a minimum set of compounds, which may describe the tropospheric reactivity in the studied area, was determined. The role of isoprene was analyzed in terms of its contribution to ozone formation.
Ionospheric responses during equinox and solstice periods over Turkey
NASA Astrophysics Data System (ADS)
Karatay, Secil; Cinar, Ali; Arikan, Feza
2017-11-01
Ionospheric electron density is the determining variable for investigation of the spatial and temporal variations in the ionosphere. Total Electron Content (TEC) is the integral of the electron density along a ray path that indicates the total variability through the ionosphere. Global Positioning System (GPS) recordings can be utilized to estimate the TEC, thus GPS proves itself as a useful tool in monitoring the total variability of electron distribution within the ionosphere. This study focuses on the analysis of the variations of ionosphere over Turkey that can be grouped into anomalies during equinox and solstice periods using TEC estimates obtained by a regional GPS network. It is observed that noon time depletions in TEC distributions predominantly occur in winter for minimum Sun Spots Numbers (SSN) in the central regions of Turkey which also exhibit high variability due to midlatitude winter anomaly. TEC values and ionospheric variations at solstice periods demonstrate significant enhancements compared to those at equinox periods.
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e. , assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing to discuss the efficiency of inference. As a by product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandits problems. A simulation study illustrates the procedure. One of the corner-stones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
NASA Astrophysics Data System (ADS)
Maurer, E. P.; Stewart, I. T.; Sundstrom, W.; Bacon, C. M.
2016-12-01
In addition to periodic long-term drought, much of Central America experiences a rainy season with two peaks separated by a dry period of weeks to over a month in duration, termed the mid-summer drought (MSD). Food and water security for smallholder farmers in the region hinge on accommodating this phenomenon, anticipating its arrival and estimating its duration. Model output from 1980 through the late 21st century projects changes in precipitation amount, variability, and timing, with potential to affect regional food production. Using surveys of farmer experiences in conjunction with gridded daily precipitation for a historic period on multiple scales, and with projections through the 21st century, we characterize the MSD across much of Central America using four measures: onset date, duration, intensity, and minimum, and test for significant changes. Our findings indicate that the most significant changes are for the duration, which, by the end of the century, is projected to increase by an average of over a week, and the MSD minimum precipitation, which is projected to decline by an average of over 26%, with statistically significant changes for most of Nicaragua, Honduras, El Salvador, and Guatemala (assuming a higher emissions pathway through the 21st century). These changes toward a longer and drier MSD have important implications for food and water security for vulnerable communities through the region. We find that for the four metrics the changes in interannual variability are small compared to historical variability, and are generally statistically insignificant. New farmer survey results are compared to findings from our climate analysis for the historic period, are used to interpret what MSD characteristics are of greatest interest locally, and are used for the development of adaptation strategies.
46 CFR 62.25-20 - Instrumentation, alarms, and centralized stations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Instrumentation, alarms, and centralized stations. 62.25... Instrumentation, alarms, and centralized stations. (a) General. Minimum instrumentation and alarms required for specific types of automated vital systems are listed in Table 62.35-50. (b) Instrumentation Location. (1...
46 CFR 62.25-20 - Instrumentation, alarms, and centralized stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Instrumentation, alarms, and centralized stations. 62.25... Instrumentation, alarms, and centralized stations. (a) General. Minimum instrumentation and alarms required for specific types of automated vital systems are listed in Table 62.35-50. (b) Instrumentation Location. (1...
46 CFR 62.25-20 - Instrumentation, alarms, and centralized stations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Instrumentation, alarms, and centralized stations. 62.25... Instrumentation, alarms, and centralized stations. (a) General. Minimum instrumentation and alarms required for specific types of automated vital systems are listed in Table 62.35-50. (b) Instrumentation Location. (1...
46 CFR 62.25-20 - Instrumentation, alarms, and centralized stations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Instrumentation, alarms, and centralized stations. 62.25... Instrumentation, alarms, and centralized stations. (a) General. Minimum instrumentation and alarms required for specific types of automated vital systems are listed in Table 62.35-50. (b) Instrumentation Location. (1...
Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin
2015-01-01
Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate population means. The fitting procedure is illustrated from data used to estimate Missouri corn acreage.
30 CFR 1206.151 - Definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... any production month, a contract must satisfy this definition for that month, as well as when the... lease production to a central accumulation and/or treatment point on the lease, unit or communitized... the lessee's production and to market that production. Minimum royalty means that minimum amount of...
30 CFR 1206.151 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... interests regarding that contract. To be considered arm's length for any production month, a contract must.... Gathering means the movement of lease production to a central accumulation and/or treatment point on the... the lessee's production and to market that production. Minimum royalty means that minimum amount of...
30 CFR 1206.151 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... interests regarding that contract. To be considered arm's length for any production month, a contract must.... Gathering means the movement of lease production to a central accumulation and/or treatment point on the... the lessee's production and to market that production. Minimum royalty means that minimum amount of...
Code of Federal Regulations, 2010 CFR
2010-07-01
... interests regarding that contract. To be considered arm's length for any production month, a contract must.... Gathering means the movement of lease production to a central accumulation and/or treatment point on the... the lessee's production and to market that production. Minimum royalty means that minimum amount of...
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
UNCLASSIFIED/UNLIMITED 13. SUPPLEMENTARY NOTES 14. ABSTRACT MMSE (minimum mean- square error) target sample estimation using non-orthogonal basis...orthogonal, they can still be used in a minimum mean‐ square error (MMSE) estimator that models the object echo as a weighted sum of the multi‐aspect basis...problem. 3 Introduction Minimum mean‐ square error (MMSE) estimation is applied to target imaging with synthetic aperture
Quadroni, Silvia; Crosa, Giuseppe; Gentili, Gaetano; Espa, Paolo
2017-12-31
The present work focuses on evaluating the ecological effects of hydropower-induced streamflow alteration within four catchments in the central Italian Alps. Downstream from the water diversions, minimum flows are released as an environmental protection measure, ranging approximately from 5 to 10% of the mean annual natural flow estimated at the intake section. Benthic macroinvertebrates as well as daily averaged streamflow were monitored for five years at twenty regulated stream reaches, and possible relationships between benthos-based stream quality metrics and environmental variables were investigated. Despite the non-negligible inter-site differences in basic streamflow metrics, benthic macroinvertebrate communities were generally dominated by few highly resilient taxa. The highest level of diversity was detected at sites where upstream minimum flow exceedance is higher and further anthropogenic pressures (other than hydropower) are lower. However, according to the current Italian normative index, the ecological quality was good/high on average at all of the investigated reaches, thus complying the Water Framework Directive standards. Copyright © 2017 Elsevier B.V. All rights reserved.
Burgess, George H.; Bruce, Barry D.; Cailliet, Gregor M.; Goldman, Kenneth J.; Grubbs, R. Dean; Lowe, Christopher G.; MacNeil, M. Aaron; Mollet, Henry F.; Weng, Kevin C.; O'Sullivan, John B.
2014-01-01
White sharks are highly migratory and segregate by sex, age and size. Unlike marine mammals, they neither surface to breathe nor frequent haul-out sites, hindering generation of abundance data required to estimate population size. A recent tag-recapture study used photographic identifications of white sharks at two aggregation sites to estimate abundance in “central California” at 219 mature and sub-adult individuals. They concluded this represented approximately one-half of the total abundance of mature and sub-adult sharks in the entire eastern North Pacific Ocean (ENP). This low estimate generated great concern within the conservation community, prompting petitions for governmental endangered species designations. We critically examine that study and find violations of model assumptions that, when considered in total, lead to population underestimates. We also use a Bayesian mixture model to demonstrate that the inclusion of transient sharks, characteristic of white shark aggregation sites, would substantially increase abundance estimates for the adults and sub-adults in the surveyed sub-population. Using a dataset obtained from the same sampling locations and widely accepted demographic methodology, our analysis indicates a minimum all-life stages population size of >2000 individuals in the California subpopulation is required to account for the number and size range of individual sharks observed at the two sampled sites. Even accounting for methodological and conceptual biases, an extrapolation of these data to estimate the white shark population size throughout the ENP is inappropriate. The true ENP white shark population size is likely several-fold greater as both our study and the original published estimate exclude non-aggregating sharks and those that independently aggregate at other important ENP sites. Accurately estimating the central California and ENP white shark population size requires methodologies that account for biases introduced by sampling a limited number of sites and that account for all life history stages across the species' range of habitats. PMID:24932483
Burgess, George H; Bruce, Barry D; Cailliet, Gregor M; Goldman, Kenneth J; Grubbs, R Dean; Lowe, Christopher G; MacNeil, M Aaron; Mollet, Henry F; Weng, Kevin C; O'Sullivan, John B
2014-01-01
White sharks are highly migratory and segregate by sex, age and size. Unlike marine mammals, they neither surface to breathe nor frequent haul-out sites, hindering generation of abundance data required to estimate population size. A recent tag-recapture study used photographic identifications of white sharks at two aggregation sites to estimate abundance in "central California" at 219 mature and sub-adult individuals. They concluded this represented approximately one-half of the total abundance of mature and sub-adult sharks in the entire eastern North Pacific Ocean (ENP). This low estimate generated great concern within the conservation community, prompting petitions for governmental endangered species designations. We critically examine that study and find violations of model assumptions that, when considered in total, lead to population underestimates. We also use a Bayesian mixture model to demonstrate that the inclusion of transient sharks, characteristic of white shark aggregation sites, would substantially increase abundance estimates for the adults and sub-adults in the surveyed sub-population. Using a dataset obtained from the same sampling locations and widely accepted demographic methodology, our analysis indicates a minimum all-life stages population size of >2000 individuals in the California subpopulation is required to account for the number and size range of individual sharks observed at the two sampled sites. Even accounting for methodological and conceptual biases, an extrapolation of these data to estimate the white shark population size throughout the ENP is inappropriate. The true ENP white shark population size is likely several-fold greater as both our study and the original published estimate exclude non-aggregating sharks and those that independently aggregate at other important ENP sites. Accurately estimating the central California and ENP white shark population size requires methodologies that account for biases introduced by sampling a limited number of sites and that account for all life history stages across the species' range of habitats.
NASA Technical Reports Server (NTRS)
Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.
1996-01-01
Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 30OR(sup 1.4) (WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power law Z-R was found to he highly biased (+90%-l10%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are discussed.
Zhang, Yu-Xiu; Jin, Xin; Zhang, Kai-Jun; Sun, Wei-Dong; Liu, Jian-Ming; Zhou, Xiao-Yao; Yan, Li-Long
2018-01-17
The Triassic eclogite-bearing central Qiangtang metamorphic belt (CQMB) in the northern Tibetan Plateau has been debated whether it is a metamorphic core complex underthrust from the Jinsha Paleo-Tethys or an in-situ Shuanghu suture. The CQMB is thus a key issue to elucidate the crustal architecture of the northern Tibetan Plateau, the tectonics of the eastern Tethys, and the petrogenesis of Cenozoic high-K magmatism. We here report the newly discovered Baqing eclogite along the eastern extension of the CQMB near the Baqing town, central Tibet. These eclogites are characterized by the garnet + omphacite + rutile + phengite + quartz assemblages. Primary eclogite-facies metamorphic pressure-temperature estimates yield consistent minimum pressure of 25 ± 1 kbar at 730 ± 60 °C. U-Pb dating on zircons that contain inclusions (garnet + omphacite + rutile + phengite) gave eclogite-facies metamorphic ages of 223 Ma. The geochemical continental crustal signature and the presence of Paleozoic cores in the zircons indicate that the Baqing eclogite formed by continental subduction and marks an eastward-younging anticlockwise West-East Qiangtang collision along the Shuanghu suture from the Middle to Late Triassic.
The use of Leptodyctium riparium (Hedw.) Warnst in the estimation of minimum postmortem interval.
Lancia, Massimo; Conforti, Federica; Aleffi, Michele; Caccianiga, Marco; Bacci, Mauro; Rossi, Riccardo
2013-01-01
The estimation of the postmortem interval (PMI) is still one of the most challenging issues in forensic investigations, especially in cases in which advanced transformative phenomena have taken place. The dating of skeletal remains is even more difficult and sometimes only a rough determination of the PMI is possible. Recent studies suggest that plant analysis can provide a reliable estimation for skeletal remains dating, when traditional techniques are not applicable. Forensic Botany is a relatively recent discipline that includes many subdisciplines such as Palynology, Anatomy, Dendrochronology, Limnology, Systematic, Ecology, and Molecular Biology. In a recent study, Cardoso et al. (Int J Legal Med 2010;124:451) used botanical evidence for the first time to establish the PMI of human skeletal remains found in a forested area of northern Portugal from the growth rate of mosses and shrub roots. The present paper deals with a case in which the study of the growth rate of the bryophyte Leptodyctium riparium (Hedw.) Warnst, was used in estimating the PMI of some human skeletal remains that were found in a wooded area near Perugia, in Central Italy. © 2012 American Academy of Forensic Sciences.
Green, W. Reed; Galloway, Joel M.; Richards, Joseph M.; Wesolowski, Edwin A.
2003-01-01
Outflow from Table Rock Lake and other White River reservoirs support a cold-water trout fishery of substantial economic yield in south-central Missouri and north-central Arkansas. The Missouri Department of Conservation has requested an increase in existing minimum flows through the Table Rock Lake Dam from the U.S. Army Corps of Engineers to increase the quality of fishable waters downstream in Lake Taneycomo. Information is needed to assess the effect of increased minimum flows on temperature and dissolved- oxygen concentrations of reservoir water and the outflow. A two-dimensional, laterally averaged, hydrodynamic, temperature, and dissolved-oxygen model, CE-QUAL-W2, was developed and calibrated for Table Rock Lake, located in Missouri, north of the Arkansas-Missouri State line. The model simulates water-surface elevation, heat transport, and dissolved-oxygen dynamics. The model was developed to assess the effects of proposed increases in minimum flow from about 4.4 cubic meters per second (the existing minimum flow) to 11.3 cubic meters per second (the increased minimum flow). Simulations included assessing the effect of (1) increased minimum flows and (2) increased minimum flows with increased water-surface elevations in Table Rock Lake, on outflow temperatures and dissolved-oxygen concentrations. In both minimum flow scenarios, water temperature appeared to stay the same or increase slightly (less than 0.37 ?C) and dissolved oxygen appeared to decrease slightly (less than 0.78 mg/L) in the outflow during the thermal stratification season. However, differences between the minimum flow scenarios for water temperature and dissolved- oxygen concentration and the calibrated model were similar to the differences between measured and simulated water-column profile values.
Brennan, Alan; Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S
2014-09-30
To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Modelling study using the Sheffield Alcohol Policy Model version 2.5. England 2014-15. Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45 p, and 50 p per unit (7.9 g/10 mL) of pure alcohol. Changes in mean consumption in terms of units of alcohol, drinkers' expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45 p minimum unit price. Below cost selling is estimated to reduce harmful drinkers' mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45 p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health-saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45 p minimum unit price is estimated to save 624 deaths and 23,700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. The ban on below cost selling, implemented in the England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40 p and 50 p per unit, is estimated to have an approximately 40-50 times greater effect. © Brennan et al 2014.
Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S
2014-01-01
Objective To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Design Modelling study using the Sheffield Alcohol Policy Model version 2.5. Setting England 2014-15. Population Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Interventions Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45p, and 50p per unit (7.9 g/10 mL) of pure alcohol. Main outcome measures Changes in mean consumption in terms of units of alcohol, drinkers’ expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. Results The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45p minimum unit price. Below cost selling is estimated to reduce harmful drinkers’ mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health—saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45p minimum unit price is estimated to save 624 deaths and 23 700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. Conclusions The ban on below cost selling, implemented in the England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40-50 times greater effect. PMID:25270743
Prevalence of autosomal dominant polycystic kidney disease in the European Union.
Willey, Cynthia J; Blais, Jaime D; Hall, Anthony K; Krasa, Holly B; Makin, Andrew J; Czerwiec, Frank S
2017-08-01
Autosomal dominant polycystic kidney disease (ADPKD) is a leading cause of end-stage renal disease, but estimates of its prevalence vary by >10-fold. The objective of this study was to examine the public health impact of ADPKD in the European Union (EU) by estimating minimum prevalence (point prevalence of known cases) and screening prevalence (minimum prevalence plus cases expected after population-based screening). A review of the epidemiology literature from January 1980 to February 2015 identified population-based studies that met criteria for methodological quality. These examined large German and British populations, providing direct estimates of minimum prevalence and screening prevalence. In a second approach, patients from the 2012 European Renal Association‒European Dialysis and Transplant Association (ERA-EDTA) Registry and literature-based inflation factors that adjust for disease severity and screening yield were used to estimate prevalence across 19 EU countries (N = 407 million). Population-based studies yielded minimum prevalences of 2.41 and 3.89/10 000, respectively, and corresponding estimates of screening prevalences of 3.3 and 4.6/10 000. A close correspondence existed between estimates in countries where both direct and registry-derived methods were compared, which supports the validity of the registry-based approach. Using the registry-derived method, the minimum prevalence was 3.29/10 000 (95% confidence interval 3.27-3.30), and if ADPKD screening was implemented in all countries, the expected prevalence was 3.96/10 000 (3.94-3.98). ERA-EDTA-based prevalence estimates and application of a uniform definition of prevalence to population-based studies consistently indicate that the ADPKD point prevalence is <5/10 000, the threshold for rare disease in the EU. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA.
Climate influence on Baltic cod, sprat, and herring stock-recruitment relationships
NASA Astrophysics Data System (ADS)
Margonski, Piotr; Hansson, Sture; Tomczak, Maciej T.; Grzebielec, Ryszard
2010-10-01
A wide range of possible recruitment drivers were tested for key exploited fish species in the Baltic Sea Regional Advisory Council (RAC) area: Eastern Baltic Cod, Central Baltic Herring, Gulf of Riga Herring, and sprat. For each of the stocks, two hypotheses were tested: (i) recruitment is significantly related to spawning stock biomass, climatic forcing, and feeding conditions and (ii) by acknowledging these drivers, management decisions can be improved. Climate impact expressed by climatic indices or changes in water temperature was included in all the final models. Recruitment of the herring stock appeared to be influenced by different factors: the spawning stock biomass, winter Baltic Sea Index prior to spawning, and potentially the November-December sea surface temperature during the winter after spawning were important to Gulf of Riga Herring, while the final models for Central Baltic Herring included spawning stock biomass and August sea surface temperature. Recruitment of sprat appeared to be influenced by July-August temperature, but was independent of the spawning biomass when SSB > 200,000 tons. Recruitment of Eastern Baltic Cod was significantly related to spawning stock biomass, the winter North Atlantic Oscillation index, and the reproductive volume in the Gotland Basin in May. All the models including extrinsic factors significantly improved prediction ability as compared to traditional models, which account for impacts of the spawning stock biomass alone. Based on the final models the minimum spawning stock biomass to derive the associated minimum recruitment under average environmental conditions was calculated for each stock. Using uncertainty analyses, the spawning stock biomass required to produce associated minimum recruitment was presented with different probabilities considering the influence of the extrinsic drivers. This tool allows for recruitment to be predicted with a required probability, that is, higher than the average 50% estimated from the models. Further, this approach considers unfavorable environmental conditions which mean that a higher spawning stock biomass is needed to maintain recruitment at a required level.
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. ?? CSIRO 2008.
Minimum viable populations: Is there a 'magic number' for conservation practitioners?
Curtis H. Flather; Gregory D. Hayward; Steven R. Beissinger; Philip A. Stephens
2011-01-01
Establishing species conservation priorities and recovery goals is often enhanced by extinction risk estimates. The need to set goals, even in data-deficient situations, has prompted researchers to ask whether general guidelines could replace individual estimates of extinction risk. To inform conservation policy, recent studies have revived the concept of the minimum...
Analysis of the sensitivity of soils to the leaching of agricultural pesticides in Ohio
Schalk, C.W.
1998-01-01
Pesticides have not been found frequently in the ground waters of Ohio even though large amounts of agricultural pesticides are applied to fields in Ohio every year. State regulators, including representatives from Ohio Environmental Protection Agency and Departments of Agriculture, Health, and Natural Resources, are striving to limit the presence of pesticides in ground water at a minimum. A proposed pesticide management plan for the State aims at protecting Ohio's ground water by assessing pesticide-leaching potential using geographic information system (GIS) technology and invoking a monitoring plan that targets aquifers deemed most likely to be vulnerable to pesticide leaching. The U.S. Geological Survey, in cooperation with Ohio Department of Agriculture, assessed the sensitivity of mapped soil units in Ohio to pesticide leaching. A soils data base (STATSGO) compiled by U.S. Department of Agriculture was used iteratively to estimate soil units as being of high to low sensitivity on the basis of soil permeability, clay content, and organic-matter content. Although this analysis did not target aquifers directly, the results can be used as a first estimate of areas most likely to be subject to pesticide contamination from normal agricultural practices. High-sensitivity soil units were found in lakefront areas and former lakefront beach ridges, buried valleys in several river basins, and parts of central and south- central Ohio. Medium-high-sensitivity soil units were found in other river basins, along Lake Erie in north-central Ohio, and in many of the upland areas of the Muskingum River Basin. Low-sensitivity map units dominated the northwestern quadrant of Ohio.
Minimum Wage Effects on Educational Enrollments in New Zealand
ERIC Educational Resources Information Center
Pacheco, Gail A.; Cruickshank, Amy A.
2007-01-01
This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…
Employment Effects of Minimum and Subminimum Wages. Recent Evidence.
ERIC Educational Resources Information Center
Neumark, David
Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…
Does the Minimum Wage Affect Welfare Caseloads?
ERIC Educational Resources Information Center
Page, Marianne E.; Spetz, Joanne; Millar, Jane
2005-01-01
Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…
Minimum area requirements for an at-risk butterfly based on movement and demography.
Brown, Leone M; Crone, Elizabeth E
2016-02-01
Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
Lee, Kyungmin; Chung, Heeyoung; Park, Youngsuk
2014-01-01
Purpose To determine if short term effects of intravitreal anti-vascular endothelial growth factor or steroid injection are correlated with fluid turbidity, as detected by spectral domain optical coherence tomography (SD-OCT) in diabetic macular edema (DME) patients. Methods A total of 583 medical records were reviewed and 104 cases were enrolled. Sixty eyes received a single intravitreal bevacizumab injection (IVB) on the first attack of DME and 44 eyes received triamcinolone acetonide treatment (IVTA). Intraretinal fluid turbidity in DME patients was estimated with initialintravitreal SD-OCT and analyzed with color histograms from a Photoshop program. Central macular thickness and visual acuity using a logarithm from the minimum angle of resolution chart, were assessed at the initial period and 2 months after injections. Results Visual acuity and central macular thickness improved after injections in both groups. In the IVB group, visual acuity and central macular thickness changed less as the intraretinal fluid became more turbid. In the IVTA group, visual acuity underwent less change while central macular thickness had a greater reduction (r = -0.675, p = 0.001) as the intraretinal fluid was more turbid. Conclusions IVB and IVTA injections were effective in reducing central macular thickness and improving visual acuity in DME patients. Further, fluid turbidity, which was detected by SD-OCT may be one of the indexes that highlight the influence of the steroid-dependent pathogenetic mechanism. PMID:25120338
Lee, Kyungmin; Chung, Heeyoung; Park, Youngsuk; Sohn, Joonhong
2014-08-01
To determine if short term effects of intravitreal anti-vascular endothelial growth factor or steroid injection are correlated with fluid turbidity, as detected by spectral domain optical coherence tomography (SD-OCT) in diabetic macular edema (DME) patients. A total of 583 medical records were reviewed and 104 cases were enrolled. Sixty eyes received a single intravitreal bevacizumab injection (IVB) on the first attack of DME and 44 eyes received triamcinolone acetonide treatment (IVTA). Intraretinal fluid turbidity in DME patients was estimated with initial intravitreal SD-OCT and analyzed with color histograms from a Photoshop program. Central macular thickness and visual acuity using a logarithm from the minimum angle of resolution chart, were assessed at the initial period and 2 months after injections. Visual acuity and central macular thickness improved after injections in both groups. In the IVB group, visual acuity and central macular thickness changed less as the intraretinal fluid became more turbid. In the IVTA group, visual acuity underwent less change while central macular thickness had a greater reduction (r = -0.675, p = 0.001) as the intraretinal fluid was more turbid. IVB and IVTA injections were effective in reducing central macular thickness and improving visual acuity in DME patients. Further, fluid turbidity, which was detected by SD-OCT may be one of the indexes that highlight the influence of the steroid-dependent pathogenetic mechanism.
Forensic entomology and the estimation of the minimum time since death in indoor cases.
Bugelli, Valentina; Forni, David; Bassi, Luciani Alessandro; Di Paolo, Marco; Marra, Damiano; Lenzi, Scilla; Toni, Chiara; Giusiani, Mario; Domenici, Ranieri; Gherardi, Mirella; Vanin, Stefano
2015-03-01
Eight cases that occurred indoors in which the insects played an important role in the mPMI estimation are presented. The bodies of socially isolated people and old people living alone were discovered in central Italy between June and November. mPMI ranged from a few days to several weeks. Insects were collected during the body recovery and the postmortem. Climatic data were obtained from the closest meteorological stations and from measurements performed on the site. Sarcophagidae and Calliphoridae species were present in 75% of the cases with Lucilia sericata and Chrysomya albiceps collected in 50% of the cases. Chrysomya albiceps was always found in association with Lucilia species. Scuttle flies (Phoridae) were found in 37.5% of the cases, confirming the ability of these species in indoor body colonization. We show that if sealed environment may delay, the insect arrival dirty houses may create the environment where sarcosaprophagous insects are already present. © 2014 American Academy of Forensic Sciences.
30 Doradus - Ultraviolet and optical stellar photometry
NASA Technical Reports Server (NTRS)
Hill, Jesse K.; Bohlin, Ralph C.; Cheng, Kwang-Ping; Fanelli, Michael N.; Hintzen, Paul; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Smith, Eric P.; Stecher, Theodore P.
1993-01-01
Ultraviolet Imaging Telescope (UIT) UV magnitudes in four bands, together with optical B magnitudes, are presented for up to 314 early-type stars located in a 9.7 x 9.7 arcmin field centered on R136. The magnitudes have an rms uncertainty estimated at 0.10 mag from a comparison between the UIT magnitudes and the IUE spectra. Spectral types and E(B-V) color excesses are estimated. The mean color excesses following the two extinction curves agree well with the predictions of the two-component extinction model of Fitzpatrick and Savage (1984). However, the degree of nebular extinction is found to vary systematically by large amounts over the 30 Dor field. The minimum of nebular extinction in the central parts of the nebula suggests that dust has been expelled from this region by stellar winds. It is suggested that the form of the UV extinction curve can be understood as a consequence of the evolutionary state of the stellar population responsible for making the dust grains.
The small-x gluon distribution in centrality biased pA and pp collisions
NASA Astrophysics Data System (ADS)
Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir
2018-06-01
The nuclear modification factor RpA (pT) provides information on the small-x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as reweighting of configurations of small-x gluons. We find that the biased nuclear modification factor QpA (pT) for central collisions is above RpA (pT) for minimum bias events, and that it may redevelop a "Cronin peak" even at small x. The magnitude of the peak is predicted to increase approximately like 1 /A⊥ ν, ν ∼ 0.6 ± 0.1, if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A⊥. We predict an enhanced Qpp (pT) - 1 ∼ 1 /(pT2) ν and a Cronin peak even for central pp collisions.
Klemm, V.; Frank, M.; Levasseur, S.; Halliday, A.N.; Hein, J.R.
2008-01-01
Three ferromanganese crusts from the northeast, northwest and central Atlantic were re-dated using osmium (Os) isotope stratigraphy and yield ages from middle Miocene to the present. The three Os isotope records do not show evidence for growth hiatuses. The reconstructed Os isotope-based growth rates for the sections older than 10??Ma are higher than those determined previously by the combined beryllium isotope (10Be/9Be) and cobalt (Co) constant-flux methods, which results in a decrease in the maximum age of each crust. This re-dating does not lead to significant changes to the interpretation of previously determined radiogenic isotope neodymium, lead (Nd, Pb) time series because the variability of these isotopes was very small in the records of the three crusts prior to 10??Ma. The Os isotope record of the central Atlantic crust shows a pronounced minimum during the middle Miocene between 15 and 12??Ma, similar to a minimum previously observed in two ferromanganese crusts from the central Pacific. For the other two Atlantic crusts, the Os isotope records and their calibration to the global seawater curve for the middle Miocene are either more uncertain or too short and thus do not allow for a reliable identification of an isotopic minimum. Similar to pronounced minima reported previously for the Cretaceous/Tertiary and Eocene/Oligocene boundaries, possible interpretations for the newly identified middle Miocene Os isotope minimum include changes in weathering intensity and/or a meteorite impact coinciding with the formation of the No??rdlinger Ries Crater. It is suggested that the eruption and weathering of the Columbia River flood basalts provided a significant amount of the unradiogenic Os required to produce the middle Miocene minimum. ?? 2008 Elsevier B.V.
Robust Means and Covariance Matrices by the Minimum Volume Ellipsoid (MVE).
ERIC Educational Resources Information Center
Blankmeyer, Eric
P. Rousseeuw and A. Leroy (1987) proposed a very robust alternative to classical estimates of mean vectors and covariance matrices, the Minimum Volume Ellipsoid (MVE). This paper describes the MVE technique and presents a BASIC program to implement it. The MVE is a "high breakdown" estimator, one that can cope with samples in which as…
Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology
Hayward, John
2016-01-01
The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world’s earliest art assemblages. Tantalising excavated evidence found across northern Australian suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages are to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated. PMID:27579865
van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew
2015-01-01
Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping US$34 for two HCV antigen tests and US$22 for two full blood count/clinical chemistry tests. Conclusions: Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. (Hepatology 2015;61:1174–1182) PMID:25482139
Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology.
Ross, June; Westaway, Kira; Travers, Meg; Morwood, Michael J; Hayward, John
2016-01-01
The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world's earliest art assemblages. Tantalising excavated evidence found across northern Australian suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages are to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated.
van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew
2015-04-01
Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping US$34 for two HCV antigen tests and US$22 for two full blood count/clinical chemistry tests. Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.
An estimate of the number of tropical tree species.
Slik, J W Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L; Bellingham, Peter J; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L M; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K; Chazdon, Robin L; Robin, Chazdon L; Clark, Connie; Clark, David B; Clark, Deborah A; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A O; Eisenlohr, Pedro V; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A; Joly, Carlos A; de Jong, Bernardus H J; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F; Lawes, Michael J; Amaral, Ieda Leao do; Letcher, Susan G; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H; Meilby, Henrik; Melo, Felipe P L; Metcalfe, Daniel J; Medjibe, Vincent P; Metzger, Jean Paul; Millet, Jerome; Mohandass, D; Montero, Juan C; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T F; Pitman, Nigel C A; Poorter, Lourens; Poulsen, Axel D; Poulsen, John; Powers, Jennifer; Prasad, Rama C; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; Dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A; Santos, Fernanda; Sarker, Swapan K; Satdichanh, Manichanh; Schmitt, Christine B; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I-Fang; Sunderland, Terry; Sunderand, Terry; Suresh, H S; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L C H; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A; Webb, Campbell O; Whitfeld, Timothy; Wich, Serge A; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C Yves; Yap, Sandra L; Yoneda, Tsuyoshi; Zahawi, Rakan A; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L; Garcia Luize, Bruno; Venticinque, Eduardo M
2015-06-16
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼ 40,000 and ∼ 53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼ 19,000-25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼ 4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
Effects of weather on survival in populations of boreal toads in Colorado
Scherer, R. D.; Muths, E.; Lambert, B.A.
2008-01-01
Understanding the relationships between animal population demography and the abiotic and biotic elements of the environments in which they live is a central objective in population ecology. For example, correlations between weather variables and the probability of survival in populations of temperate zone amphibians may be broadly applicable to several species if such correlations can be validated for multiple situations. This study focuses on the probability of survival and evaluates hypotheses based on six weather variables in three populations of Boreal Toads (Bufo boreas) from central Colorado over eight years. In addition to suggesting a relationship between some weather variables and survival probability in Boreal Toad populations, this study uses robust methods and highlights the need for demographic estimates that are precise and have minimal bias. Capture-recapture methods were used to collect the data, and the Cormack-Jolly-Seber model in program MARK was used for analysis. The top models included minimum daily winter air temperature, and the sum of the model weights for these models was 0.956. Weaker support was found for the importance of snow depth and the amount of environmental moisture in winter in modeling survival probability. Minimum daily winter air temperature was positively correlated with the probability of survival in Boreal Toads at other sites in Colorado and has been identified as an important covariate in studies in other parts of the world. If air temperatures are an important component of survival for Boreal Toads or other amphibians, changes in climate may have profound impacts on populations. Copyright 2008 Society for the Study of Amphibians and Reptiles.
Approximation for the Rayleigh Resolution of a Circular Aperture
ERIC Educational Resources Information Center
Mungan, Carl E.
2009-01-01
Rayleigh's criterion states that a pair of point sources are barely resolved by an optical instrument when the central maximum of the diffraction pattern due to one source coincides with the first minimum of the pattern of the other source. As derived in standard introductory physics textbooks, the first minimum for a rectangular slit of width "a"…
Rapid Characterization of Large Earthquakes in Chile
NASA Astrophysics Data System (ADS)
Barrientos, S. E.; Team, C.
2015-12-01
Chile, along 3000 km of it 4200 km long coast, is regularly affected by very large earthquakes (up to magnitude 9.5) resulting from the convergence and subduction of the Nazca plate beneath the South American plate. These megathrust earthquakes exhibit long rupture regions reaching several hundreds of km with fault displacements of several tens of meters. Minimum delay characterization of these giant events to establish their rupture extent and slip distribution is of the utmost importance for rapid estimations of the shaking area and their corresponding tsunami-genic potential evaluation, particularly when there are only few minutes to warn the coastal population for immediate actions. The task of a rapid evaluation of large earthquakes is accomplished in Chile through a network of sensors being implemented by the National Seismological Center of the University of Chile. The network is mainly composed approximately by one hundred broad-band and strong motion instruments and 130 GNSS devices; all will be connected in real time. Forty units present an optional RTX capability, where satellite orbits and clock corrections are sent to the field device producing a 1-Hz stream at 4-cm level. Tests are being conducted to stream the real-time raw data to be later processed at the central facility. Hypocentral locations and magnitudes are estimated after few minutes by automatic processing software based on wave arrival; for magnitudes less than 7.0 the rapid estimation works within acceptable bounds. For larger events, we are currently developing automatic detectors and amplitude estimators of displacement coming out from the real time GNSS streams. This software has been tested for several cases showing that, for plate interface events, the minimum magnitude threshold detectability reaches values within 6.2 and 6.5 (1-2 cm coastal displacement), providing an excellent tool for earthquake early characterization from a tsunamigenic perspective.
Li, Jian-fei; Li, Lin; Guo, Luo; Du, Shi-hong
2016-01-01
Urban landscape has the characteristics of spatial heterogeneity. Because the expansion process of urban constructive or ecological land has different resistance values, the land unit stimulates and promotes the expansion of ecological land with different intensity. To compare the effect of promoting and hindering functions in the same land unit, we firstly compared the minimum cumulative resistance value of promoting and hindering functions, and then looked for the balance of two landscape processes under the same standard. According to the ecology principle of minimum limit factor, taking the minimum cumulative resistance analysis method under two expansion processes as the evaluation method of urban land ecological suitability, this research took Zhuhai City as the study area to estimate urban ecological suitability by relative evaluation method with remote sensing image, field survey, and statistics data. With the support of ArcGIS, five types of indicators on landscape types, ecological value, soil erosion sensitivity, sensitivity of geological disasters, and ecological function were selected as input parameters in the minimum cumulative resistance model to compute urban ecological suitability. The results showed that the ecological suitability of the whole Zhuhai City was divided into five levels: constructive expansion prohibited zone (10.1%), constructive expansion restricted zone (32.9%), key construction zone (36.3%), priority development zone (2.3%), and basic cropland (18.4%). Ecological suitability of the central area of Zhuhai City was divided into four levels: constructive expansion prohibited zone (11.6%), constructive expansion restricted zone (25.6%), key construction zone (52.4%), priority development zone (10.4%). Finally, we put forward the sustainable development framework of Zhuhai City according to the research conclusion. On one hand, the government should strictly control the development of the urban center area. On the other hand, the secondary urban center area such as Junchang and Doumen need improve the public infrastructure to relieve the imbalance between eastern and western development in Zhuhai City.
Large floods and climatic change during the Holocene on the Ara River, Central Japan
NASA Astrophysics Data System (ADS)
Grossman, Michael J.
2001-07-01
A reconstruction of part of the Holocene large flood record for the Ara River in central Japan is presented. Maximum intermediate gravel-size dimensions of terrace and modern floodplain gravels were measured along an 18-km reach of the river and were used in tractive force equations to estimate minimum competent flood depths. Results suggest that the magnitudes of large floods on the Ara River have varied in a non-random fashion since the end of the last glacial period. Large floods with greater magnitudes occurred during the warming period of the post-glacial and the warmer early to middle Holocene (to ˜5500 years BP). A shift in the magnitudes of large floods occurred ˜5500-5000 years BP. From this time, during the cooler middle to late Holocene, large floods generally had lower magnitudes. In the modern period, large flood magnitudes are the largest in the data set. As typhoons are the main cause of large floods on the Ara River in the modern record, the variation in large flood magnitudes suggests that the incidence of typhoon visits to the central Japan changed as the climate changed during the Holocene. Further, significant dates in the large flood record on the Ara River correspond to significant dates in Europe and the USA.
NASA Astrophysics Data System (ADS)
Aranha dos Santos, Valentin; Schmetterer, Leopold; Gröschl, Martin; Garhofer, Gerhard; Werkmeister, René M.
2016-03-01
Dry eye syndrome is a highly prevalent disease of the ocular surface characterized by an instability of the tear film. Traditional methods used for the evaluation of tear film stability are invasive or show limited repeatability. Here we propose a new noninvasive approach to measure tear film thickness using an efficient delay estimator and ultrahigh resolution spectral domain OCT. Silicon wafer phantoms with layers of known thickness and group index were used to validate the estimator-based thickness measurement. A theoretical analysis of the fundamental limit of the precision of the estimator is presented and the analytical expression of the Cramér-Rao lower bound (CRLB), which is the minimum variance that may be achieved by any unbiased estimator, is derived. The performance of the estimator against noise was investigated using simulations. We found that the proposed estimator reaches the CRLB associated with the OCT amplitude signal. The technique was applied in vivo in healthy subjects and dry eye patients. Series of tear film thickness maps were generated, allowing for the visualization of tear film dynamics. Our results show that the central tear film thickness precisely measured in vivo with a coefficient of variation of about 0.65% and that repeatable tear film dynamics can be observed. The presented method has the potential of being an alternative to breakup time measurements (BUT) and could be used in clinical setting to study patients with dry eye disease and monitor their treatments.
Balancing Score Adjusted Targeted Minimum Loss-based Estimation
Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.
2015-01-01
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration
NASA Astrophysics Data System (ADS)
Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.
2017-04-01
Groundwater movement is influenced by several factors and processes in the hydrological cycle, from which, recharge is of high relevance. Since the amount of aquifer extractable water directly relates to the recharge amount, estimation of recharge is a perquisite of groundwater resources management. Recharge is highly affected by water loss mechanisms the major of which is actual evapotranspiration (ETa). It is, therefore, essential to have detailed assessment of ETa impact on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup which includes a groundwater model for the northern part of the Netherlands was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL based ETa estimates with in-situ abased estimates in the Netherlands showed that these SEBAL estimates were not reliable. As such results could not serve for calibrating root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region which depicts the groundwater flow direction. The mean seasonal water balance in CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest recharge estimated flux was for autumn season and was equal to 28 m3/day whereas the lowest flux was -5.6 m3/day in spring. The spatial distribution also shows that maximum groundwater recharge estimated was in the southeast of the region due to the lack of vegetation cover and deep groundwater levels. Lowest groundwater recharge estimated in urban and agricultural areas in the northwest of the Salland area. The overall conclusion of this study is that groundwater level fluctuations in the Salland area are affected by seasonal climatic variations specially precipitation and evapotranspiration. Such however was not supported by the SEBAL images which proved to be unreliable.
Cosmic Ray Hits in the Central Nervous System at Solar Maximum
NASA Technical Reports Server (NTRS)
Curtis, S. B.; Vazquez, M. E.; Wilson, J. W.; Atwell, W.; Kin, M.-H. Y.
2000-01-01
It has been suggested that a manned mission to Mars be launched at solar maximum rather than at solar minimum to minimize the radiation exposure to galactic cosmic rays. It is true that the number of hits from highly ionizing particles to critical regions in the brain will be less at solar maximum, and it is of interest to estimate how much less. We present here calculations for several sites within the brain from iron ions (z = 26) and from particles with charge, z, greater than or equal to 15. The same shielding configurations and sites in the brain used in an earlier paper for solar minimum are employed so that direct comparison of results between the two solar activity conditions can be made. A simple pressure-vessel wall and an equipment room onboard a spacecraft are chosen as shielding examples. In the equipment room, typical results for the thalamus are that the probability of any particles with z greater than or equal to 15 and from 2.3 percent to 1.3 percent for iron ions. The extra shielding provided in the equipment room makes little difference in these numbers. We conclude that this decrease in hit frequency (less than a factor of two) does not provide a compelling reason to avoid solar minimum for a manned mission to Mars. This conclusion could be revised, however, if a very small number of hits is found to cause critical malfunction within the brain.
A CLT on the SNR of Diagonally Loaded MVDR Filters
NASA Astrophysics Data System (ADS)
Rubio, Francisco; Mestre, Xavier; Hachem, Walid
2012-08-01
This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distorsionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially as well as temporarily correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincar\\'e inequality and the integration by parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
NASA Technical Reports Server (NTRS)
Imhoff, Marc L.; Bounoua, Lahouari; Harriss, Robert; Harriss, Robert; Wells, Gordon; Glantz, Michael; Dukhovny, Victor A.; Orlovsky, Leah
2007-01-01
An inverse process approach using satellite-driven (MODIS) biophysical modeling was used to quantitatively assess water resource demand in semi-arid and arid agricultural lands by comparing the carbon and water flux modeled under both equilibrium (in balance with prevailing climate) and non-equilibrium (irrigated) conditions. Since satellite observations of irrigated areas show higher leaf area indices (LAI) than is supportable by local precipitation, we postulate that the degree to which irrigated lands vary from equilibrium conditions is related to the amount of irrigation water used. For an observation year we used MODIS vegetation indices, local climate data, and the SiB2 photosynthesis-conductance model to examine the relationship between climate and the water stress function for a given grid-cell and observed leaf area. To estimate the minimum amount of supplemental water required for an observed cell, we added enough precipitation to the prevailing climatology at each time step to minimize the water stress function and bring the soil to field capacity. The experiment was conducted on irrigated lands on the U.S. Mexico border and Central Asia and compared to estimates of irrigation water used.
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Development of an adaptive harvest management program for Taiga bean geese
Johnson, Fred A.; Alhainen, Mikko; Fox, Anthony D.; Madsen, Jesper
2016-01-01
This report describes recent progress in specifying the elements of an adaptive harvest program for taiga bean goose. It describes harvest levels appropriate for first rebuilding the population of the Central Management Unit and then maintaining it near the goal specified in the AEWA International Single Species Action Plan (ISSAP). This report also provides estimates of the length of time it would take under ideal conditions (no density dependence and no harvest) to rebuild depleted populations in the Western and Eastern Management Units. We emphasize that our estimates are a first approximation because detailed demographic information is lacking for taiga bean geese. Using allometric relationships, we estimated parameters of a thetalogistic matrix population model. The mean intrinsic rate of growth was estimated as r = 0.150 (90% credible interval: 0.120 – 0.182). We estimated the mean form of density dependence as 2.361 (90% credible interval: 0.473 – 11.778), suggesting the strongest density dependence occurs when the population is near its carrying capacity. Based on expert opinion, carrying capacity (i.e., population size expected in the absence of hunting) for the Central Management Unit was estimated as K 87,900 (90% credible interval: 82,000 – 94,100). The ISSAP specifies a population goal for the Central Management Unit of 60,000 – 80,000 individuals in winter; thus, we specified a preliminary objective function as one which would minimize the difference between this goal and population size. Using the concept of stochastic dominance to explicitly account for uncertainty in demography, we determined that optimal harvest rates for 5, 10, 15, and 20-year time horizons were h = 0.00, 0.02, 0.05, and 0.06, respectively. These optima represent a tradeoff between the harvest rate and the time required to achieve and maintain a population size within desired bounds. We recognize, however, that regulation of absolute harvest rather than harvest rate is more practical, but our matrix model does not permit one to calculate an exact harvest associated with a specific harvest rate. Approximate harvests for current population size in the Central Management Unit are 0, 1,200, 2,300, and 3,500 for the 5, 10, 15, and 20-year time horizons, respectively. Populations of taiga bean geese in the Western and Eastern Units would require at least 10 and 13 years, respectively, to reach their minimum goals under the most optimistic of scenarios. The presence of harvest, density dependence, or environmental variation could extend these time frames considerably. Finally, we stress that development and implementation of internationally coordinated monitoring programs will be essential to further development and implementation of an adaptive harvest management program.
Rail vs truck transport of biomass.
Mahmudi, Hamed; Flynn, Peter C
2006-01-01
This study analyzes the economics of transshipping biomass from truck to train in a North American setting. Transshipment will only be economic when the cost per unit distance of a second transportation mode is less than the original mode. There is an optimum number of transshipment terminals which is related to biomass yield. Transshipment incurs incremental fixed costs, and hence there is a minimum shipping distance for rail transport above which lower costs/km offset the incremental fixed costs. For transport by dedicated unit train with an optimum number of terminals, the minimum economic rail shipping distance for straw is 170 km, and for boreal forest harvest residue wood chips is 145 km. The minimum economic shipping distance for straw exceeds the biomass draw distance for economically sized centrally located power plants, and hence the prospects for rail transport are limited to cases in which traffic congestion from truck transport would otherwise preclude project development. Ideally, wood chip transport costs would be lowered by rail transshipment for an economically sized centrally located power plant, but in a specific case in Alberta, Canada, the layout of existing rail lines precludes a centrally located plant supplied by rail, whereas a more versatile road system enables it by truck. Hence for wood chips as well as straw the economic incentive for rail transport to centrally located processing plants is limited. Rail transshipment may still be preferred in cases in which road congestion precludes truck delivery, for example as result of community objections.
The small-x gluon distribution in centrality biased pA and pp collisions
Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir
2018-04-04
Here, the nuclear modification factor R pA(p T) provides information on the small- x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as reweighting of configurations of small- x gluons. We find that the biased nuclear modification factor Q pA(p T) for central collisions is above R pA(p T) formore » minimum bias events, and that it may redevelop a “Cronin peak” even at small x . The magnitude of the peak is predicted to increase approximately like 1/A ⊥ ν, ν~0.6±0.1 , if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A ⊥. We predict an enhanced Q pp(p T)–1~1/(p T 2) ν and a Cronin peak even for central pp collisions.« less
The small-x gluon distribution in centrality biased pA and pp collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitru, Adrian; Kapilevich, Gary; Skokov, Vladimir
Here, the nuclear modification factor R pA(p T) provides information on the small- x gluon distribution of a nucleus at hadron colliders. Several experiments have recently measured the nuclear modification factor not only in minimum bias but also for central pA collisions. In this paper we analyze the bias on the configurations of soft gluon fields introduced by a centrality selection via the number of hard particles. Such bias can be viewed as reweighting of configurations of small- x gluons. We find that the biased nuclear modification factor Q pA(p T) for central collisions is above R pA(p T) formore » minimum bias events, and that it may redevelop a “Cronin peak” even at small x . The magnitude of the peak is predicted to increase approximately like 1/A ⊥ ν, ν~0.6±0.1 , if one is able to select more compact configurations of the projectile proton where its gluons occupy a smaller transverse area A ⊥. We predict an enhanced Q pp(p T)–1~1/(p T 2) ν and a Cronin peak even for central pp collisions.« less
Prentice, C.S.; Mann, P.; Pena, L.R.; Burr, G.
2003-01-01
The Septentrional fault zone (SFZ) is the major North American-Caribbean, strike-slip, plate boundary fault at the longitude of eastern Hispaniola. The SFZ traverses the densely populated Cibao Valley of the Dominican Republic, forming a prominent scarp in alluvium. Our studies at four sites along the central SFZ are aimed at quantifying the late Quaternary behavior of this structure to better understand the seismic hazard it represents for the northeastern Caribbean. Our investigations of excavations at sites near Rio Cenovi show that the most recent ground-rupturing earthquake along this fault in the north central Dominican Republic occurred between A.D. 1040 and A.D. 1230, and involved a minimum of ???4 m of left-lateral slip and 2.3 m of normal dip slip at that site. Our studies of offset stream terraces at two locations, Rio Juan Lopez and Rio Licey, provide late Holocene slip rate estimates of 6-9 mm/yr and a maximum of 11-12 mm/yr, respectively, across the Septentrional fault. Combining these results gives a best estimate of 6-12 mm/yr for the slip rate across the SFZ. Three excavations, two near Tenares and one at the Rio Licey site, yielded evidence for the occurrence of earlier prehistoric earthquakes. Dates of strata associated with the penultimate event suggest that it occurred post-A.D. 30, giving a recurrence interval of 800-1200 years. These studies indicate that the SFZ has likely accumulated elastic strain sufficient to generate a major earthquake during the more than 800 years since it last slipped and should be considered likely to produce a destructive future earthquake.
Minimum Wages and the Economic Well-Being of Single Mothers
ERIC Educational Resources Information Center
Sabia, Joseph J.
2008-01-01
Using pooled cross-sectional data from the 1992 to 2005 March Current Population Survey (CPS), this study examines the relationship between minimum wage increases and the economic well-being of single mothers. Estimation results show that minimum wage increases were ineffective at reducing poverty among single mothers. Most working single mothers…
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
Isothermal elastohydrodynamic lubrication of point contacts. 4: Starvation results
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1976-01-01
The influence of lubricant starvation on minimum film thickness was investigated by moving the inlet boundary closer to the contact center. The following expression was derived for the dimensionless inlet distance at the boundary between the fully flooded and starved conditions: m* = 1 + 3.06 ((R/b)(R/b)H) to the power 0.58, where R is the effective radius of curvature, b is the semiminor axis of the contact ellipse, and H is the central film thickness for fully flooded conditions. A corresponding expression was also given based on the minimum film thickness for fully flooded conditions. Therefore, for m m*, starvation occurs and, for m m*, a fully flooded condition exists. Two other expressions were also derived for the central and minimum film thicknesses for a starved condition. Contour plots of the pressure and the film thickness in and around the contact are shown for the fully flooded and starved lubricating conditions, from which the film thickness was observed to decrease substantially as starvation increases.
The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.
Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar
2018-03-01
This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery have been plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility (< 10 cycles per minute), and the difference between near and distance phoria (> 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near phoria, and monocular accommodative facility yield good sensitivity and specificity for diagnosis of NSBVAs in a community set-up. © 2017 Optometry Australia.
An estimate of the number of tropical tree species
Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.
2015-01-01
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279
NASA Technical Reports Server (NTRS)
Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.
1978-01-01
On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered to be a first order attempt to predict the cycle's activity using one parameter of physical importance.
Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach, which uses minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. Minimum variance method is a robust high resolution method to estimate spectrum. Based on the theory of SAR imaging, the signal model of SAR imagery is analyzed to be feasible for using data extrapolation methods to improve the resolution of SAR image. The method is used to extrapolate the efficient bandwidth in phase history field and better results are obtained compared with adaptive weighted norm extrapolation (AWNE) method and traditional imaging method using simulated data and actual measured data.
Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro
2013-01-01
This paper proposes a novel hypnosis control method using Auditory Evoked Potential Index (aepEX) as a hypnosis index. In order to avoid side effects of an anesthetic drug, it is desirable to reduce the amount of an anesthetic drug during surgery. For this purpose many studies of hypnosis control systems have been done. Most of them use Bispectral Index (BIS), another hypnosis index, but it has problems of dependence on anesthetic drugs and nonsmooth change near some particular values. On the other hand, aepEX has an ability of clear distinction between patient consciousness and unconsciousness and independence of anesthetic drugs. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis and adjusting infusion rate of an anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated utilizing the property of aepEX pharmacodynamics. The infusion rate of propofol is adjusted so that effect-site concentration of propofol may be kept near and always above the minimum effect-site concentration. Simulation results of hypnosis control using the proposed method show that the minimum concentration can be estimated appropriately and that the proposed control method can maintain hypnosis adequately and reduce the total infusion amount of propofol.
Assessment of corneal epithelial thickness in dry eye patients.
Cui, Xinhan; Hong, Jiaxu; Wang, Fei; Deng, Sophie X; Yang, Yujing; Zhu, Xiaoyu; Wu, Dan; Zhao, Yujin; Xu, Jianjiang
2014-12-01
To investigate the features of corneal epithelial thickness topography with Fourier-domain optical coherence tomography (OCT) in dry eye patients. In this cross-sectional study, 100 symptomatic dry eye patients and 35 normal subjects were enrolled. All participants answered the ocular surface disease index questionnaire and were subjected to OCT, corneal fluorescein staining, tear breakup time, Schirmer 1 test without anesthetic (S1t), and meibomian morphology. Several epithelium statistics for each eye, including central, superior, inferior, minimum, maximum, minimum - maximum, and map standard deviation, were averaged. Correlations of epithelial thickness with the symptoms of dry eye were calculated. The mean (±SD) central, superior, and inferior corneal epithelial thickness was 53.57 (±3.31) μm, 52.00 (±3.39) μm, and 53.03 (±3.67) μm in normal eyes and 52.71 (±2.83) μm, 50.58 (±3.44) μm, and 52.53 (±3.36) μm in dry eyes, respectively. The superior corneal epithelium was thinner in dry eye patients compared with normal subjects (p = 0.037), whereas central and inferior epithelium were not statistically different. In the dry eye group, patients with higher severity grades had thinner superior (p = 0.017) and minimum (p < 0.001) epithelial thickness, more wide range (p = 0.032), and greater deviation (p = 0.003). The average central epithelial thickness had no correlation with tear breakup time, S1t, or the severity of meibomian glands, whereas average superior epithelial thickness positively correlated with S1t (r = 0.238, p = 0.017). Fourier-domain OCT demonstrated that the thickness map of the dry eye corneal epithelium was thinner than normal eyes in the superior region. In more severe dry eye disease patients, the superior and minimum epithelium was much thinner, with a greater range of map standard deviation.
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Estimated mass change from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low pass filters like decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any post processing, the noise and errors in spherical harmonic solutions introduced very clear high frequency components in the spatial domain. By removing these high frequency components and reserve the overall pattern of the signal, we obtained better mass estimates with minimum leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage corrected (forward modeled) SH results.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
Minimum Wage Increases and the Working Poor. Changing Domestic Priorities Discussion Paper.
ERIC Educational Resources Information Center
Mincy, Ronald B.
Most economists agree that the difficulties of targeting minimum wage increases to low-income families make such increases ineffective tools for reducing poverty. This paper provides estimates of the impact of minimum wage increases on the poverty gap and the number of poor families, and shows which factors are barriers to decreasing poverty…
Minimum Wages and School Enrollment of Teenagers: A Look at the 1990's.
ERIC Educational Resources Information Center
Chaplin, Duncan D.; Turner, Mark D.; Pape, Andreas, D.
2003-01-01
Estimates the effects of higher minimum wages on school enrollment using the Common Core of Data. Controlling for local labor market conditions and state and year fixed effects, finds some evidence that higher minimum wages reduce teen school enrollment in states where students drop out before age 18. (23 references) (Author/PKP)
Wood, Dustin A.; Halstead, Brian J.; Casazza, Michael L.; Hansen, Eric C.; Wylie, Glenn D.; Vandergast, Amy
2015-01-01
Anthropogenic habitat fragmentation can disrupt the ability of species to disperse across landscapes, which can alter the levels and distribution of genetic diversity within populations and negatively impact long-term viability. The giant gartersnake (Thamnophis gigas) is a state and federally threatened species that historically occurred in the wetland habitats of California’s Great Central Valley. Despite the loss of 93 % of historic wetlands throughout the Central Valley, giant gartersnakes continue to persist in relatively small, isolated patches of highly modified agricultural wetlands. Gathering information regarding genetic diversity and effective population size represents an essential component for conservation management programs aimed at this species. Previous mitochondrial sequence studies have revealed historical patterns of differentiation, yet little is known about contemporary population structure and diversity. On the basis of 15 microsatellite loci, we estimate population structure and compare indices of genetic diversity among populations spanning seven drainage basins within the Central Valley. We sought to understand how habitat loss may have affected genetic differentiation, genetic diversity and effective population size, and what these patterns suggest in terms of management and restoration actions. We recovered five genetic clusters that were consistent with regional drainage basins, although three northern basins within the Sacramento Valley formed a single genetic cluster. Our results show that northern drainage basin populations have higher connectivity than among central and southern basins populations, and that greater differentiation exists among the more geographically isolated populations in the central and southern portion of the species’ range. Genetic diversity measures among basins were significantly different, and were generally lower in southern basin populations. Levels of inbreeding and evidence of population bottlenecks were detected in about half the populations we sampled, and effective population size estimates were well below recommended minimum thresholds to avoid inbreeding. Efforts focused on maintaining and enhancing existing wetlands to facilitate dispersal between basins and increase local effective population sizes may be critical for these otherwise isolated populations.
Continuously Deformation Monitoring of Subway Tunnel Based on Terrestrial Point Clouds
NASA Astrophysics Data System (ADS)
Kang, Z.; Tuo, L.; Zlatanova, S.
2012-07-01
The deformation monitoring of subway tunnel is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent stations registration is replaced by sectioncontrolled registration, so that the common control points can be used by each station and thus the error accumulation avoided within a section. Afterwards, the central axis of the subway tunnel is determined through RANSAC (Random Sample Consensus) algorithm and curve fitting. Although with very high resolution, laser points are still discrete and thus the vertical section is computed via the quadric fitting of the vicinity of interest, instead of the fitting of the whole model of a subway tunnel, which is determined by the intersection line rotated about the central axis of tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noises. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by the comparison between vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computation efficiency. The experimental result of fitting accuracy analysis shows the maximum deviation between interpolated point and real point is 1.5 mm, and the minimum one is 0.1 mm; the convergent tendency of the tunnel was detected by the comparison of adjacent fitting radius. The maximum error is 6 mm, while the minimum one is 1 mm. The computation cost of vertical section abstraction is within 3 seconds/section, which proves high efficiency..
Personius, Stephen F.; Crone, Anthony J.; Machette, Michael N.; Mahan, Shannon; Lidke, David J.
2009-01-01
The 86-km-long Surprise Valley normal fault forms part of the active northwestern margin of the Basin and Range province in northeastern California. We use trench mapping and radiocarbon, luminescence, and tephra dating to estimate displacements and timing of the past five surface-rupturing earthquakes on the central part of the fault near Cedarville. A Bayesian OxCal analysis of timing constraints indicates earthquake times of 18.2 ± 2.6, 10.9 ± 3.2, 8.5 ± 0.5, 5.8 ± 1.5, and 1.2 ± 0.1 ka. These data yield recurrence intervals of 7.3 ± 4.1, 2.5 ± 3.2, 2.7 ± 1.6, and 4.5 ± 1.5 ka and an elapsed time of 1.2 ± 0.1 ka since the latest surface-rupturing earthquake. Our best estimate of latest Quaternary vertical slip rate is 0.6 ?? 0.1 mm/a. This late Quaternary rate is remarkably similar to long-term (8-14 Ma) minimum vertical slip rates (>0.4-0.5 ± 0.3 mm/a) calculated from recently acquired seismic reflection and chronologic and structural data in Surprise Valley and the adjacent Warner Mountains. However, our slip rate yields estimates of extension that are lower than recent campaign GPS determinations by factors of 1.5-4 unless the fault has an unusually shallow (30°-35°) dip as suggested by recently acquired seismic reflection data. Coseismic displacements of 2-4.5 ± 1 m documented in the trench and probable rupture lengths of 53-65 km indicate a history of latest Quaternary earthquakes of M 6.8-7.3 on the central part of the. Surprise Valley fault.
Alaska coal geology, resources, and coalbed methane potential
Flores, Romeo M.; Stricker, Gary D.; Kinney, Scott A.
2004-01-01
Estimated Alaska coal resources are largely in Cretaceous and Tertiary rocks distributed in three major provinces. Northern Alaska-Slope, Central Alaska-Nenana, and Southern Alaska-Cook Inlet. Cretaceous resources, predominantly bituminous coal and lignite, are in the Northern Alaska-Slope coal province. Most of the Tertiary resources, mainly lignite to subbituminous coal with minor amounts of bituminous and semianthracite coals, are in the other two provinces. The combined measured, indicated, inferred, and hypothetical coal resources in the three areas are estimated to be 5,526 billion short tons (5,012 billion metric tons), which constitutes about 87 percent of Alaska's coal and surpasses the total coal resources of the conterminous United States by 40 percent. Coal mining has been intermittent in the Central Alaskan-Nenana and Southern Alaska-Cook Inlet coal provinces, with only a small fraction of the identified coal resource having been produced from some dozen underground and strip mines in these two provinces. Alaskan coal resources have a lower sulfur content (averaging 0.3 percent) than most coals in the conterminous United States are within or below the minimum sulfur value mandated by the 1990 Clean Air Act amendments. The identified resources are near existing and planned infrastructure to promote development, transportation, and marketing of this low-sulfur coal. The relatively short distances to countries in the west Pacific Rim make them more exportable to these countries than to the lower 48 States of the United States. Another untapped but potential resource of large magnitude is coalbed methane, which has been estimated to total 1,000 trillion cubic feet (28 trillion cubic meters) by T.N. Smith 1995, Coalbed methane potential for Alaska and drilling results for the upper Cook Inlet Basin: Intergas, May 15 - 19, 1995, Tuscaloosa, University of Alabama, p. 1 - 21.
NASA Astrophysics Data System (ADS)
Mishra, V.; Cruise, J. F.; Mecikalski, J. R.
2015-12-01
Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Mulit-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (Land Information System (LIS) and an agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data scarce regions.Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Previous studies have shown that the principle of maximum entropy (POME) can be utilized with minimal constraints to develop vertical soil moisture profiles with accuracy (MAE = 1% for monotonically dry profiles; MAE = 2% for monotonically wet profiles and MAE = 3.8% for mixed profiles) when compared to laboratory and field data. In this study, vertical soil moisture profiles were developed using the POME model to evaluate an irrigation schedule over a maze field in north central Alabama (USA). The model was validated using both field data and a physically based mathematical model. The results demonstrate that a simple two-constraint entropy model under the assumption of a uniform initial soil moisture distribution can simulate most soil moisture profiles within the field area for 6 different soil types. The results of the irrigation simulation demonstrated that the POME model produced a very efficient irrigation strategy with loss of about 1.9% of the total applied irrigation water. However, areas of fine-textured soil (i.e. silty clay) resulted in plant stress of nearly 30% of the available moisture content due to insufficient water supply on the last day of the drying phase of the irrigation cycle. Overall, the POME approach showed promise as a general strategy to guide irrigation in humid environments, with minimum input requirements.
PHENIX results on open heavy flavor production
NASA Astrophysics Data System (ADS)
Hachiya, Takashi
2018-02-01
PHENIX measures the open heavy flavor productions in p + p, Cu+Au, and Au+Au collisions at = 200 and 510 GeV using the silicon tracking detectors for mid- and forward rapidities. In Au+Au collisions, the nuclear modification of single electrons from bottom and charm hadron decays are measured for minimum bias and most central collisions. It is found that bottoms are less suppressed than charms in pT=3-5 GeV/c and charms in most central collisions are more suppressed than that in minimum bias collisions. In p + p and Cu+Au collisions, J/ψ from B meson decays are measured at forward and backward rapidities. The nuclear modification of B mesons in Cu+Au collisions is consistent with unity.
NASA Astrophysics Data System (ADS)
Ray, Arijit; Hatui, Kalyanbrata; Paul, Dalim Kumar; Sen, Gautam; Biswas, S. K.; Das, Brindaban
2016-02-01
Kutch rift basin of northwestern India is characterized by a topography that is controlled by a number of fault controlled uplifted blocks. Kutch Mainland Uplift, the largest uplifted block in the central part of the basin, contains alkali basalt plugs and tholeiitic basalt flows of the Deccan age. Alkali plugs often contain small, discoidal mantle xenoliths of spinel lherzolite and spinel wehrlite composition. Olivine occurs as xenocrysts (coarse, fractured, broken olivine grains with embayed margin; Fo> 90), phenocrysts (euhedral, smaller, and less forsteritic ~ Fo80), and as groundmass grains (small, anhedral, Fo75) in these alkali basalts. In a few cases, the alkali plugs are connected with feeder dykes. Based on the width of feeder dykes, on the sizes of the xenocrysts and xenoliths, thickness of alteration rim around olivine xenocryst, we estimate that the alkali magmas erupted at a minimum speed of 0.37 km per hour. The speed was likely greater because of the fact that the xenoliths broke up into smaller fragments as their host magma ascended through the lithosphere.
NASA Technical Reports Server (NTRS)
Day, R. L.; Petersen, G. W.
1983-01-01
Thermal-infrared data from the Heat Capacity Mapping Mission satellite were used to map the spatial distribution of diurnal surface temperatures and to estimate mean annual soil temperatures (MAST) and annual surface temperature amplitudes (AMP) in semi-arid east central Utah. Diurnal data with minimal snow and cloud cover were selected for five dates throughout a yearly period and geometrically co-registered. Rubber-sheet stretching was aided by the WARP program which allowed preview of image transformations. Daytime maximum and nighttime minimum temperatures were averaged to generation average daily temperature (ADT) data set for each of the five dates. Five ADT values for each pixel were used to fit a sine curve describing the theoretical annual surface temperature response as defined by a solution of a one-dimensinal heat flow equation. Linearization of the equation produced estimates of MAST and AMP plus associated confidence statistics. MAST values were grouped into classes and displayed on a color video screen. Diurnal surface temperatures and MAST were primarily correlated with elevation.
ERIC Educational Resources Information Center
Haryati, Sri
2014-01-01
The study aims at analyzing the achievement of Minimum Service Standards (MSS) in Basic Education through a case study at Magelang Municipality. The findings shall be used as a starting point to predict the needs to meet MMS by 2015 and to provide strategies for achievement. Both primary and secondary data were used in the study investigating the…
The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.
ERIC Educational Resources Information Center
Yuen, Terence
2003-01-01
Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)
Compact, low-loss and low-power 8×8 broadband silicon optical switch.
Chen, Long; Chen, Young-kai
2012-08-13
We demonstrated a 8×8 broadband optical switch on silicon for transverse-electrical polarization using a switch-and-selector architecture. The switch has a footprint of only 8 mm × 8 mm, minimum on-chip loss of about 4 dB, and a port-to-port insertion loss variation of only 0.8 dB near some spectral regions. The port-to-port isolation is above 30 dB over the entire 80-nm-wide spectral range or above 45 dB near the central 30 nm. We also demonstrated a switching power of less than 1.5 mW per element and a speed of 2 kHz, and estimated the upper bound of total power consumption to be less than 70 mW even without optimization of the default state of the individual switch elements.
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Slayback, Daniel; Yager, Karina
2014-01-01
The central Andes extends from 7 deg to 21 deg S, with its eastern boundary defined by elevation (1000m and greater) and its western boundary by the coastline. The authors used a combination of surface observations, reanalysis, and the University of Utah Tropical Rainfall Measuring Mission (TRMM) precipitation features (PF) database to understand the characteristics of convective systems and associated rainfall in the central Andes during the TRMM era, 1998-2012. Compared to other dry (West Africa), mountainous (Himalayas), and dynamically linked (Amazon) regions in the tropics, the central Andes PF population was distinct from these other regions, with small and weak PFs dominating its cumulative distribution functions and annual rainfall totals. No more than 10% of PFs in the central Andes met any of the thresholds used to identify and define deep convection (minimum IR cloud-top temperatures, minimum 85-GHz brightness temperature, maximum height of the 40-dBZ echo). For most of the PFs, available moisture was limited (less than 35mm) and instability low (less than 500 J kg(exp -1)). The central Andes represents a largely stable, dry to arid environment, limiting system development and organization. Hence, primarily short-duration events (less than 60 min) characterized by shallow convection and light to light-moderate rainfall rates (0.5-4.0 mm h(exp -1)) were found.
NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of 10+ % when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection ( 5-10% fewer trends detected in comparison with the reference data).
NASA Astrophysics Data System (ADS)
Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.
2017-07-01
This study evaluates the dew point method (Allen et al. 1998) to estimate atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas but also negative bias in coastal areas, and its average root mean square error for all evaluated stations was 0.38 kPa. The improved model assumed a bi-linear relation between estimated vapor pressure deficit (difference between saturated vapor pressure at minimum and average temperature) and measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed bias and allowed for a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation on the model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
Chen, Boris B.; Sverdlik, Leonid G.; Imashev, Sanjar A.; ...
2013-01-01
The vertical structure of aerosol optical and physical properties was measured by Lidar in Eastern Kyrgyzstan, Central Asia, from June 2008 to May 2009. Lidar measurements were supplemented with surface-based measurements of PM 2.5 and PM 10 mass and chemical composition in both size fractions. Dust transported into the region is common, being detected 33% of the time. The maximum frequency occurred in the spring of 2009. Dust transported to Central Asia comes from regional sources, for example, Taklimakan desert and Aral Sea basin, and from long-range transport, for example, deserts of Arabia, Northeast Africa, Iran, and Pakistan. Regional sourcesmore » are characterized by pollution transport with maximum values of coarse particles within the planetary boundary layer, aerosol optical thickness, extinction coefficient, integral coefficient of aerosol backscatter, and minimum values of the Ångström exponent. Pollution associated with air masses transported over long distances has different characteristics during autumn, winter, and spring. During winter, dust emissions were low resulting in high values of the Ångström exponent (about 0.51) and the fine particle mass fraction (64%). Dust storms were more frequent during spring with an increase in coarse dust particles in comparison to winter. The aerosol vertical profiles can be used to lower uncertainty in estimating radiative forcing.« less
Home range and use of habitat of western yellow-billed cuckoos on the middle Rio Grande, New Mexico
Sechrist, Juddson; Ahlers, Darrell; Potak Zehfuss, Katherine; Doster, Robert; Paxton, Eben H.; Ryan, Vicky M.
2013-01-01
The western yellow-billed cuckoo (Coccyzus americanus occidentalis) is a Distinct Population Segment that has been proposed for listing under the Endangered Species Act, yet very little is known about its spatial use on the breeding grounds. We implemented a study, using radio telemetry, of home range and use of habitat for breeding cuckoos along the Middle Rio Grande in central New Mexico in 2007 and 2008. Nine of 13 cuckoos were tracked for sufficient time to generate estimates of home range. Overall size of home ranges for the 2 years was 91 ha for a minimum-convex-polygon estimate and 62 ha for a 95%-kernel-home-range estimate. Home ranges varied considerably among individuals, highlighting variability in spatial use by cuckoos. Additionally, use of habitat differed between core areas and overall home ranges, but the differences were nonsignificant. Home ranges calculated for western yellow-billed cuckoos on the Middle Rio Grande are larger than those in other southwestern riparian areas. Based on calculated home ranges and availability of riparian habitat in the study area, we estimate that the study area is capable of supporting 82-99 nonoverlapping home ranges of cuckoos. Spatial data from this study should contribute to the understanding of the requirements of area and habitat of this species for management of resources and help facilitate recovery if a listing occurs.
Preparation of nanosize polyaniline and its utilization for microwave absorber.
Abbas, S M; Dixit, A K; Chatterjee, R; Goel, T C
2007-06-01
Polyaniline powder in nanosize has been synthesized by chemical oxidative route. XRD, FTIR, and TEM were used to characterize the polyaniline powder. Crytallite size was estimated from XRD profile and also ascertained by TEM in the range of 15 to 20 nm. The composite absorbers have been prepared by mixing different ratios of polyaniline into procured polyurethane (PU) binder. The complex permittivity (epsilon' - jepsilon") and complex permeability (mu' - jmu") were measured in X-band (8.2-12.4 GHz) using Agilent network analyzer (model PNA E8364B) and its software module 85071 (version 'E'). Measured values of these parameters were used to determine the reflection loss at different frequencies and sample thicknesses, based on a model of a single layered plane wave absorber backed by a perfect conductor. An optimized polyaniline/PU ratio of 3:1 has given a minimum reflection loss of -30 dB (99.9% power absorption) at the central frequency 10 GHz and the bandwidth (full width at half minimum) of 4.2 GHz over whole X-band (8.2 to 12.4 GHz) in a sample thickness of 3.0 mm. The prepared composites can be fruitfully utilized for suppression of electromagnetic interference (EMI) and reduction of radar signatures (stealth technology).
Evidence for ultrafast outflows in radio-quiet AGNs - III. Location and energetics
NASA Astrophysics Data System (ADS)
Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.
2012-05-01
Using the results of a previous X-ray photoionization modelling of blueshifted Fe K absorption lines on a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this Letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval ˜0.0003-0.03 pc (˜ 102-104rs) from the central black hole, consistent with what is expected for accretion disc winds/outflows. The mass outflow rates are constrained between ˜0.01 and 1 M⊙ yr-1, corresponding to >rsim5-10 per cent of the accretion rates. The average lower/upper limits on the mechanical power are log? 42.6-44.6 erg s-1. However, the minimum possible value of the ratio between the mechanical power and bolometric luminosity is constrained to be comparable or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable to provide a significant contribution to the AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. It cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction in improving classifier reliability. PMID:23861920
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an averagemore » of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be - 0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not reduce significantly the uncertainty of the daily mean temperature. The bias reduction in a statistically significant manner (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to - 0.01 ± 0.20 °C. Finally, estimating the parameters of population of all SURFRAD observations, the 95% confidence intervals based on 24 hourly measurements is from - 0.025 to 0.004 °C, compared to a confidence interval from - 0.15 to 0.05 °C based on the mean of T max and T min.« less
Cross-scale modeling of surface temperature and tree seedling establishment inmountain landscapes
Dingman, John; Sweet, Lynn C.; McCullough, Ian M.; Davis, Frank W.; Flint, Alan L.; Franklin, Janet; Flint, Lorraine E.
2013-01-01
Abstract: Introduction: Estimating surface temperature from above-ground field measurements is important for understanding the complex landscape patterns of plant seedling survival and establishment, processes which occur at heights of only several centimeters. Currently, future climate models predict temperature at 2 m above ground, leaving ground-surface microclimate not well characterized. Methods: Using a network of field temperature sensors and climate models, a ground-surface temperature method was used to estimate microclimate variability of minimum and maximum temperature. Temperature lapse rates were derived from field temperature sensors and distributed across the landscape capturing differences in solar radiation and cold air drainages modeled at a 30-m spatial resolution. Results: The surface temperature estimation method used for this analysis successfully estimated minimum surface temperatures on north-facing, south-facing, valley, and ridgeline topographic settings, and when compared to measured temperatures yielded an R2 of 0.88, 0.80, 0.88, and 0.80, respectively. Maximum surface temperatures generally had slightly more spatial variability than minimum surface temperatures, resulting in R2 values of 0.86, 0.77, 0.72, and 0.79 for north-facing, south-facing, valley, and ridgeline topographic settings. Quasi-Poisson regressions predicting recruitment of Quercus kelloggii (black oak) seedlings from temperature variables were significantly improved using these estimates of surface temperature compared to air temperature modeled at 2 m. Conclusion: Predicting minimum and maximum ground-surface temperatures using a downscaled climate model coupled with temperature lapse rates estimated from field measurements provides a method for modeling temperature effects on plant recruitment. Such methods could be applied to improve projections of species’ range shifts under climate change. Areas of complex topography can provide intricate microclimates that may allow species to redistribute locally as climate changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chylek, Petr; Augustine, John A.; Klett, James D.
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an averagemore » of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be - 0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not reduce significantly the uncertainty of the daily mean temperature. The bias reduction in a statistically significant manner (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to - 0.01 ± 0.20 °C. Finally, estimating the parameters of population of all SURFRAD observations, the 95% confidence intervals based on 24 hourly measurements is from - 0.025 to 0.004 °C, compared to a confidence interval from - 0.15 to 0.05 °C based on the mean of T max and T min.« less
McKee, Paul W.; Clark, Brian R.; Czarnecki, John B.
2004-01-01
Conjunctive-use optimization modeling was done to assist water managers and planners by estimating the maximum amount of ground water that hypothetically could be withdrawn from wells within the Sparta aquifer indefinitely without violating hydraulic-head or stream-discharge constraints. The Sparta aquifer is largely a confined aquifer of regional importance that comprises a sequence of unconsolidated sand units that are contained within the Sparta Sand. In 2000, more than 35.4 million cubic feet per day (Mft3/d) of water were withdrawn from the aquifer by more than 900 wells, primarily for industry, municipal supply, and crop irrigation in Arkansas. Continued, heavy withdrawals from the aquifer have caused several large cones of depression, lowering hydraulic heads below the top of the Sparta Sand in parts of Union and Columbia Counties and several areas in north-central Louisiana. Problems related to overdraft in the Sparta aquifer can result in increased drilling and pumping costs, reduced well yields, and degraded water quality in areas of large drawdown. A finite-difference ground-water flow model was developed for the Sparta aquifer using MODFLOW, primarily in eastern and southeastern Arkansas and north-central Louisiana. Observed aquifer conditions in 1997 supported by numerical simulations of ground-water flow show that continued pumping at withdrawal rates representative of 1990 - 1997 rates cannot be sustained indefinitely without causing hydraulic heads to drop substantially below the top of the Sparta Sand in southern Arkansas and north-central Louisiana. Areas of ground-water levels below the top of the Sparta Sand have been designated as Critical Ground-Water Areas by the State of Arkansas. A steady-state conjunctive-use optimization model was developed to simulate optimized surface-water and ground-water withdrawals while maintaining hydraulic-head and streamflow constraints, thus determining the 'sustainable yield' for the aquifer. Initial attempts to estimate sustainable yield using simulated 1997 hydraulic heads as initial heads in Scenario 1 and 100 percent of the baseline 1990-1997 withdrawal rate as the lower specified limit in Scenario 2 led to infeasible results. Sustainable yield was estimated successfully for scenario 3 with three variations on the upper limit of withdrawal rates. Additionally, ground-water withdrawals in Union County were fixed at 35.6 percent of the baseline 1990-1997 withdrawal rate in Scenario 3. These fixed withdrawals are recognized by the Arkansas Soil and Water Conservation Commission to be sustainable as determined in a previous study. The optimized solutions maintained hydraulic heads at or above the top of the Sparta Sand (except in the outcrop areas where unconfined conditions occur) and streamflow within the outcrop areas was maintained at or above minimum levels. Scenario 3 used limits of 100, 150, and 200 percent of baseline 1990-1997 withdrawal rates for the upper specified limit on 1,119 withdrawal decision variables (managed wells) resulting in estimated sustainable yields ranging from 11.6 to 13.2 Mft3/d in Arkansas and 0.3 to 0.5 Mft3/d in Louisiana. Assuming the total 2 Conjunctive-Use Optimization Model and Sustainable-Yield Estimation for the Sparta Aquifer of Southeastern Arkansas and North-Central Louisiana water demand is equal to the baseline 1990-1997 withdrawal rates, the sustainable yields estimated from the three scenarios only provide 52 to 59 percent of the total ground-water demand for Arkansas; the remainder is defined as unmet demand that could be obtained from large, sustainable surface-water withdrawals.
The SME gauge sector with minimum length
NASA Astrophysics Data System (ADS)
Belich, H.; Louzada, H. L. C.
2017-12-01
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.
Lin, Yu-Kai; Wang, Yu-Chun; Lin, Pay-Liam; Li, Ming-Hsu; Ho, Tsung-Jung
2013-09-01
This study aimed to identify optimal cold-temperature indices that are associated with the elevated risks of mortality from, and outpatient visits for all causes and cardiopulmonary diseases during the cold seasons (November to April) from 2000 to 2008 in Northern, Central and Southern Taiwan. Eight cold-temperature indices, average, maximum, and minimum temperatures, and the temperature humidity index, wind chill index, apparent temperature, effective temperature (ET), and net effective temperature and their standardized Z scores were applied to distributed lag non-linear models. Index-specific cumulative 26-day (lag 0-25) mortality risk, cumulative 8-day (lag 0-7) outpatient visit risk, and their 95% confidence intervals were estimated at 1 and 2 standardized deviations below the median temperature, comparing with the Z score of the lowest risks for mortality and outpatient visits. The average temperature was adequate to evaluate the mortality risk from all causes and circulatory diseases. Excess all-cause mortality increased for 17-24% when average temperature was at Z=-1, and for 27-41% at Z=-2 among study areas. The cold-temperature indices were inconsistent in estimating risk of outpatient visits. Average temperature and THI were appropriate indices for measuring risk for all-cause outpatient visits. Relative risk of all-cause outpatient visits increased slightly by 2-7% when average temperature was at Z=-1, but no significant risk at Z=-2. Minimum temperature estimated the strongest risk associated with outpatient visits of respiratory diseases. In conclusion, the relationships between cold temperatures and health varied among study areas, types of health event, and the cold-temperature indices applied. Mortality from all causes and circulatory diseases and outpatient visits of respiratory diseases has a strong association with cold temperatures in the subtropical island, Taiwan. Copyright © 2013 Elsevier B.V. All rights reserved.
Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody
2018-04-01
To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n 150 796). Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0·986; 0·977, 0·995); a decrease that accelerated over time (P-interaction=0·025). Increasing minimum wage was associated with higher obesity (1·019; 1·008, 1·030), but did not alter the rate of increase in obesity prevalence (P-interaction=0·8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0·14 percentage points (-0·14; -0·23, -0·05) for underweight and an increase of about 0·1 percentage points (0·12; 0·04, 0·20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.
Risser, Dennis W.; Gburek, William J.; Folmar, Gordon J.
2005-01-01
This study by the U.S. Geological Survey (USGS), in cooperation with the Agricultural Research Service (ARS), U.S. Department of Agriculture, compared multiple methods for estimating ground-water recharge and base flow (as a proxy for recharge) at sites in east-central Pennsylvania underlain by fractured bedrock and representative of a humid-continental climate. This study was one of several within the USGS Ground-Water Resources Program designed to provide an improved understanding of methods for estimating recharge in the eastern United States. Recharge was estimated on a monthly and annual basis using four methods?(1) unsaturated-zone drainage collected in gravity lysimeters, (2) daily water balance, (3) water-table fluctuations in wells, and (4) equations of Rorabaugh. Base flow was estimated by streamflow-hydrograph separation using the computer programs PART and HYSEP. Estimates of recharge and base flow were compared for an 8-year period (1994-2001) coinciding with operation of the gravity lysimeters at an experimental recharge site (Masser Recharge Site) and a longer 34-year period (1968-2001), for which climate and streamflow data were available on a 2.8-square-mile watershed (WE-38 watershed). Estimates of mean-annual recharge at the Masser Recharge Site and WE-38 watershed for 1994-2001 ranged from 9.9 to 14.0 inches (24 to 33 percent of precipitation). Recharge, in inches, from the various methods was: unsaturated-zone drainage, 12.2; daily water balance, 12.3; Rorabaugh equations with PULSE, 10.2, or RORA, 14.0; and water-table fluctuations, 9.9. Mean-annual base flow from streamflow-hydrograph separation ranged from 9.0 to 11.6 inches (21-28 percent of precipitation). Base flow, in inches, from the various methods was: PART, 10.7; HYSEP Local Minimum, 9.0; HYSEP Sliding Interval, 11.5; and HYSEP Fixed Interval, 11.6. Estimating recharge from multiple methods is useful, but the inherent differences of the methods must be considered when comparing results. For example, although unsaturated-zone drainage from the gravity lysimeters provided the most direct measure of potential recharge, it does not incorporate spatial variability that is contained in watershed-wide estimates of net recharge from the Rorabaugh equations or base flow from streamflow-hydrograph separation. This study showed that water-level fluctuations, in particular, should be used with caution to estimate recharge in low-storage fractured-rock aquifers because of the variability of water-level response among wells and sensitivity of recharge to small errors in estimating specific yield. To bracket the largest range of plausible recharge, results from this study indicate that recharge derived from RORA should be compared with base flow from the Local-Minimum version of HYSEP.
Lithology-dependent minimum horizontal stress and in-situ stress estimate
NASA Astrophysics Data System (ADS)
Zhang, Yushuai; Zhang, Jincai
2017-04-01
Based on the generalized Hooke's law with coupling stresses and pore pressure, the minimum horizontal stress is solved with assumption that the vertical, minimum and maximum horizontal stresses are in equilibrium in the subsurface formations. From this derivation, we find that the uniaxial strain method is the minimum value or lower bound of the minimum stress. Using Anderson's faulting theory and this lower bound of the minimum horizontal stress, the coefficient of friction of the fault is derived. It shows that the coefficient of friction may have a much smaller value than what it is commonly assumed (e.g., μf = 0.6-0.7) for in-situ stress estimate. Using the derived coefficient of friction, an improved stress polygon is drawn, which can reduce the uncertainty of in-situ stress calculation by narrowing the area of the conventional stress polygon. It also shows that the coefficient of friction of the fault is dependent on lithology. For example, if the formation in the fault is composed of weak shales, then the coefficient of friction of the fault may be small (as low as μf = 0.2). This implies that this fault is weaker and more likely to have shear failures than the fault composed of sandstones. To avoid the weak fault from shear sliding, it needs to have a higher minimum stress and a lower shear stress. That is, the critically stressed weak fault maintains a higher minimum stress, which explains why a low shear stress appears in the frictionally weak fault.
Minimum separation distances for natural gas pipeline and boilers in the 300 area, Hanford Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daling, P.M.; Graham, T.M.
1997-08-01
The U.S. Department of Energy (DOE) is proposing actions to reduce energy expenditures and improve energy system reliability at the 300 Area of the Hanford Site. These actions include replacing the centralized heating system with heating units for individual buildings or groups of buildings, constructing a new natural gas distribution system to provide a fuel source for many of these units, and constructing a central control building to operate and maintain the system. The individual heating units will include steam boilers that are to be housed in individual annex buildings located at some distance away from nearby 300 Area nuclearmore » facilities. This analysis develops the basis for siting the package boilers and natural gas distribution systems to be used to supply steam to 300 Area nuclear facilities. The effects of four potential fire and explosion scenarios involving the boiler and natural gas pipeline were quantified to determine minimum separation distances that would reduce the risks to nearby nuclear facilities. The resulting minimum separation distances are shown in Table ES.1.« less
The C(4) plant lineages of planet Earth.
Sage, Rowan F; Christin, Pascal-Antoine; Edwards, Erika J
2011-05-01
Using isotopic screens, phylogenetic assessments, and 45 years of physiological data, it is now possible to identify most of the evolutionary lineages expressing the C(4) photosynthetic pathway. Here, 62 recognizable lineages of C(4) photosynthesis are listed. Thirty-six lineages (60%) occur in the eudicots. Monocots account for 26 lineages, with a minimum of 18 lineages being present in the grass family and six in the sedge family. Species exhibiting the C(3)-C(4) intermediate type of photosynthesis correspond to 21 lineages. Of these, 9 are not immediately associated with any C(4) lineage, indicating that they did not share common C(3)-C(4) ancestors with C(4) species and are instead an independent line. The geographic centre of origin for 47 of the lineages could be estimated. These centres tend to cluster in areas corresponding to what are now arid to semi-arid regions of southwestern North America, south-central South America, central Asia, northeastern and southern Africa, and inland Australia. With 62 independent lineages, C(4) photosynthesis has to be considered one of the most convergent of the complex evolutionary phenomena on planet Earth, and is thus an outstanding system to study the mechanisms of evolutionary adaptation.
Haxel, Joseph H; Dziak, Robert P; Matsumoto, Haru
2013-05-01
A year-long experiment (March 2010 to April 2011) measuring ambient sound at a shallow water site (50 m) on the central OR coast near the Port of Newport provides important baseline information for comparisons with future measurements associated with resource development along the inner continental shelf of the Pacific Northwest. Ambient levels in frequencies affected by surf-generated noise (f < 100 Hz) characterize the site as a high-energy end member within the spectrum of shallow water coastal areas influenced by breaking waves. Dominant sound sources include locally generated ship noise (66% of total hours contain local ship noise), breaking surf, wind induced wave breaking and baleen whale vocalizations. Additionally, an increase in spectral levels for frequencies ranging from 35 to 100 Hz is attributed to noise radiated from distant commercial ship commerce. One-second root mean square (rms) sound pressure level (SPLrms) estimates calculated across the 10-840 Hz frequency band for the entire year long deployment show minimum, mean, and maximum values of 84 dB, 101 dB, and 152 dB re 1 μPa.
Revisiting control establishments for emerging energy hubs
NASA Astrophysics Data System (ADS)
Nasirian, Vahidreza
Emerging small-scale energy systems, i.e., microgrids and smartgrids, rely on centralized controllers for voltage regulation, load sharing, and economic dispatch. However, the central controller is a single-point-of-failure in such a design as either the controller or attached communication links failure can render the entire system inoperable. This work seeks for alternative distributed control structures to improve system reliability and help to the scalability of the system. A cooperative distributed controller is proposed that uses a noise-resilient voltage estimator and handles global voltage regulation and load sharing across a DC microgrid. Distributed adaptive droop control is also investigated as an alternative solution. A droop-free distributed control is offered to handle voltage/frequency regulation and load sharing in AC systems. This solution does not require frequency measurement and, thus, features a fast frequency regulation. Distributed economic dispatch is also studied, where a distributed protocol is designed that controls generation units to merge their incremental costs into a consensus and, thus, push the entire system to generate with the minimum cost. Experimental verifications and Hardware-in-the-Loop (HIL) simulations are used to study efficacy of the proposed control protocols.
An estimate of equatorial wave energy flux at 9- to 90-day periods in the Central Pacific
NASA Technical Reports Server (NTRS)
Eriksen, Charles C.; Richman, James G.
1988-01-01
Deep fluctuations in current along the equator in the Central Pacific are dominated by coherent structures which correspond closely to narrow-band propagating equatorial waves. Currents were measured roughly at 1500 and 3000 m depths at five moorings between 144 and 148 deg W from January 1981 to March 1983, as part of the Pacific Equatorial Ocean Dynamics program. In each frequency band resolved, a single complex empirical orthogonal function accounts for half to three quarters of the observed variance in either zonal or meridional current. Dispersion for equatorial first meridional Rossby and Rossby gravity waves is consistent with the observed vertical-zonal coherence structure. The observations indicate that energy flux is westward and downward in long first meridional mode Rossby waves at periods 45 days and longer, and eastward and downward in short first meridional mode Rossby waves and Rossby-gravity waves at periods 30 days and shorter. A local minimum in energy flux occurs at periods corresponding to a maximum in upper-ocean meridional current energy contributed by tropical instability waves. Total vertical flux across the 9- to 90-day period range is 2.5 kW/m.
Long, Haiming; Zhang, Ji; Tang, Nengyu
2017-01-01
This study considers the effect of an industry's network topology on its systemic risk contribution to the stock market using data from the CSI 300 two-tier industry indices from the Chinese stock market. We first measure industry's conditional-value-at-risk (CoVaR) and the systemic risk contribution (ΔCoVaR) using the fitted time-varying t-copula function. The network of the stock industry is established based on dynamic conditional correlations with the minimum spanning tree. Then, we investigate the connection characteristics and topology of the network. Finally, we utilize seemingly unrelated regression estimation (SUR) of panel data to analyze the relationship between network topology of the stock industry and the industry's systemic risk contribution. The results show that the systemic risk contribution of small-scale industries such as real estate, food and beverage, software services, and durable goods and clothing, is higher than that of large-scale industries, such as banking, insurance and energy. Industries with large betweenness centrality, closeness centrality, and clustering coefficient and small node occupancy layer are associated with greater systemic risk contribution. In addition, further analysis using a threshold model confirms that the results are robust.
NASA Technical Reports Server (NTRS)
1973-01-01
Analyses and design studies were conducted on the technical and economic feasibility of installing the JT8D-109 refan engine on the DC-9 aircraft. Design criteria included minimum change to the airframe to achieve desired acoustic levels. Several acoustic configurations were studied with two selected for detailed investigations. The minimum selected acoustic treatment configuration results in an estimated aircraft weight increase of 608 kg (1,342 lb) and the maximum selected acoustic treatment configuration results in an estimated aircraft weight increase of 809 kg (1,784 lb). The range loss for the minimum and maximum selected acoustic treatment configurations based on long range cruise at 10 668 m (35,000 ft) altitude with a typical payload of 6 804 kg (15,000 lb) amounts to 54 km (86 n. mi.) respectively. Estimated reduction in EPNL's for minimum selected treatment show 8 EPNdB at approach, 12 EPNdB for takeoff with power cutback, 15 EPNdB for takeoff without power cutback and 12 EPNdB for sideline using FAR Part 36. Little difference was estimated in EPNL between minimum and maximum treatments due to reduced performance of maximum treatment. No major technical problems were encountered in the study. The refan concept for the DC-9 appears technically feasible and economically viable at approximately $1,000,000 per airplane. An additional study of the installation of JT3D-9 refan engine on the DC-8-50/61 and DC-8-62/63 aircraft is included. Three levels of acoustic treatment were suggested for DC-8-50/61 and two levels for DC-8-62/63. Results indicate the DC-8 technically can be retrofitted with refan engines for approximately $2,500,000 per airplane.
Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z
2018-05-15
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
Binoculars with mil scale as a training aid for estimating form class
H.W. Camp, J.R.; C.A. Bickford
1949-01-01
In an extensive forest inventory, estimates involving personal judgment cannot be eliminated. However, every means should be taken to keep these estimates to a minimum and to provide on-the-job training that is adequate for obtaining the best estimates possible.
NASA Technical Reports Server (NTRS)
Emmons, T. E.
1976-01-01
The results are presented of an investigation of the factors which affect the determination of Spacelab (S/L) minimum interface main dc voltage and available power from the orbiter. The dedicated fuel cell mode of powering the S/L is examined along with the minimum S/L interface voltage and available power using the predicted fuel cell power plant performance curves. The values obtained are slightly lower than current estimates and represent a more marginal operating condition than previously estimated.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Introduction to the special issue on the changing Mojave Desert
Berry, Kristin H.; Murphy, R.W.; Mack, Jeremy S.; Quillman, W.
2006-01-01
The Mojave Desert, which lies between the Great Basin Desert in the north and the Sonoran Desert in the south, covers an estimated 114 478–130 464 km2 of the south-western United States and includes parts of the states of Nevada, Utah, Arizona, and California, with the amount of land mass dependent on the definition (Fig. 1; Rowlands et al., 1982; McNab and Avers, 1994; Bailey, 1995; Groves et al., 2000). This desert is sufficiently diverse to be subdivided into five regions: northern, south-western, central, south-central, and eastern (Rowlands et al., 1982). It is a land of extremes both in topography and climate. Elevations range from below sea level at Death Valley National Park to 3633 m on Mt. Charleston in the Spring Range of Nevada. Temperatures exhibit similar extreme ranges with mean minimum January temperatures of −2.4 °C in Beatty, Nevada and mean maximum July temperatures of 47 °C in Death Valley. Mean annual precipitation varies throughout the regions (42–350 mm), is highest on mountain tops, but overall is low (Rowlands et al., 1982; Rowlands, 1995a). The distribution of precipitation varies from west to east and north to south, with >85% of rain falling in winter in the northern, south-western and south-central regions. In contrast, the central and eastern regions receive a substantial amount of precipitation in both winter and summer. The variability in topographic and climatic features contributes to regional differences in vegetation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, E.R.
1983-09-01
The preliminary design of a solar central receiver repowered gas/oil fired steam-Rankine cycle electric power generation plant was completed. The design is based on a central receiver technology using molten salt (60% NaNO/sub 3/, 40% KNO/sub 3/, by weight) for the heat transport and thermal storage fluid. Unit One of APS's Saguaro power plant located 43 km (27 mi) northwest of Tucson, AZ, is to be repowered. The selection of both the site and the molten salt central receiver promotes a near-term feasibility demonstration and cost-effective power production from an advanced solar thermal technology. The recommended system concept is tomore » repower the existing electric power generating system at the minimum useful level (66 MW/sub e/ gross) using a field of 4850 Martin Marietta second-generation (58.5 m/sup 2/) heliostats and a storage capacity of 4.0 hours. The storage capacity will be used to optimize dispatch of power to the utility system. The preliminary design was based on the use of the systems approach to design where the overall project was divided into systems, each of which is clearly bounded, and performs specific functions. The total project construction cost was estimated to be 213 million in 1983 dollars. The plant will be capable of displacing fossil energy equivalent to 2.4 million barrels of No. 6 oil in its first 10 years of operation.« less
The Einstein-Hilbert gravitation with minimum length
NASA Astrophysics Data System (ADS)
Louzada, H. L. C.
2018-05-01
We study the Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to the minimum length, with the intention to find and estimate the corrections in this theory, clarifying whether or not it is possible to obtain, by means of the minimum length, a theory, in D=4, which is causal, unitary and provides a massive graviton. Therefore, we will calculate and analyze the dispersion relationships of the considered theory.
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$ 0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
Groundwater-level trends in the U.S. glacial aquifer system, 1964-2013
Hodgkins, Glenn A.; Dudley, Robert W.; Nielsen, Martha G.; Renard, Benjamin; Qi, Sharon L.
2017-01-01
The glacial aquifer system in the United States is a major source of water supply but previous work on historical groundwater trends across the system is lacking. Trends in annual minimum, mean, and maximum groundwater levels for 205 monitoring wells were analyzed across three regions of the system (East, Central, West Central) for four time periods: 1964-2013, 1974-2013, 1984-2013, and 1994-2013. Trends were computed separately for wells in the glacial aquifer system with low potential for human influence on groundwater levels and ones with high potential influence from activities such as groundwater pumping. Generally there were more wells with significantly increasing groundwater levels (levels closer to ground surface) than wells with significantly decreasing levels. The highest numbers of significant increases for all four time periods were with annual minimum and/or mean levels. There were many more wells with significant increases from 1964 to 2013 than from more recent periods, consistent with low precipitation in the 1960s. Overall there were low numbers of wells with significantly decreasing trends regardless of time period considered; the highest number of these were generally for annual minimum groundwater levels at wells with likely human influence. There were substantial differences in the number of wells with significant groundwater-level trends over time, depending on whether the historical time series are assumed to be independent, have short-term persistence, or have long-term persistence. Mean annual groundwater levels have significant lag-one-year autocorrelation at 26.0% of wells in the East region, 65.4% of wells in the Central region, and 100% of wells in the West Central region. Annual precipitation across the glacial aquifer system, on the other hand, has significant autocorrelation at only 5.5% of stations, about the percentage expected due to chance.
Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium
Raymond L. Czaplewski
1991-01-01
The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines 2 prior estimates of population parameter with a weighted average where the scalar weight is inversely proportional to the variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
NASA Astrophysics Data System (ADS)
Subha Anand, S.; Rengarajan, R.; Sarma, V. V. S. S.; Sudheer, A. K.; Bhushan, R.; Singh, S. K.
2017-05-01
The northern Indian Ocean is globally significant for its seasonally reversing winds, upwelled nutrients, high biological production, and expanding oxygen minimum zones. The region acts as sink and source for atmospheric CO2. However, the efficiency of the biological carbon pump to sequester atmospheric CO2 and export particulate organic carbon from the surface is not well known. To quantify the upper ocean carbon export flux and to estimate the efficiency of biological carbon pump in the Bay of Bengal and the Indian Ocean, seawater profiles of total 234Th were measured from surface to 300 m depth at 13 stations from 19.9°N to 25.3°S in a transect along 87°E, during spring intermonsoon period (March-April 2014). Results showed enhanced in situ primary production in the equatorial Indian Ocean and the central Bay of Bengal and varied from 13.2 to 173.8 mmol C m-2 d-1. POC export flux in this region varied from 0 to 7.7 mmol C m-2 d-1. Though high carbon export flux was found in the equatorial region, remineralization of organic carbon in the surface and subsurface waters considerably reduced organic carbon export in the Bay of Bengal. Annually recurring anticyclonic eddies enhanced organic carbon utilization and heterotrophy. Oxygen minimum zone developed due to stratification and poor ventilation was intensified by subsurface remineralization. 234Th-based carbon export fluxes were not comparable with empirical statistical model estimates based on primary production and temperature. Region-specific refinement of model parameters is required to accurately predict POC export fluxes.
Remote Monitoring of Groundwater Overdraft Using GRACE and InSAR
NASA Astrophysics Data System (ADS)
Scher, C.; Saah, D.
2017-12-01
Gravity Recovery and Climate Experiment (GRACE) data paired with radar-derived analyses of volumetric changes in aquifer storage capacity present a viable technique for remote monitoring of aquifer depletion. Interferometric Synthetic Aperture Radar (InSAR) analyses of ground level subsidence can account for a significant portion of mass loss observed in GRACE data and provide information on point-sources of overdraft. This study summed one water-year of GRACE monthly mass change grids and delineated regions with negative water storage anomalies for further InSAR analyses. Magnitude of water-storage anomalies observed by GRACE were compared to InSAR-derived minimum volumetric changes in aquifer storage capacity as a result of measurable compaction at the surface. Four major aquifers were selected within regions where GRACE observed a net decrease in water storage (Central Valley, California; Mekong Delta, Vietnam; West Bank, occupied Palestinian Territory; and the Indus Basin, South Asia). Interferogram imagery of the extent and magnitude of subsidence within study regions provided estimates for net minimum volume of groundwater extracted between image acquisitions. These volumetric estimates were compared to GRACE mass change grids to resolve a percent contribution of mass change observed by GRACE likely due to groundwater overdraft. Interferograms revealed characteristic cones of depression within regions of net mass loss observed by GRACE, suggesting point-source locations of groundwater overdraft and demonstrating forensic potential for the use of InSAR and GRACE data in remote monitoring of aquifer depletion. Paired GRACE and InSAR analyses offer a technique to increase the spatial and temporal resolution of remote applications for monitoring groundwater overdraft in addition to providing a novel parameter - measurable vertical deformation at the surface - to global groundwater models.
White-tailed deer migration and its role in wolf predation
Hoskinson, R.L.; Mech, L.D.
1976-01-01
Seventeen white-tailed deer (Odocoileus virginianus) were radio-tagged in winter yards and tracked for up to 17 months each (881 locations) from January 1973 through August 1974 in the central Superior National Forest of NE Minnesota following a drastic decline in deer numbers. Ten vyolves (Canis lupus) from 7 packs in the same area were radiotracked before and/or during the same period (703 locations). Deer had winter ranges averaging 26.4 ha. Spring migration took place from 26 March to 23 April and was related to loss of snow cover. Deer generally migrated ENE in straight-line distances of 10.0 to 38.0 km to summer ranges. Two fawns did not migrate. Arrival on summer ranges was between 19 April and 18 May, and summer ranges varied from 48.1 to 410.4 ha. Migration back to the same winter yards took place in early December, coincident with snow accumulation and low temperatures. Social grouping appeared strongest during migration and winter yarding. Survival of the radio-tagged deer was studied through 1 May 1975. Four deer were killed by wolves, one was poached, and one drowned. Mean age of the captured deer was 5.4 years and estimated minimum survival after capture was 2.6 years, giving an estimated total minimum survival of 8.0 years. This unusually high survival rate appeared to be related to the fact that both winter and summer ranges of these deer were situated along wolf-pack territory edges rather than in centers. In addition, most summer ranges of the radio-tagged deer were along major waterways where the deer could escape wolves.
Spectral factorization of wavefields and wave operators
NASA Astrophysics Data System (ADS)
Rickett, James Edward
Spectral factorization is the problem of finding a minimum-phase function with a given power spectrum. Minimum phase functions have the property that they are causal with a causal (stable) inverse. In this thesis, I factor multidimensional systems into their minimum-phase components. Helical boundary conditions resolve any ambiguities over causality, allowing me to factor multi-dimensional systems with conventional one-dimensional spectral factorization algorithms. In the first part, I factor passive seismic wavefields recorded in two-dimensional spatial arrays. The result provides an estimate of the acoustic impulse response of the medium that has higher bandwidth than autocorrelation-derived estimates. Also, the function's minimum-phase nature mimics the physics of the system better than the zero-phase autocorrelation model. I demonstrate this on helioseismic data recorded by the satellite-based Michelson Doppler Imager (MDI) instrument, and shallow seismic data recorded at Long Beach, California. In the second part of this thesis, I take advantage of the stable-inverse property of minimum-phase functions to solve wave-equation partial differential equations. By factoring multi-dimensional finite-difference stencils into minimum-phase components, I can invert them efficiently, facilitating rapid implicit extrapolation without the azimuthal anisotropy that is observed with splitting approximations. The final part of this thesis describes how to calculate diagonal weighting functions that approximate the combined operation of seismic modeling and migration. These weighting functions capture the effects of irregular subsurface illumination, which can be the result of either the surface-recording geometry, or focusing and defocusing of the seismic wavefield as it propagates through the earth. Since they are diagonal, they can be easily both factored and inverted to compensate for uneven subsurface illumination in migrated images. Experimental results show that applying these weighting functions after migration leads to significantly improved estimates of seismic reflectivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality, and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequencemore » (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.« less
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; ...
2017-08-08
Here, we present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a MetagenomeAssembled Genome (MIMAG), including, but not limited to, assembly quality, and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Genemore » Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
Here, we present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a MetagenomeAssembled Genome (MIMAG), including, but not limited to, assembly quality, and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Genemore » Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.« less
Local amplification of storm surge by Super Typhoon Haiyan in Leyte Gulf.
Mori, Nobuhito; Kato, Masaya; Kim, Sooyoul; Mase, Hajime; Shibutani, Yoko; Takemi, Tetsuya; Tsuboki, Kazuhisa; Yasuda, Tomohiro
2014-07-28
Typhoon Haiyan, which struck the Philippines in November 2013, was an extremely intense tropical cyclone that had a catastrophic impact. The minimum central pressure of Typhoon Haiyan was 895 hPa, making it the strongest typhoon to make landfall on a major island in the western North Pacific Ocean. The characteristics of Typhoon Haiyan and its related storm surge are estimated by numerical experiments using numerical weather prediction models and a storm surge model. Based on the analysis of best hindcast results, the storm surge level was 5-6 m and local amplification of water surface elevation due to seiche was found to be significant inside Leyte Gulf. The numerical experiments show the coherent structure of the storm surge profile due to the specific bathymetry of Leyte Gulf and the Philippines Trench as a major contributor to the disaster in Tacloban. The numerical results also indicated the sensitivity of storm surge forecast.
RTOS kernel in portable electrocardiograph
NASA Astrophysics Data System (ADS)
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
Energy Corner: Heat Reclamation Rescues Wasted Heat.
ERIC Educational Resources Information Center
Daugherty, Thomas
1982-01-01
Heat reclamation systems added to pre-existing central heating systems provide maximum savings at minimum cost. The benefits of a particular appliance marketed under the brand name "Energizer" are discussed. (Author/MLF)
Robust linear discriminant models to solve financial crisis in banking sectors
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni
2014-12-01
Linear discriminant analysis (LDA) is a widely-used technique in patterns classification via an equation which will minimize the probability of misclassifying cases into their respective categories. However, the performance of classical estimators in LDA highly depends on the assumptions of normality and homoscedasticity. Several robust estimators in LDA such as Minimum Covariance Determinant (MCD), S-estimators and Minimum Volume Ellipsoid (MVE) are addressed by many authors to alleviate the problem of non-robustness of the classical estimates. In this paper, we investigate on the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. Hit ratio is used to validate the accuracy predictive of LDA models. The performance of LDA is evaluated by estimating the misclassification rate via apparent error rate. The results and comparisons show that the robust estimators provide a better performance than the classical estimators for LDA.
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
The minimum follow-up required for radial head arthroplasty: a meta-analysis.
Laumonerie, P; Reina, N; Kerezoudis, P; Declaux, S; Tibbo, M E; Bonnevialle, N; Mansat, P
2017-12-01
The primary aim of this study was to define the standard minimum follow-up required to produce a reliable estimate of the rate of re-operation after radial head arthroplasty (RHA). The secondary objective was to define the leading reasons for re-operation. Four electronic databases, between January 2000 and March 2017 were searched. Articles reporting reasons for re-operation (Group I) and results (Group II) after RHA were included. In Group I, a meta-analysis was performed to obtain the standard minimum follow-up, the mean time to re-operation and the reason for failure. In Group II, the minimum follow-up for each study was compared with the standard minimum follow-up. A total of 40 studies were analysed: three were Group I and included 80 implants and 37 were Group II and included 1192 implants. In Group I, the mean time to re-operation was 1.37 years (0 to 11.25), the standard minimum follow-up was 3.25 years; painful loosening was the main indication for re-operation. In Group II, 33 Group II articles (89.2%) reported a minimum follow-up of < 3.25 years. The literature does not provide a reliable estimate of the rate of re-operation after RHA. The reproducibility of results would be improved by using a minimum follow-up of three years combined with a consensus of the definition of the reasons for failure after RHA. Cite this article: Bone Joint J 2017;99-B:1561-70. ©2017 The British Editorial Society of Bone & Joint Surgery.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina
2010-01-01
Surface air temperature is a critical variable to describe the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. It is a very important variable in agricultural applications and climate change studies. This is a preliminary study to examine statistical relationships between ground meteorological station measured surface daily maximum/minimum air temperature and satellite remotely sensed land surface temperature from MODIS over the dry and semiarid regions of northern China. Studies were conducted for both MODIS-Terra and MODIS-Aqua by using year 2009 data. Results indicate that the relationships between surface air temperature and remotely sensed land surface temperature are statistically significant. The relationships between the maximum air temperature and daytime land surface temperature depends significantly on land surface types and vegetation index, but the minimum air temperature and nighttime land surface temperature has little dependence on the surface conditions. Based on linear regression relationship between surface air temperature and MODIS land surface temperature, surface maximum and minimum air temperatures are estimated from 1km MODIS land surface temperature under clear sky conditions. The statistical errors (sigma) of the estimated daily maximum (minimum) air temperature is about 3.8 C(3.7 C).
Low Streamflow Forcasting using Minimum Relative Entropy
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study, and applied to forecast streamflow time series. Proposed method extends the autocorrelation in the manner that the relative entropy of underlying process is minimized so that time series data can be forecasted. Different prior estimation, such as uniform, exponential and Gaussian assumption, is taken to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from Colorado River (Texas) under draught condition is successfully forecasted using proposed method. Minimum relative entropy determines spectral of low streamflow series with higher resolution than conventional method. Forecasted streamflow is compared to the prediction using Burg's maximum entropy spectral analysis (MESA) and Configurational entropy. The advantage and disadvantage of each method in forecasting low streamflow is discussed.
ERIC Educational Resources Information Center
Cobb, P. G. W.
1973-01-01
Summarizes the type of work carried out by forensic chemists and the minimum qualification needed for appointment. Indicates that there are eight Home Office regional forensic science laboratories in addition to the Central Research Establishment at Aldermaston. (CC)
Meik, Jesse M; Makowsky, Robert
2018-01-01
We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km 2 ), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km 2 ) was over five times greater than it was for supporting two or more species of colubrids (6.7 km 2 ). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.
Cadena, Brian C
2014-03-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.
The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.
Bullinger, Lindsey Rose
2017-03-01
To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
Yap, Timothy E; Archer, Timothy J; Gobbe, Marine; Reinstein, Dan Z
2016-02-01
To compare corneal thickness measurements between three imaging systems. In this retrospective study of 81 virgin and 58 post-laser refractive surgery corneas, central and minimum corneal thickness were measured using optical coherence tomography (OCT), very high-frequency digital ultrasound (VHF digital ultrasound), and a Scheimpflug imaging system. Agreement between methods was analyzed using mean differences (bias) (OCT - VHF digital ultrasound, OCT - Scheimpflug, VHF digital ultrasound - Scheimpflug) and Bland-Altman analysis with 95% limits of agreement (LoA). Virgin cornea mean central corneal thickness was 508.3 ± 33.2 µm (range: 434 to 588 µm) for OCT, 512.7 ± 32.2 µm (range: 440 to 587 µm) for VHF digital ultrasound, and 530.2 ± 32.6 µm (range: 463 to 612 µm) for Scheimpflug imaging. OCT and VHF digital ultrasound showed the closest agreement with a bias of -4.37 µm, 95% LoA ±12.6 µm. Least agreement was between OCT and Scheimpflug imaging with a bias of -21.9 µm, 95% LoA ±20.7 µm. Bias between VHF digital ultrasound and Scheimpflug imaging was -17.5 µm, 95% LoA ±19.0 µm. In post-laser refractive surgery corneas, mean central corneal thickness was 417.9 ± 47.1 µm (range: 342 to 557 µm) for OCT, 426.3 ± 47.1 µm (range: 363 to 563 µm) for VHF digital ultrasound, and 437.0 ± 48.5 µm (range: 359 to 571 µm) for Scheimpflug imaging. Closest agreement was between OCT and VHF digital ultrasound with a bias of -8.45 µm, 95% LoA ±13.2 µm. Least agreement was between OCT and Scheimpflug imaging with a bias of -19.2 µm, 95% LoA ±19.2 µm. Bias between VHF digital ultrasound and Scheimpflug imaging was -10.7 µm, 95% LoA ±20.0 µm. No relationship was observed between difference in central corneal thickness measurements and mean central corneal thickness. Results were similar for minimum corneal thickness. Central and minimum corneal thickness was measured thinnest by OCT and thickest by Scheimpflug imaging in both groups. A clinically significant bias existed between Scheimpflug imaging and the other two modalities. Copyright 2016, SLACK Incorporated.
Geology and assessment of undiscovered oil and gas resources of the Yukon Flats Basin Province, 2008
Bird, Kenneth J.; Stanley, Richard G.; Moore, Thomas E.; Gautier, Donald L.
2017-12-22
The hydrocarbon potential of the Yukon Flats Basin Province in Central Alaska was assessed in 2004 as part of an update to the National Oil and Gas Assessment. Three assessment units (AUs) were identified and assessed using a methodology somewhat different than that of the 2008 Circum-Arctic Resource Appraisal (CARA). An important difference in the methodology of the two assessments is that the 2004 assessment specified a minimum accumulation size of 0.5 million barrels of oil equivalent (MMBOE), whereas the 2008 CARA assessment specified a minimum size of 50 MMBOE. The 2004 assessment concluded that >95 percent of the estimated mean undiscovered oil and gas resources occur in a single AU, the Tertiary Sandstone AU. This is also the only AU of the three that extends north of the Arctic Circle.For the CARA project, the number of oil and gas accumulations in the 2004 assessment of the Tertiary Sandstone AU was re-evaluated in terms of the >50-MMBOE minimum accumulation size. By this analysis, and assuming the resource to be evenly distributed across the AU, 0.23 oil fields and 1.20 gas fields larger than 50 MMBOE are expected in the part of the AU north of the Arctic Circle. The geology suggests, however, that the area north of the Arctic Circle has a lower potential for oil and gas accumulations than the area to the south where the sedimentary section is thicker, larger volumes of hydrocarbons may have been generated, and potential structural traps are probably more abundant. Because of the low potential implied for the area of the AU north of the Arctic Circle, the Yukon Flats Tertiary Sandstone AU was not quantitatively assessed for the 2008 CARA.
Updating estimates of low streamflow statistics to account for possible trends
NASA Astrophysics Data System (ADS)
Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.
2017-12-01
Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of low flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10) calculated as the annual minimum seven-day streamflow which is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation were found to improve accuracy and reduce bias. Benefits of data subset selection approaches were greater for higher magnitude trends annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, using only a subset of the flow record ( 30 years) can be used to update 7Q10 estimators to better reflect current streamflow conditions.
NASA Astrophysics Data System (ADS)
Engel, Zbyněk; Mentlík, Pavel; Braucher, Régis; Křížek, Marek; Pluháčková, Markéta; Arnold, Maurice; Aumaître, Georges; Bourlès, Didier; Keddadouche, Karim; Aster Team; Arnold, Maurice; Aumaître, Georges; Bourlès, Didier; Keddadouche, Karim
2017-09-01
10Be exposure ages from moraines and bedrock sites in the Roháčská Valley provide chronology of the last glaciation in the largest valley of the Western Tatra Mts., the Western Carpathians. The minimum apparent exposure age of 19.4 ± 2.1 ka obtained for the oldest sampled boulder and the mean age of 18.0 ± 0.8 ka calculated for the terminal moraine indicate that the oldest preserved moraine was probably deposited at the time of the global Last Glacial Maximum (LGM). The age of this moraine coincides with the termination of the maximum glacier expansion in other central European ranges, including the adjacent High Tatra Mts. and the Alps. The equilibrium line altitude (ELA) of the LGM glacier in the Roháčská Valley, estimated at 1400-1410 m a.s.l., was 50-80 m lower than in the eastern part of the range, indicating a positive ELA gradient from west to east among the north-facing glaciers in the Tatra Mts. Lateglacial glacier expansion occurred no later than 13.4 ± 0.5 ka and 11.9 ± 0.5 ka, as indicated by the mean exposure ages calculated for re-advance moraines. This timing is consistent with the exposure age chronology of the last Lateglacial re-advance in the High Tatra Mts., Alps and lower mountain ranges in central Europe. The ELA in the Roháčská Valley estimated at 1690-1770 m a.s.l. in this period was located 130-300 m lower than in the north-facing valleys in the High Tatra Mts. 10Be exposure ages obtained for a rock glacier constrains the timing of this landform stabilization in the Salatínska Valley and provides the first chronological evidence for the Lateglacial activity of rock glaciers in the Carpathians.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyson, Jon
2009-06-15
Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment.
ERIC Educational Resources Information Center
Brown, Charles; And Others
1983-01-01
The study finds that a 10 percent increase in the federal minimum wage (or the coverage rate) would reduce teenage (16-19) employment by about one percent, which is at the lower end of the range of estimates from previous studies. (Author/SSH)
Swain, Eric D.; Gomez-Fragoso, Julieta; Torres-Gonzalez, Sigfredo
2017-01-01
Lago Loíza reservoir in east-central Puerto Rico is one of the primary sources of public water supply for the San Juan metropolitan area. To evaluate and predict the Lago Loíza water budget, an artificial neural network (ANN) technique is trained to predict river inflows. A method is developed to combine ANN-predicted daily flows with ANN-predicted 30-day cumulative flows to improve flow estimates. The ANN application trains well for representing 2007–2012 and the drier 1994–1997 periods. Rainfall data downscaled from global circulation model (GCM) simulations are used to predict 2050–2055 conditions. Evapotranspiration is estimated with the Hargreaves equation using minimum and maximum air temperatures from the downscaled GCM data. These simulated 2050–2055 river flows are input to a water budget formulation for the Lago Loíza reservoir for comparison with 2007–2012. The ANN scenarios require far less computational effort than a numerical model application, yet produce results with sufficient accuracy to evaluate and compare hydrologic scenarios. This hydrologic tool will be useful for future evaluations of the Lago Loíza reservoir and water supply to the San Juan metropolitan area.
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of mixing proportions P sub 1, P sub 2,...P sub m in the mixture density f(x) = the sum of the series P sub i F sub i(X) with i = 1 to M is often encountered in agricultural remote sensing problems in which case the p sub i's usually represent crop proportions. In these remote sensing applications, component densities f sub i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal based techniques provides satisfactory results.
Estimation of daily minimum land surface air temperature using MODIS data in southern Iran
NASA Astrophysics Data System (ADS)
Didari, Shohreh; Norouzi, Hamidreza; Zand-Parsa, Shahrokh; Khanbilvardi, Reza
2017-11-01
Land surface air temperature (LSAT) is a key variable in agricultural, climatological, hydrological, and environmental studies. Many of their processes are affected by LSAT at about 5 cm from the ground surface (LSAT5cm). Most of the previous studies tried to find statistical models to estimate LSAT at 2 m height (LSAT2m) which is considered as a standardized height, and there is not enough study for LSAT5cm estimation models. Accurate measurements of LSAT5cm are generally acquired from meteorological stations, which are sparse in remote areas. Nonetheless, remote sensing data by providing rather extensive spatial coverage can complement the spatiotemporal shortcomings of meteorological stations. The main objective of this study was to find a statistical model from the previous day to accurately estimate spatial daily minimum LSAT5cm, which is very important in agricultural frost, in Fars province in southern Iran. Land surface temperature (LST) data were obtained using the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard Aqua and Terra satellites at daytime and nighttime periods with normalized difference vegetation index (NDVI) data. These data along with geometric temperature and elevation information were used in a stepwise linear model to estimate minimum LSAT5cm during 2003-2011. The results revealed that utilization of MODIS Aqua nighttime data of previous day provides the most applicable and accurate model. According to the validation results, the accuracy of the proposed model was suitable during 2012 (root mean square difference ( RMSD) = 3.07 °C, {R}_{adj}^2 = 87 %). The model underestimated (overestimated) high (low) minimum LSAT5cm. The accuracy of estimation in the winter time was found to be lower than the other seasons ( RMSD = 3.55 °C), and in summer and winter, the errors were larger than in the remaining seasons.
Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume
2012-01-01
We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance in minutes by car to the municipality where the inhabitants go the most frequently to get services (called MFM). For each set corresponding to a range of time distance to MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant that we interpret as an estimation of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar sizes, when the distance to the MFM increases, the number of jobs of proximity services per inhabitant increases.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage*
Cadena, Brian C.
2014-01-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288
Evidence for Ultra-Fast Outflows in Radio-Quiet AGNs: III - Location and Energetics
NASA Technical Reports Server (NTRS)
Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.
2012-01-01
Using the results of a previous X-ray photo-ionization modelling of blue-shifted Fe K absorption lines on a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval approx.0.0003-0.03pc (approx.10(exp 2)-10(exp 4)tau(sub s) from the central black hole, consistent with what is expected for accretion disk winds/outflows. The mass outflow rates are constrained between approx.0.01- 1 Stellar Mass/y, corresponding to approx. or >5-10% of the accretion rates. The average lower-upper limits on the mechanical power are logE(sub K) approx. or = 42.6-44.6 erg/s. However, the minimum possible value of the ratio between the mechanical power and bolometric luminosity is constrained to be comparable or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable to provide a significant contribution to the AGN r.osmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyferts galaxies .
Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine
2017-07-01
Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
Heimes, F.J.; Ferrigno, C.F.; Gutentag, E.D.; Lucky, R.R.; Stephens, D.M.; Weeks, J.B.
1987-01-01
The relation between pumpage and change in storage was evaluated for most of a three-county area in southwestern Nebraska from 1975 through 1983. Initial comparison of the 1975-83 pumpage with change in storage in the study area indicated that the 1 ,042,300 acre-ft of change in storage was only about 30% of the 3,425,000 acre-ft of pumpage. An evaluation of the data used to calculate pumpage and change in storage indicated that there was a relatively large potential for error in estimates of specific yield. As a result, minimum and maximum values of specific yield were estimated and used to recalculate change in storage. Estimates also were derived for the minimum and maximum amounts of recharge that could occur as a result of cultivation practices. The minimum and maximum estimates for specific yield and for recharge from cultivation practices were used to compute a range of values for the potential amount of additional recharge that occurred as a result of irrigation. The minimum and maximum amounts of recharge that could be caused by irrigation in the study area were 953,200 acre-ft (28% of pumpage) and 2,611,200 acre-ft (76% of pumpage), respectively. These values indicate that a substantial percentage of the water pumped from the aquifer is resupplied to storage in the aquifer as a result of a combination of irrigation return flow and enhanced recharge from precipitation that results from cultivation and irrigation practices. (Author 's abstract)
Validation of snow line estimations using MODIS images for the Elqui River basin, Chile
NASA Astrophysics Data System (ADS)
Vasquez, Nicolas; Lagos, Miguel; Vargas, Ximena
2015-04-01
Precipitation events in North-Central Chile are very important because the region has a Mediterranean climate, with a humid period, and an extensive dry one. Separation between solid and liquid precipitation (snow line) in each event is important information that allow to estimate 1) the available snow covered area for snow-melt forecasting, during the dry season (the only resource of water in this period) and 2) the area affected by rain for flood modelling and infrastructure design. In this work, snow line was estimated with a meteorological approach, considering precipitation, temperature, relative humidity and dew point information at a daily scale from 2004 to 2010 and hourly from 2010 to 2013. In both periods, different meteorological stations are considered due to the implementation of new stations in the study area, covering from 1000 to 3000 (m.a.s.l) approximately, with snow and rain meteorological stations. The methodology exposed in this research is based in vertical variation of dew point and temperature due to more stability variations compared to vertical relative humidity behavior. The results calculated from meteorological data are compared with MODIS images, considering three criteria: (1) the median altitude of the minimum specific fractional snow covered area (FSCA), (2) the mean elevation of pixels with a FSCA<10% and (3) the snow line estimation via snow covered area and hypsometric curve. Historically in Chile, snow line has been studied considering few specific precipitation and temperature observations, or estimations of zero isotherms from upper air soundings. A comparison between these estimations and the results validated through MOD10A1/MYD10A1 products was made in order to identify tendencies and/or variations of the snow line at an annually scale.
Proteomic Characterization of Central Pacific Oxygen Minimum Zone Microbial Communities
NASA Astrophysics Data System (ADS)
Saunders, J. K.; McIlvin, M. M.; Moran, D.; Held, N.; Futrelle, J.; Webb, E.; Santoro, A.; Dupont, C.; Saito, M.
2018-05-01
Microbial proteomic profiles are excellent for surveying vast expanses of pelagic ecosystems for links between microbial communities and the biogeochemical cycles they mediate. Data from the ProteOMZ expedition supports the utility of this method.
NASA Astrophysics Data System (ADS)
Bouzaki, Mohammed Moustafa; Chadel, Meriem; Benyoucef, Boumediene; Petit, Pierre; Aillerie, Michel
2016-07-01
This contribution analyzes the energy provided by a solar kit dedicated to autonomous usage and installed in Central Europa (Longitude 6.10°; Latitude 49.21° and Altitude 160 m) by using the simulation software PVSYST. We focused the analysis on the effect of temperature and solar irradiation on the I-V characteristic of a commercial PV panel. We also consider in this study the influence of charging and discharging the battery on the generator efficiency. Meteorological data are integrated into the simulation software. As expected, the solar kit provides an energy varying all along the year with a minimum in December. In the proposed approach, we consider this minimum as the lowest acceptable energy level to satisfy the use. Thus for the other months, a lost in the available renewable energy exists if no storage system is associated.
Relative dynamics and motion control of nanosatellite formation flying
NASA Astrophysics Data System (ADS)
Pimnoo, Ammarin; Hiraki, Koju
2016-04-01
Orbit selection is a necessary factor in nanosatellite formation mission design/meanwhile, to keep the formation, it is necessary to consume fuel. Therefore, the best orbit design for nanosatellite formation flying should be one that requires the minimum fuel consumption. The purpose of this paper is to analyse orbit selection with respect to the minimum fuel consumption, to provide a convenient way to estimate the fuel consumption for keeping nanosatellite formation flying and to present a simplified method of formation control. The formation structure is disturbed by J2 gravitational perturbation and other perturbing accelerations such as atmospheric drag. First, Gauss' Variation Equations (GVE) are used to estimate the essential ΔV due to the J2 perturbation and atmospheric drag. The essential ΔV presents information on which orbit is good with respect to the minimum fuel consumption. Then, the linear equations which account for J2 gravitational perturbation of Schweighart-Sedwick are presented and used to estimate the fuel consumption to maintain the formation structure. Finally, the relative dynamics motion is presented as well as a simplified motion control of formation structure by using GVE.
Stenroos, Matti; Hauk, Olaf
2013-01-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
[Minimum Standards for the Spatial Accessibility of Primary Care: A Systematic Review].
Voigtländer, S; Deiters, T
2015-12-01
Regional disparities of access to primary care are substantial in Germany, especially in terms of spatial accessibility. However, there is no legally or generally binding minimum standard for the spatial accessibility effort that is still acceptable. Our objective is to analyse existing minimum standards, the methods used as well as their empirical basis. A systematic literature review was undertaken of publications regarding minimum standards for the spatial accessibility of primary care based on a title word and keyword search using PubMed, SSCI/Web of Science, EMBASE and Cochrane Library. 8 minimum standards from the USA, Germany and Austria could be identified. All of them specify the acceptable spatial accessibility effort in terms of travel time; almost half include also distance(s). The travel time maximum, which is acceptable, is 30 min and it tends to be lower in urban areas. Primary care is, according to the identified minimum standards, part of the local area (Nahbereich) of so-called central places (Zentrale Orte) providing basic goods and services. The consideration of means of transport, e. g. public transport, is heterogeneous. The standards are based on empirical studies, consultation with service providers, practical experiences, and regional planning/central place theory as well as on legal or political regulations. The identified minimum standards provide important insights into the effort that is still acceptable regarding spatial accessibility, i. e. travel time, distance and means of transport. It seems reasonable to complement the current planning system for outpatient care, which is based on provider-to-population ratios, by a gravity-model method to identify places as well as populations with insufficient spatial accessibility. Due to a lack of a common minimum standard we propose - subject to further discussion - to begin with a threshold based on the spatial accessibility limit of the local area, i. e. 30 min to the next primary care provider for at least 90% of the regional population. The exceeding of the threshold would necessitate a discussion of a health care deficit and in line with this a potential need for intervention, e. g. in terms of alternative forms of health care provision. © Georg Thieme Verlag KG Stuttgart · New York.
Si, Xingfeng; Kays, Roland
2014-01-01
Camera traps is an important wildlife inventory tool for estimating species diversity at a site. Knowing what minimum trapping effort is needed to detect target species is also important to designing efficient studies, considering both the number of camera locations, and survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites or running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40, or long enough to record about 20 independent photographs. Our analysis of evaluating the increasing number of additional camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period. PMID:24868493
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
The size of the irregular migrant population in the European Union – counting the uncountable?
Vogel, Dita; Kovacheva, Vesela; Prescott, Hannah
2011-01-01
It is difficult to estimate the size of the irregular migrant population in a specific city or country, and even more difficult to arrive at estimates at the European level. A review of past attempts at European-level estimates reveals that they rely on rough and outdated rules-of-thumb. In this paper, we present our own European level estimates for 2002, 2005, and 2008. We aggregate country-specific information, aiming at approximate comparability by consistent use of minimum and maximum estimates and by adjusting for obvious differences in definition and timescale. While the aggregated estimates are not considered highly reliable, they do -- for the first time -- provide transparency. The provision of more systematic medium quality estimates is shown to be the most promising way for improvement. The presented estimate indicates a minimum of 1.9 million and a maximum of 3.8 million irregular foreign residents in the 27 member states of the European Union (2008). Unlike rules-of-thumb, the aggregated EU estimates indicate a decline in the number of irregular foreign residents between 2002 and 2008. This decline has been influenced by the EU enlargement and legalisation programmes.
Long, Haiming; Tang, Nengyu
2017-01-01
This study considers the effect of an industry’s network topology on its systemic risk contribution to the stock market using data from the CSI 300 two-tier industry indices from the Chinese stock market. We first measure industry’s conditional-value-at-risk (CoVaR) and the systemic risk contribution (ΔCoVaR) using the fitted time-varying t-copula function. The network of the stock industry is established based on dynamic conditional correlations with the minimum spanning tree. Then, we investigate the connection characteristics and topology of the network. Finally, we utilize seemingly unrelated regression estimation (SUR) of panel data to analyze the relationship between network topology of the stock industry and the industry’s systemic risk contribution. The results show that the systemic risk contribution of small-scale industries such as real estate, food and beverage, software services, and durable goods and clothing, is higher than that of large-scale industries, such as banking, insurance and energy. Industries with large betweenness centrality, closeness centrality, and clustering coefficient and small node occupancy layer are associated with greater systemic risk contribution. In addition, further analysis using a threshold model confirms that the results are robust. PMID:28683130
Lappi, T.; Venugopalan, R.; Mantysaari, H.
2015-02-25
We argue that the proton multiplicities measured in Roman pot detectors at an electron ion collider can be used to determine centrality classes in incoherent diffractive scattering. Incoherent diffraction probes the fluctuations in the interaction strengths of multi-parton Fock states in the nuclear wavefunctions. In particular, the saturation scale that characterizes this multi-parton dynamics is significantly larger in central events relative to minimum bias events. As an application, we examine the centrality dependence of incoherent diffractive vector meson production. We identify an observable which is simultaneously very sensitive to centrality triggered parton fluctuations and insensitive to details of the model.
NASA Astrophysics Data System (ADS)
Sumantari, Y. D.; Slamet, I.; Sugiyanto
2017-06-01
Semiparametric regression is a statistical analysis method that consists of parametric and nonparametric regression. There are various approach techniques in nonparametric regression. One of the approach techniques is spline. Central Java is one of the most densely populated province in Indonesia. Population density in this province can be modeled by semiparametric regression because it consists of parametric and nonparametric component. Therefore, the purpose of this paper is to determine the factors that in uence population density in Central Java using the semiparametric spline regression model. The result shows that the factors which in uence population density in Central Java is Family Planning (FP) active participants and district minimum wage.
Behavioral and physiological significance of minimum resting metabolic rate in king penguins.
Halsey, L G; Butler, P J; Fahlman, A; Woakes, A J; Handrich, Y
2008-01-01
Because fasting king penguins (Aptenodytes patagonicus) need to conserve energy, it is possible that they exhibit particularly low metabolic rates during periods of rest. We investigated the behavioral and physiological aspects of periods of minimum metabolic rate in king penguins under different circumstances. Heart rate (f(H)) measurements were recorded to estimate rate of oxygen consumption during periods of rest. Furthermore, apparent respiratory sinus arrhythmia (RSA) was calculated from the f(H) data to determine probable breathing frequency in resting penguins. The most pertinent results were that minimum f(H) achieved (over 5 min) was higher during respirometry experiments in air than during periods ashore in the field; that minimum f(H) during respirometry experiments on water was similar to that while at sea; and that RSA was apparent in many of the f(H) traces during periods of minimum f(H) and provides accurate estimates of breathing rates of king penguins resting in specific situations in the field. Inferences made from the results include that king penguins do not have the capacity to reduce their metabolism to a particularly low level on land; that they can, however, achieve surprisingly low metabolic rates at sea while resting in cold water; and that during respirometry experiments king penguins are stressed to some degree, exhibiting an elevated metabolism even when resting.
Impacts of SST Patterns on Rapid Intensification of Typhoon Megi (2010)
NASA Astrophysics Data System (ADS)
Kanada, Sachie; Tsujino, Satoki; Aiki, Hidenori; Yoshioka, Mayumi K.; Miyazawa, Yasumasa; Tsuboki, Kazuhisa; Takayabu, Izuru
2017-12-01
Typhoon Megi (2010), a very intense tropical cyclone with a minimum central pressure of 885 hPa, was characterized by especially rapid intensification. We investigated this intensification process by a simulation experiment using a high-resolution (0.02° × 0.02°) three-dimensional atmosphere-ocean coupled regional model. We also performed a sensitivity experiment with a time-fixed sea surface temperature (SST). The coupled model successfully simulated the minimum central pressure of Typhoon Megi, whereas the fixed SST experiment simulated an excessively low minimum central pressure of 839 hPa. The simulation results also showed a close relationship between the radial SST profiles and the rapid intensification process. Because the warm sea increased near-surface water vapor and hence the convective available potential energy, the high SST in the eye region facilitated tall and intense updrafts inside the radius of maximum wind speed and led to the start of rapid intensification. In contrast, high SST outside this radius induced local secondary updrafts that inhibited rapid intensification even if the mean SST in the core region exceeded 29.0°C. These secondary updrafts moved inward and eventually merged with the primary eyewall updrafts. Then the storm intensified rapidly when the high SST appeared in the eye region. Thus, the changes in the local SST pattern around the storm center strongly affected the rapid intensification process by modulating the radial structure of core convection. Our results also show that the use of a high-resolution three-dimensional atmosphere-ocean coupled model offers promise for improving intensity forecasts of tropical cyclones.
NASA Astrophysics Data System (ADS)
Génova, M.
2012-04-01
The study of pointer years of numerous tree-ring chronologies of the central Iberian Peninsula (Sierra de Guadarrama) could provide complementary information about climate variability over the last 405 yr. In total, 64 pointer years have been identified: 30 negative (representing minimum growths) and 34 positive (representing maximum growths), the most significant of these being 1601, 1963 and 1996 for the negative ones, and 1734 and 1737 for the positive ones. Given that summer precipitation was found to be the most limiting factor for the growth of Pinus in the Sierra de Guadarrama in the second half of the 20th century, it is also an explanatory factor in almost 50% of the extreme growths. Furthermore, these pointer years and intervals are not evenly distributed throughout time. Both in the first half of the 17th and in the second half of 20th, they were more frequent and more extreme and these periods are the most notable for the frequency of negative pointer years in Central Spain. The interval 1600-1602 is of special significance, being one of the most unfavourable for tree growth in the centre of Spain, with 1601 representing the minimum index in the regional chronology. We infer that this special minimum annual increase was the effect of the eruption of Huaynaputina, which occurred in Peru at the beginning of 1600 AD. This is the first time that the effects of this eruption in the tree-ring records of Southern Europe have been demonstrated.
K(892)* resonance production in Au+Au and p+p collisions at {radical}s{sub NN} = 200 GeV at RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, J.; Aggarwal, M.M.; Ahammed, Z.
2004-12-09
The short-lived K(892)* resonance provides an efficient tool to probe properties of the hot and dense medium produced in relativistic heavy-ion collisions. We report measurements of K* in {radical}s{sub NN} = 200 GeV Au+Au and p+p collisions reconstructed via its hadronic decay channels K(892)*{sup 0} {yields} K{pi} and K(892)*{sup +-} {yields} K{sub S}{sup 0}{pi}{sup +-} using the STAR detector at RHIC. The K*{sup 0} mass has been studied as function of p{sub T} in minimum bias p + p and central Au+Au collisions. The K* p{sub T} spectra for minimum bias p + p interactions and for Au+Au collisions inmore » different centralities are presented. The K*/K ratios for all centralities in Au+Au collisions are found to be significantly lower than the ratio in minimum bias p + p collisions, indicating the importance of hadronic interactions between chemical and kinetic freeze-outs. The nuclear modification factor of K* at intermediate p{sub T} is similar to that of K{sub S}{sup 0}, but different from {Lambda}. This establishes a baryon-meson effect over a mass effect in the particle production at intermediate p{sub T} (2 < p{sub T} {le} 4 GeV/c). A significant non-zero K*{sup 0} elliptic flow (v{sub 2}) is observed in Au+Au collisions and compared to the K{sub S}{sup 0} and {Lambda} v{sub 2}.« less
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility during the design stage already by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived based on standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y
2004-10-01
Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and can lead to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation is increased with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). Then the average of the multiple imputed exposure realizations for each individual is used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
Demer, Joseph L.; Clark, Robert A.; Suh, Soh Youn; Giaconi, JoAnn A.; Nouri-Mahdavi, Kouros; Law, Simon K.; Bonelli, Laura; Coleman, Anne L.; Caprioli, Joseph
2017-01-01
Purpose We used magnetic resonance imaging (MRI) to ascertain effects of optic nerve (ON) traction in adduction, a phenomenon proposed as neuropathic in primary open-angle glaucoma (POAG). Methods Seventeen patients with POAG and maximal IOP ≤ 20 mm Hg, and 31 controls underwent MRI in central gaze and 20° to 30° abduction and adduction. Optic nerve and sheath area centroids permitted computation of midorbital lengths versus minimum paths. Results Average mean deviation (±SEM) was −8.2 ± 1.2 dB in the 15 patients with POAG having interpretable perimetry. In central gaze, ON path length in POAG was significantly more redundant (104.5 ± 0.4% of geometric minimum) than in controls (102.9 ± 0.4%, P = 2.96 × 10−4). In both groups the ON became significantly straighter in adduction (28.6 ± 0.8° in POAG, 26.8 ± 1.1° in controls) than central gaze and abduction. In adduction, the ON in POAG straightened to 102.0% ± 0.2% of minimum path length versus 104.5% ± 0.4% in central gaze (P = 5.7 × 10−7), compared with controls who straightened to 101.6% ± 0.1% from 102.9% ± 0.3% in central gaze (P = 8.7 × 10−6); and globes retracted 0.73 ± 0.09 mm in POAG, but only 0.07 ± 0.08 mm in controls (P = 8.8 × 10−7). Both effects were confirmed in age-matched controls, and remained significant after correction for significant effects of age and axial globe length (P = 0.005). Conclusions Although tethering and elongation of ON and sheath are normal in adduction, adduction is associated with abnormally great globe retraction in POAG without elevated IOP. Traction in adduction may cause mechanical overloading of the ON head and peripapillary sclera, thus contributing to or resulting from the optic neuropathy of glaucoma independent of IOP. PMID:28829843
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that themore » use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.« less
Sparse EEG/MEG source estimation via a group lasso
Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor
2017-01-01
Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencelphalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
Trends in annual minimum exposed snow and ice cover in High Mountain Asia from MODIS
NASA Astrophysics Data System (ADS)
Rittger, Karl; Brodzik, Mary J.; Painter, Thomas H.; Racoviteanu, Adina; Armstrong, Richard; Dozier, Jeff
2016-04-01
Though a relatively short record on climatological scales, data from the Moderate Resolution Imaging Spectroradiometer (MODIS) from 2000-2014 can be used to evaluate changes in the cryosphere and provide a robust baseline for future observations from space. We use the MODIS Snow Covered Area and Grain size (MODSCAG) algorithm, based on spectral mixture analysis, to estimate daily fractional snow and ice cover and the MODICE Persistent Ice (MODICE) algorithm to estimate the annual minimum snow and ice fraction (fSCA) for each year from 2000 to 2014 in High Mountain Asia. We have found that MODSCAG performs better than other algorithms, such as the Normalized Difference Index (NDSI), at detecting snow. We use MODICE because it minimizes false positives (compared to maximum extents), for example, when bright soils or clouds are incorrectly classified as snow, a common problem with optical satellite snow mapping. We analyze changes in area using the annual MODICE maps of minimum snow and ice cover for over 15,000 individual glaciers as defined by the Randolph Glacier Inventory (RGI) Version 5, focusing on the Amu Darya, Syr Darya, Upper Indus, Ganges, and Brahmaputra River basins. For each glacier with an area of at least 1 km2 as defined by RGI, we sum the total minimum snow and ice covered area for each year from 2000 to 2014 and estimate the trends in area loss or gain. We find the largest loss in annual minimum snow and ice extent for 2000-2014 in the Brahmaputra and Ganges with 57% and 40%, respectively, of analyzed glaciers with significant losses (p-value<0.05). In the Upper Indus River basin, we see both gains and losses in minimum snow and ice extent, but more glaciers with losses than gains. Our analysis shows that a smaller proportion of glaciers in the Amu Darya and Syr Darya are experiencing significant changes in minimum snow and ice extent (3.5% and 12.2%), possibly because more of the glaciers in this region are smaller than 1 km2 than in the Indus, Ganges, and Brahmaputra making analysis from MODIS (pixel area ~0.25 km2) difficult. Overall, we see 23% of the glaciers in the 5 river basins with significant trends (in either direction). We relate these changes in area to topography and climate to understand the driving processes related to these changes. In addition to annual minimum snow and ice cover, the MODICE algorithm also provides the date of minimum fSCA for each pixel. To determine whether the surface was snow or ice we use the date of minimum fSCA from MODICE to index daily maps of snow on ice (SOI), or exposed glacier ice (EGI) and systematically derive an equilibrium line altitude (ELA) for each year from 2000-2014. We test this new algorithm in the Upper Indus basin and produce annual estimates of ELA. For the Upper Indus basin we are deriving annual ELAs that range from 5350 m to 5450 m which is slightly higher than published values of 5200 m for this region.
LL and E awarded E and D contract area in eastern Algeria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-12-07
This paper reports that a Louisiana Land and Exploration Co. unit has been awarded on exploration and production contract in Algeria by state oil company Enterprise Nationale Sonatrach. LL and E Algeria Ltd.'s contract covers two blocks in the central Ghadames oil basin of eastern Algeria. LL and E said the contract, yet to be submitted for government approval, calls for a minimum investment of $33 million during a 5 year work program that includes seismic acquisition and drilling a minimum of three wildcats.
Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.
2010-01-01
Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity; 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauk, F.J.; Christensen, D.H.
1980-09-01
Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m/sub b/ magnitude contour maps. Regional 90% confidence error ellipsoids are included for m/sub b/ magnitude events from 2.0more » through 5.0 at 0.5 m/sub b/ unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggest that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.« less
Reassessing Wind Potential Estimates for India: Economic and Policy Implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, Amol; Bharvirkar, Ranjit; Khangura, Jagmeet
2011-09-15
We assess developable on-shore wind potential in India at three different hub-heights and under two sensitivity scenarios – one with no farmland included, the other with all farmland included. Under the “no farmland included” case, the total wind potential in India ranges from 748 GW at 80m hub-height to 976 GW at 120m hub-height. Under the “all farmland included” case, the potential with a minimum capacity factor of 20 percent ranges from 984 GW to 1,549 GW. High quality wind energy sites, at 80m hub-height with a minimum capacity factor of 25 percent, have a potential between 253 GW (nomore » farmland included) and 306 GW (all farmland included). Our estimates are more than 15 times the current official estimate of wind energy potential in India (estimated at 50m hub height) and are about one tenth of the official estimate of the wind energy potential in the US.« less
Sunspot variation and selected associated phenomena: A look at solar cycle 21 and beyond
NASA Technical Reports Server (NTRS)
Wilson, R. M.
1982-01-01
Solar sunspot cycles 8 through 21 are reviewed. Mean time intervals are calculated for maximum to maximum, minimum to minimum, minimum to maximum, and maximum to minimum phases for cycles 8 through 20 and 8 through 21. Simple cosine functions with a period of 132 years are compared to, and found to be representative of, the variation of smoothed sunspot numbers at solar maximum and minimum. A comparison of cycles 20 and 21 is given, leading to a projection for activity levels during the Spacelab 2 era (tentatively, November 1984). A prediction is made for cycle 22. Major flares are observed to peak several months subsequent to the solar maximum during cycle 21 and to be at minimum level several months after the solar minimum. Additional remarks are given for flares, gradual rise and fall radio events and 2800 MHz radio emission. Certain solar activity parameters, especially as they relate to the near term Spacelab 2 time frame are estimated.
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an in-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. In the beginning the deterministic problem, where the information about the objects is assumed to be certain is examined. If instead of the Euclidean norm, L(sub 1) or L(sub infinity) norms are used to represent distance, the problem becomes a linear programming problem. The stochastic problem is formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object, at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
Messias, Leonardo H. D.; Gobatto, Claudio A.; Beck, Wladimir R.; Manchado-Gobatto, Fúlvia B.
2017-01-01
In 1993, Uwe Tegtbur proposed a useful physiological protocol named the lactate minimum test (LMT). This test consists of three distinct phases. Firstly, subjects must perform high intensity efforts to induce hyperlactatemia (phase 1). Subsequently, 8 min of recovery are allowed for transposition of lactate from myocytes (for instance) to the bloodstream (phase 2). Right after the recovery, subjects are submitted to an incremental test until exhaustion (phase 3). The blood lactate concentration is expected to fall during the first stages of the incremental test and as the intensity increases in subsequent stages, to rise again forming a “U” shaped blood lactate kinetic. The minimum point of this curve, named the lactate minimum intensity (LMI), provides an estimation of the intensity that represents the balance between the appearance and clearance of arterial blood lactate, known as the maximal lactate steady state intensity (iMLSS). Furthermore, in addition to the iMLSS estimation, studies have also determined anaerobic parameters (e.g., peak, mean, and minimum force/power) during phase 1 and also the maximum oxygen consumption in phase 3; therefore, the LMT is considered a robust physiological protocol. Although, encouraging reports have been published in both human and animal models, there are still some controversies regarding three main factors: (1) the influence of methodological aspects on the LMT parameters; (2) LMT effectiveness for monitoring training effects; and (3) the LMI as a valid iMLSS estimator. Therefore, the aim of this review is to provide a balanced discussion between scientific evidence of the aforementioned issues, and insights for future investigations are suggested. In summary, further analyses is necessary to determine whether these factors are worthy, since the LMT is relevant in several contexts of health sciences. PMID:28642717
Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan
2017-03-01
The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE), in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age=44.5±11.9years; time since diagnosis=6.5±6.2years; Patient Determined Disease Steps=≤3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations), and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory and Bland-Altman. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For Bland-Altman analyses all combinations of days, bar single day combinations, resulted in a mean bias within ±10%, when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2days for step count and 4days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE, in people with MS who walk unaided. Copyright © 2017 Elsevier B.V. All rights reserved.
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan
2016-05-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.
Flint, L.E.; Flint, A.L.
2008-01-01
Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ?? 0.6??C at the 95% confidence interval. Copyright ?? 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
Corneal Epithelium Thickness Profile in 614 Normal Chinese Children Aged 7-15 Years Old.
Ma, Yingyan; He, Xiangui; Zhu, Xiaofeng; Lu, Lina; Zhu, Jianfeng; Zou, Haidong
2016-03-23
The purpose of the study is to describe the values and distribution of corneal epithelium thickness (CET) in normal Chinese school-aged children, and to explore associated factors with CET. CET maps were measured by Fourier-domain optical coherence tomography (FD-OCT) in normal Chinese children aged 7 to 15 years old from two randomly selected schools in Shanghai, China. Children with normal intraocular pressure were further examined for cycloplegic autorefraction, corneal curvature radius (CCR) and axial length. Central (2-mm diameter area), para-central (2- to 5-mm diameter area), and peripheral (5- to 6-mm diameter area) CET in the superior, superotemporal, temporal, inferotemporal, inferior, inferonasal, nasal, superonasal cornea; minimum, maximum, range, and standard deviation of CET within the 5-mm diameter area were recorded. The CET was thinner in the superior than in the inferior and was thinner in the temporal than in the nasal. The maximum CET was located in the inferior zone, and the minimum CET was in the superior zone. A thicker central CET was associated with male gender (p = 0.009) and older age (p = 0.037) but not with CCR (p = 0.061), axial length (p = 0.253), or refraction (p = 0.351) in the multiple regression analyses. CCR, age, and gender were correlated with para-central and peripheral CET.
ERIC Educational Resources Information Center
Rule, David L.
Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…
12 CFR Appendix M1 to Part 226 - Repayment Disclosures
Code of Federal Regulations, 2014 CFR
2014-01-01
... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...
12 CFR Appendix M1 to Part 226 - Repayment Disclosures
Code of Federal Regulations, 2013 CFR
2013-01-01
... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...
Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm
Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney
2014-01-01
Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as FlamMap's version five module, provide valuable fire behavior functions, while enabling multi-core utilization for the...
NASA Astrophysics Data System (ADS)
Aad, G.; Abbott, B.; Abdallah, J.; Abdel Khalek, S.; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Almond, J.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Alvarez Gonzalez, B.; Alviggi, M. G.; Amako, K.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baas, A. E.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Backus Mayes, J.; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Bansal, V.; Bansil, H. S.; Barak, L.; Baranov, S. P.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batley, J. R.; Battaglia, M.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez Garcia, J. A.; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernat, P.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, C.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; Bilbao De Mendizabal, J.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boddy, C. R.; Boehler, M.; Boek, T. T.; Bogaerts, J. A.; Bogdanchikov, A. G.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brennan, A. J.; Brenner, R.; Bressler, S.; Bristow, K.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, J.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.; Bucci, F.; Buchholz, P.; Buckingham, R. M.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bundock, A. C.; Burckhart, H.; Burdin, S.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Buszello, C. P.; Butler, B.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Byszewski, M.; Cabrera Urbán, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Calvet, D.; Calvet, S.; Camacho Toro, R.; Camarda, S.; Cameron, D.; Caminada, L. M.; Caminal Armadans, R.; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Cano Bret, M.; Cantero, J.; Cantrill, R.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Castaneda-Miranda, E.; Castelli, A.; Castillo Gimenez, V.; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Cattani, G.; Caudron, J.; Caughron, S.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerio, B. C.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chang, P.; Chapleau, B.; Chapman, J. D.; Charfeddine, D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, X.; Chen, Y.; Chen, Y.; Cheng, H. C.; Cheng, Y.; Cheplakov, A.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiefari, G.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Chouridou, S.; Chow, B. K. B.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciocio, A.; Cirkovic, P.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, P. J.; Clarke, R. N.; Cleland, W.; Clemens, J. C.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Cogan, J. G.; Coggeshall, J.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Colon, G.; Compostella, G.; Conde Muiño, P.; Coniavitis, E.; Conidi, M. C.; Connell, S. H.; Connelly, I. A.; Consonni, S. M.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Crispin Ortuzar, M.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cuciuc, C.-M.; Cuhadar Donszelmann, T.; Cummings, J.; Curatolo, M.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Daniells, A. C.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davignon, O.; Davison, A. R.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Nooij, L.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dechenaux, B.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Do Valle Wemans, A.; Doan, T. K. O.; Dobos, D.; Doglioni, C.; Doherty, T.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Dris, M.; Dubbert, J.; Dube, S.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudziak, F.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Dwuznik, M.; Dyndal, M.; Ebke, J.; Edson, W.; Edwards, N. C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Engelmann, R.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernis, G.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Favareto, A.; Fayard, L.; Federic, P.; Fedin, O. L.; Fedorko, W.; Fehling-Kaschek, M.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Fernandez Perez, S.; Ferrag, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Ferretto Parodi, A.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, J.; Fisher, W. C.; Fitzgerald, E. A.; Flechl, M.; Fleck, I.; Fleischmann, P.; Fleischmann, S.; Fletcher, G. T.; Fletcher, G.; Flick, T.; Floderus, A.; Flores Castillo, L. R.; Florez Bustos, A. C.; Flowerdew, M. J.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franconi, L.; Franklin, M.; Franz, S.; Fraternali, M.; French, S. T.; Friedrich, C.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fulsom, B. G.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallo, V.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gandrajula, R. P.; Gao, J.; Gao, Y. S.; Garay Walls, F. M.; Garberson, F.; García, C.; García Navarro, J. E.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geerts, D. A. A.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Gemmell, A.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gianotti, F.; Gibbard, B.; Gibson, S. M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giugni, D.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Glonti, G. L.; Goblirsch-Kolb, M.; Goddard, J. R.; Godfrey, J.; Godlewski, J.; Goeringer, C.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, L.; González de la Hoz, S.; Gonzalez Parra, G.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Gozpinar, S.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Gray, H. M.; Graziani, E.; Grebenyuk, O. G.; Greenwood, Z. D.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grishkevich, Y. V.; Grivaz, J.-F.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Groth-Jensen, J.; Grout, Z. J.; Guan, L.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guicheney, C.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Gupta, S.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guttman, N.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Hall, D.; Halladjian, G.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamer, M.; Hamilton, A.; Hamilton, S.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harper, D.; Harrington, R. D.; Harris, O. M.; Harrison, P. F.; Hartjes, F.; Hasegawa, M.; Hasegawa, S.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, L.; Hejbal, J.; Helary, L.; Heller, C.; Heller, M.; Hellman, S.; Hellmich, D.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Hengler, C.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Hensel, C.; Herbert, G. H.; Hernández Jiménez, Y.; Herrberg-Schubert, R.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillert, S.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holmes, T. R.; Hong, T. M.; Hooft van Huysduynen, L.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Hurwitz, M.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Inamaru, Y.; Ince, T.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Irles Quiles, A.; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Iturbe Ponce, J. M.; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansen, H.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Johansson, K. E.; Johansson, P.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Jung, C. A.; Jungst, R. M.; Jussel, P.; Juste Rozas, A.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kajomovitz, E.; Kalderon, C. W.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneda, M.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kar, D.; Karakostas, K.; Karastathis, N.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasieczka, G.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Katre, A.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. F.; Kazarinov, M. Y.; Keeler, R.; Kehoe, R.; Keil, M.; Keller, J. S.; Kempster, J. J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Keung, J.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Khodinov, A.; Khomich, A.; Khoo, T. J.; Khoriauli, G.; Khoroshilov, A.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H. Y.; Kim, H.; Kim, S. H.; Kimura, N.; Kind, O.; King, B. T.; King, M.; King, R. S. B.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kittelmann, T.; Kiuchi, K.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Klok, P. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Koletsou, I.; Koll, J.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; König, S.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kortner, S.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kreiss, S.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Kruker, T.; Krumnack, N.; Krumshteyn, Z. V.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kurumida, R.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; La Rosa, A.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laier, H.; Lambourne, L.; Lammers, S.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmacher, M.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leone, R.; Leone, S.; Leonhardt, K.; Leonidopoulos, C.; Leontsinis, S.; Leroy, C.; Lester, C. G.; Lester, C. M.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, A.; Lewis, G. H.; Leyko, A. M.; Leyton, M.; Li, B.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, S.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Livermore, S. S. A.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo Sterzo, F.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Lombardo, V. P.; Long, B. A.; Long, J. D.; Long, R. E.; Lopes, L.; Lopez Mateos, D.; Lopez Paredes, B.; Lopez Paz, I.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Loscutoff, P.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lungwitz, M.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Machado Miguens, J.; Macina, D.; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeno, M.; Maeno, T.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Mahmoud, S.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mandelli, B.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manfredini, A.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J.; Mann, A.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mapelli, L.; March, L.; Marchand, J. F.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marjanovic, M.; Marques, C. N.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, B.; Martin, T. A.; Martin, V. J.; Martin dit Latour, B.; Martinez, H.; Martinez, M.; Martin-Haugh, S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Massol, N.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazzaferro, L.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Meade, A.; Mechnich, J.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B. R.; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Meric, N.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Merritt, H.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Middleton, R. P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Milstein, D.; Minaenko, A. A.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Mitsui, S.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Moraes, A.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, M.; Morii, M.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moser, H. G.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, K.; Mueller, T.; Mueller, T.; Muenstermann, D.; Munwes, Y.; Murillo Quijada, J. A.; Murray, W. J.; Musheghyan, H.; Musto, E.; Myagkov, A. G.; Myska, M.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagel, M.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Nanava, G.; Narayan, R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negri, G.; Negrini, M.; Nektarijevic, S.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolics, K.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Norberg, S.; Nordberg, M.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; Nuti, F.; O'Brien, B. J.; O'grady, F.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olchevski, A. G.; Olivares Pino, S. A.; Oliveira Damazio, D.; Oliver Garcia, E.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Oropeza Barrera, C.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ouellette, E. A.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Padilla Aranda, C.; Pagáčová, M.; Pagan Griso, S.; Paganis, E.; Pahl, C.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Panduro Vazquez, J. G.; Pani, P.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, M. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passaggio, S.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pearce, J.; Pedersen, M.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrino, R.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Pettersson, N. E.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinder, A.; Pinfold, J. L.; Pingel, A.; Pinto, B.; Pires, S.; Pitt, M.; Pizio, C.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Poddar, S.; Podlyski, F.; Poettgen, R.; Poggioli, L.; Pohl, D.; Pohl, M.; Polesello, G.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Portell Bueso, X.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pralavorio, P.; Pranko, A.; Prasad, S.; Pravahan, R.; Prell, S.; Price, D.; Price, J.; Price, L. E.; Prieur, D.; Primavera, M.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Przysiezniak, H.; Ptacek, E.; Puddu, D.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Qureshi, A.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Randle-Conde, A. S.; Rangel-Smith, C.; Rao, K.; Rauscher, F.; Rave, T. C.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reisin, H.; Relich, M.; Rembser, C.; Ren, H.; Ren, Z. L.; Renaud, A.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Ridel, M.; Rieck, P.; Rieger, J.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodrigues, L.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, M.; Rose, P.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sacerdoti, S.; Saddique, A.; Sadeh, I.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Saleem, M.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Sanchez Martinez, V.; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, T.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sartisohn, G.; Sasaki, O.; Sasaki, Y.; Sauvage, G.; Sauvan, E.; Savard, P.; Savu, D. O.; Sawyer, C.; Sawyer, L.; Saxon, D. H.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. I.; Schiavi, C.; Schieck, J.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schroeder, C.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwegler, Ph.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Schwoerer, M.; Sciacca, F. G.; Scifo, E.; Sciolla, G.; Scott, W. G.; Scuri, F.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekula, S. J.; Selbach, K. E.; Seliverstov, D. M.; Sellers, G.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shochet, M. J.; Short, D.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simoniello, R.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sircar, A.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skottowe, H. P.; Skovpen, K. Yu.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Solans, C. A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Song, H. Y.; Soni, N.; Sood, A.; Sopczak, A.; Sopko, B.; Sopko, V.; Sorin, V.; Sosebee, M.; Soualah, R.; Soueid, P.; Soukharev, A. M.; South, D.; Spagnolo, S.; Spanò, F.; Spearman, W. R.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R. D.; Staerz, S.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Stavina, P.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stern, S.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, Y.; Svatos, M.; Swedish, S.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanasijczuk, A. J.; Tannenwald, B. B.; Tannoury, N.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teischinger, F. A.; Teixeira Dias Castanheira, M.; Teixeira-Dias, P.; Temming, K. K.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, R. J.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thong, W. M.; Thun, R. P.; Tian, F.; Tibbetts, M. J.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Topilin, N. D.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Tran, H. L.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; True, P.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Uhlenbrock, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urbaniec, D.; Urquijo, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; Van Der Leeuw, R.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veloso, F.; Velz, T.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Virzi, J.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, A.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Walsh, B.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Warsinsky, M.; Washbrook, A.; Wasicki, C.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weigell, P.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wendland, D.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wijeratne, P. A.; Wildauer, A.; Wildt, M. A.; Wilkens, H. G.; Will, J. Z.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, A.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winter, B. T.; Wittgen, M.; Wittig, T.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wright, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wulf, E.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xiao, M.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamada, M.; Yamaguchi, H.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, U. K.; Yang, Y.; Yanush, S.; Yao, L.; Yao, W.-M.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yurkewicz, A.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zevi della Porta, G.; Zhang, D.; Zhang, F.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, X.; Zhang, Z.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, L.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Zinonos, Z.; Ziolkowski, M.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zutshi, V.; Zwalinski, L.
2015-09-01
Measurements of the centrality and rapidity dependence of inclusive jet production in √{sNN} = 5.02 TeV proton-lead (p + Pb) collisions and the jet cross-section in √{ s} = 2.76 TeV proton-proton collisions are presented. These quantities are measured in datasets corresponding to an integrated luminosity of 27.8 nb-1 and 4.0 pb-1, respectively, recorded with the ATLAS detector at the Large Hadron Collider in 2013. The p + Pb collision centrality was characterised using the total transverse energy measured in the pseudorapidity interval - 4.9 < η < - 3.2 in the direction of the lead beam. Results are presented for the double-differential per-collision yields as a function of jet rapidity and transverse momentum (pT) for minimum-bias and centrality-selected p + Pb collisions, and are compared to the jet rate from the geometric expectation. The total jet yield in minimum-bias events is slightly enhanced above the expectation in a pT-dependent manner but is consistent with the expectation within uncertainties. The ratios of jet spectra from different centrality selections show a strong modification of jet production at all pT at forward rapidities and for large pT at mid-rapidity, which manifests as a suppression of the jet yield in central events and an enhancement in peripheral events. These effects imply that the factorisation between hard and soft processes is violated at an unexpected level in proton-nucleus collisions. Furthermore, the modifications at forward rapidities are found to be a function of the total jet energy only, implying that the violations may have a simple dependence on the hard parton-parton kinematics.
Code of Federal Regulations, 2014 CFR
2014-04-01
... program. All factors that comprise the State transportation department's (STD) determination of the... which are performed in the STD's central laboratory would not be covered by an independent assurance... STD. As a minimum, the qualification program shall include provisions for checking test equipment and...
Code of Federal Regulations, 2012 CFR
2012-04-01
... program. All factors that comprise the State transportation department's (STD) determination of the... which are performed in the STD's central laboratory would not be covered by an independent assurance... STD. As a minimum, the qualification program shall include provisions for checking test equipment and...
Code of Federal Regulations, 2013 CFR
2013-04-01
... program. All factors that comprise the State transportation department's (STD) determination of the... which are performed in the STD's central laboratory would not be covered by an independent assurance... STD. As a minimum, the qualification program shall include provisions for checking test equipment and...
Code of Federal Regulations, 2011 CFR
2011-04-01
... program. All factors that comprise the State transportation department's (STD) determination of the... which are performed in the STD's central laboratory would not be covered by an independent assurance... STD. As a minimum, the qualification program shall include provisions for checking test equipment and...
Wong, Carlos K H; Lang, Brian Hung-Hin
2014-03-01
Although prophylactic central neck dissection (pCND) may reduce future locoregional recurrence after total thyroidectomy (TT) for low-risk papillary thyroid carcinoma (PTC), it is associated with a higher initial morbidity. We aimed to compare the long-term cost-effectiveness between TT with pCND (TT+pCND) and TT alone in the institution's perspective. Our case definition was a hypothetical cohort of 100,000 nonpregnant female patients aged 50 years with a 1.5-cm cN0 PTC within one lobe. A Markov decision tree model was constructed to compare the estimated cost-effectiveness between TT+pCND and TT alone after a 20-year period. Outcome probabilities, utilities, and costs were estimated from the literature. The threshold for cost-effectiveness was set at US$50,000 per quality-adjusted life year (QALY). Sensitivity and threshold analyses were used to examine model uncertainty. Each patient who underwent TT+pCND instead of TT alone cost an extra US$34.52 but gained an additional 0.323 QALY. In fact, in the sensitivity analysis, TT+pCND became cost-effective 9 years after the initial operation. In the threshold analysis, none of the scenarios that could change this conclusion appeared clinically possible or likely. However, TT+pCND became cost-saving (i.e., less costly and more cost-effective) at 20 years if associated permanent vocal cord palsy was kept ≤ 1.37 %, permanent hypoparathyroidism was ≤ 1.20 %, and/or postoperative radioiodine ablation use was ≤ 73.64 %. In the institution's perspective, routine pCND for low-risk PTC began to become cost-effective 9 years after initial surgery and became cost-saving at 20 years if postoperative radioiodine use and/or permanent surgical complications were kept to a minimum.
Late Miocene uplift in the Romagnan Apennines and the detachment of subducted lithosphere
NASA Astrophysics Data System (ADS)
van der Meulen, M. J.; Kouwenhoven, T. J.; van der Zwaan, G. J.; Meulenkamp, J. E.; Wortel, M. J. R.
1999-12-01
We report part of a test of the hypothesis that detachment of subducted lithosphere may be a process of lateral propagation of a horizontal tear [Wortel and Spakman, Proc. Kon. Ned. Akad. Wetensch., 95 (1992) 325-347]. We have used the Apennines as a test area. The test procedure consists of the comparison of hypothetical vertical motions, predicted from the expected redistribution of slab pull forces, with observed vertical motions. We demonstrate that a Late Miocene depocentre migration from the Northern towards the Central Apennines is associated with uplift of (the fore-arc of) the Northern Apennines. Such a combination of a depocentre shift and uplift is thought to be diagnostic for lateral migration of slab detachment. The depocentre migration was identified in earlier work [van der Meulen et al., Earth Planet. Sci. Lett., 154 (1998) 203-219]. This contribution focuses on uplift, which has primarily been identified through the geohistory analysis of the Monte del Casino Section (Romagnan Apennines, Northern Italy). Owing to methodological problems, the start and duration of the uplift phase could not be constrained, and only a minimum estimate of the total amount of uplift (483±180 m) is obtained. The data do allow for an estimate of the uplift rate: 163±61 cm/ky. A review of regional data results in better constraints on the timing of the above lateral reorganisation of the fore-arc, and on the spatial extent of the uplifted area. Depocentre development in the Central Apennines began between 8.6 and 8.3 Ma B.P. Uplift started between 9 and 8 Ma B.P., and affected the entire northernmost Apennines.
NASA Astrophysics Data System (ADS)
Wicaksono, A. M.; Pramono, A.; Susilowati, A.; Sutarno; Widyas, N.; Prastowo, S.
2018-03-01
Boyolali is an area in Central Java Indonesia, it has large number of Indonesian Friesian Holstein (IFH; dairy cattle). To improve its population as well as genetic quality of milk production, artificial insemination (AI) is widely applied as mating program. The success of AI can be evaluated from the number of service per conception (S/C), represent a number of service using AI to achieve one pregnancy. Its mirroring mating management and reproductive efficiency in dairy cattle, estimated in herd during specific time and location. For that, this study aims to estimate S/C in Selo, Boyolali during October 2016 to January 2017. Data were gathered with 95% confidence level. Sample size were 367 IFH, visited and selected purposively based on criteria one-time partus, 3 y.o and have complete AI record. Animal data were collected in reproduction and mating management. In addition, 124 dairy farmer who have minimum 5 years experiences in rearing IFH cow were interviewed as respondent in estrus detection, followed with 2 skilled inseminators for AI performing time data. Result shows that S/C is 1.71, this mean one pregnancy need 1.71 times AI services. In the estrus detection, most of dairy farmers were able to observe estrus sign in vulva color, size and the present of mucus by visual. Moreover, AI was performed in 9 to 12 hours after the sign of estrus observed. It is concluded that AI of IFH in Selo, Boyolali has been successfully applied, however there are still rooms to improve the reproduction efficiency through mating management in regard to lower S/C.
Sunspot Observations During the Maunder Minimum from the Correspondence of John Flamsteed
NASA Astrophysics Data System (ADS)
Carrasco, V. M. S.; Vaquero, J. M.
2016-11-01
We compile and analyze the sunspot observations made by John Flamsteed for the period 1672 - 1703, which corresponds to the second part of the Maunder Minimum. They appear in the correspondence of the famous astronomer. We include in an appendix the original texts of the sunspot records kept by Flamsteed. We compute an estimate of the level of solar activity using these records, and compare the results with the latest reconstructions of solar activity during the Maunder Minimum, obtaining values characteristic of a grand solar minimum. Finally, we discuss a phenomenon observed and described by Stephen Gray in 1705 that has been interpreted as a white-light flare.
Mismatch and G-Stack Modulated Probe Signals on SNP Microarrays
Binder, Hans; Fasold, Mario; Glomb, Torsten
2009-01-01
Background Single nucleotide polymorphism (SNP) arrays are important tools widely used for genotyping and copy number estimation. This technology utilizes the specific affinity of fragmented DNA for binding to surface-attached oligonucleotide DNA probes. We analyze the variability of the probe signals of Affymetrix GeneChip SNP arrays as a function of the probe sequence to identify relevant sequence motifs which potentially cause systematic biases of genotyping and copy number estimates. Methodology/Principal Findings The probe design of GeneChip SNP arrays enables us to disentangle different sources of intensity modulations such as the number of mismatches per duplex, matched and mismatched base pairings including nearest and next-nearest neighbors and their position along the probe sequence. The effect of probe sequence was estimated in terms of triple-motifs with central matches and mismatches which include all 256 combinations of possible base pairings. The probe/target interactions on the chip can be decomposed into nearest neighbor contributions which correlate well with free energy terms of DNA/DNA-interactions in solution. The effect of mismatches is about twice as large as that of canonical pairings. Runs of guanines (G) and the particular type of mismatched pairings formed in cross-allelic probe/target duplexes constitute sources of systematic biases of the probe signals with consequences for genotyping and copy number estimates. The poly-G effect seems to be related to the crowded arrangement of probes which facilitates complex formation of neighboring probes with at minimum three adjacent G's in their sequence. Conclusions The applied method of “triple-averaging” represents a model-free approach to estimate the mean intensity contributions of different sequence motifs which can be applied in calibration algorithms to correct signal values for sequence effects. Rules for appropriate sequence corrections are suggested. PMID:19924253
The 2014 X-Ray Minimum of Eta Carinae as Seen by Swift
NASA Technical Reports Server (NTRS)
Corcoran, M. F.; Liburd, J.; Morris, D.; Russell, C. M. P.; Hamaguchi, K.; Gull, T. R.; Madura, T. I.; Teodoro, M.; Moffat, A. F. J.; Richardson, N. D.
2017-01-01
We report on Swift X-ray Telescope observations of Eta Carinae ( Car), an extremely massive, long-period, highly eccentric binary obtained during the 2014.6 X-ray minimumperiastron passage. These observations show that Car may have been particularly bright in X-rays going into the X-ray minimum state, while the duration of the 2014 X-ray minimum was intermediate between the extended minima seen in 1998.0 and 2003.5 by Rossi X-Ray Timing Explorer (RXTE), and the shorter minimum in 2009.0. The hardness ratios derived from the Swift observations showed a relatively smooth increase to a peak value occurring 40.5 days after the start of the X-ray minimum, though these observations cannot reliably measure the X-ray hardness during the deepest part of the X-ray minimum when contamination by the central constant emission component is significant. By comparing the timings of the RXTE and Swift observations near the X-ray minima, we derive an updated X-ray period of P X equals 2023.7 +/- 0.7 days, in good agreement with periods derived from observations at other wavelengths, and we compare the X-ray changes with variations in the He ii lambda 4686 emission. The middle of the Deep Minimum interval, as defined by the Swift column density variations, is in good agreement with the time of periastron passage derived from the He ii 4686 line variations.
Minimum Expected Risk Estimation for Near-neighbor Classification
2006-04-01
We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors ( kNN ...estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant...the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable to forecast a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization is performed in many meteorological post-processing studies since the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption of the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield to similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
Flying After Conducting an Aircraft Excessive Cabin Leakage Test.
Houston, Stephen; Wilkinson, Elizabeth
2016-09-01
Aviation medical specialists should be aware that commercial airline aircraft engineers may undertake a 'dive equivalent' operation while conducting maintenance activities on the ground. We present a worked example of an occupational risk assessment to determine a minimum safe preflight surface interval (PFSI) for an engineer before flying home to base after conducting an Excessive Cabin Leakage Test (ECLT) on an unserviceable aircraft overseas. We use published dive tables to determine the minimum safe PFSI. The estimated maximum depth acquired during the procedure varies between 10 and 20 fsw and the typical estimated bottom time varies between 26 and 53 min for the aircraft types operated by the airline. Published dive tables suggest that no minimum PFSI is required for such a dive profile. Diving tables suggest that no minimum PFSI is required for the typical ECLT dive profile within the airline; however, having conducted a risk assessment, which considered peak altitude exposure during commercial flight, the worst-case scenario test dive profile, the variability of interindividual inert gas retention, and our existing policy among other occupational groups within the airline, we advised that, in the absence of a bespoke assessment of the particular circumstances on the day, the minimum PFSI after conducting ECLT should be 24 h. Houston S, Wilkinson E. Flying after conducting an aircraft excessive cabin leakage test. Aerosp Med Hum Perform. 2016; 87(9):816-820.
Low-flow characteristics of streams in Ohio through water year 1997
Straub, David E.
2001-01-01
This report presents selected low-flow and flow-duration characteristics for 386 sites throughout Ohio. These sites include 195 long-term continuous-record stations with streamflow data through water year 1997 (October 1 to September 30) and for 191 low-flow partial-record stations with measurements into water year 1999. The characteristics presented for the long-term continuous-record stations are minimum daily streamflow; average daily streamflow; harmonic mean flow; 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 5-, 10-, 20-, and 50-year recurrence intervals; and 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent daily duration flows. The characteristics presented for the low-flow partial-record stations are minimum observed streamflow; estimated 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 10-, and 20-year recurrence intervals; and estimated 98-, 95-, 90-, 85- and 80-percent daily duration flows. The low-flow frequency and duration analyses were done for three seasonal periods (warm weather, May 1 to November 30; winter, December 1 to February 28/29; and autumn, September 1 to November 30), plus the annual period based on the climatic year (April 1 to March 31).
Heavy ion contributions to organ dose equivalent for the 1977 galactic cosmic ray spectrum
NASA Astrophysics Data System (ADS)
Walker, Steven A.; Townsend, Lawrence W.; Norbury, John W.
2013-05-01
Estimates of organ dose equivalents for the skin, eye lens, blood forming organs, central nervous system, and heart of female astronauts from exposures to the 1977 solar minimum galactic cosmic radiation spectrum for various shielding geometries involving simple spheres and locations within the Space Transportation System (space shuttle) and the International Space Station (ISS) are made using the HZETRN 2010 space radiation transport code. The dose equivalent contributions are broken down by charge groups in order to better understand the sources of the exposures to these organs. For thin shields, contributions from ions heavier than alpha particles comprise at least half of the organ dose equivalent. For thick shields, such as the ISS locations, heavy ions contribute less than 30% and in some cases less than 10% of the organ dose equivalent. Secondary neutron production contributions in thick shields also tend to be as large, or larger, than the heavy ion contributions to the organ dose equivalents.
Prediction by regression and intrarange data scatter in surface-process studies
Toy, T.J.; Osterkamp, W.R.; Renard, K.G.
1993-01-01
Modeling is a major component of contemporary earth science, and regression analysis occupies a central position in the parameterization, calibration, and validation of geomorphic and hydrologic models. Although this methodology can be used in many ways, we are primarily concerned with the prediction of values for one variable from another variable. Examination of the literature reveals considerable inconsistency in the presentation of the results of regression analysis and the occurrence of patterns in the scatter of data points about the regression line. Both circumstances confound utilization and evaluation of the models. Statisticians are well aware of various problems associated with the use of regression analysis and offer improved practices; often, however, their guidelines are not followed. After a review of the aforementioned circumstances and until standard criteria for model evaluation become established, we recommend, as a minimum, inclusion of scatter diagrams, the standard error of the estimate, and sample size in reporting the results of regression analyses for most surface-process studies. ?? 1993 Springer-Verlag.
Local amplification of storm surge by Super Typhoon Haiyan in Leyte Gulf
Mori, Nobuhito; Kato, Masaya; Kim, Sooyoul; Mase, Hajime; Shibutani, Yoko; Takemi, Tetsuya; Tsuboki, Kazuhisa; Yasuda, Tomohiro
2014-01-01
Typhoon Haiyan, which struck the Philippines in November 2013, was an extremely intense tropical cyclone that had a catastrophic impact. The minimum central pressure of Typhoon Haiyan was 895 hPa, making it the strongest typhoon to make landfall on a major island in the western North Pacific Ocean. The characteristics of Typhoon Haiyan and its related storm surge are estimated by numerical experiments using numerical weather prediction models and a storm surge model. Based on the analysis of best hindcast results, the storm surge level was 5–6 m and local amplification of water surface elevation due to seiche was found to be significant inside Leyte Gulf. The numerical experiments show the coherent structure of the storm surge profile due to the specific bathymetry of Leyte Gulf and the Philippines Trench as a major contributor to the disaster in Tacloban. The numerical results also indicated the sensitivity of storm surge forecast. PMID:25821268
Improving stability of regional numerical ocean models
NASA Astrophysics Data System (ADS)
Herzfeld, Mike
2009-02-01
An operational limited-area ocean modelling system was developed to supply forecasts of ocean state out to 3 days. This system is designed to allow non-specialist users to locate the model domain anywhere within the Australasian region with minimum user input. The model is required to produce a stable simulation every time it is invoked. This paper outlines the methodology used to ensure the model remains stable over the wide range of circumstances it might encounter. Central to the model configuration is an alternative approach to implementing open boundary conditions in a one-way nesting environment. Approximately 170 simulations were performed on limited areas in the Australasian region to assess the model stability; of these, 130 ran successfully with a static model parameterisation allowing a statistical estimate of the model’s approach toward instability to be determined. Based on this, when the model was deemed to be approaching instability a strategy of adaptive intervention in the form of constraint on velocity and elevation was invoked to maintain stability.
Global marine protected areas do not secure the evolutionary history of tropical corals and fishes
Mouillot, D.; Parravicini, V.; Bellwood, D. R.; Leprieur, F.; Huang, D.; Cowman, P. F.; Albouy, C.; Hughes, T. P.; Thuiller, W.; Guilhaumon, F.
2016-01-01
Although coral reefs support the largest concentrations of marine biodiversity worldwide, the extent to which the global system of marine-protected areas (MPAs) represents individual species and the breadth of evolutionary history across the Tree of Life has never been quantified. Here we show that only 5.7% of scleractinian coral species and 21.7% of labrid fish species reach the minimum protection target of 10% of their geographic ranges within MPAs. We also estimate that the current global MPA system secures only 1.7% of the Tree of Life for corals, and 17.6% for fishes. Regionally, the Atlantic and Eastern Pacific show the greatest deficit of protection for corals while for fishes this deficit is located primarily in the Western Indian Ocean and in the Central Pacific. Our results call for a global coordinated expansion of current conservation efforts to fully secure the Tree of Life on coral reefs. PMID:26756609
Assessing the Value of Frost Forecasts to Orchardists: A Dynamic Decision-Making Approach.
NASA Astrophysics Data System (ADS)
Katz, Richard W.; Murphy, Allan H.; Winkler, Robert L.
1982-04-01
The methodology of decision analysis is used to investigate the economic value of frost (i.e., minimum temperature) forecasts to orchardists. First, the fruit-frost situation and previous studies of the value of minimum temperature forecasts in this context are described. Then, after a brief overview of decision analysis, a decision-making model for the fruit-frost problem is presented. The model involves identifying the relevant actions and events (or outcomes), specifying the effect of taking protective action, and describing the relationships among temperature, bud loss, and yield loss. A bivariate normal distribution is used to model the relationship between forecast and observed temperatures, thereby characterizing the quality of different types of information. Since the orchardist wants to minimize expenses (or maximize payoffs) over the entire frost-protection season and since current actions and outcomes at any point in the season are related to both previous and future actions and outcomes, the decision-making problem is inherently dynamic in nature. As a result, a class of dynamic models known as Markov decision processes is considered. A computational technique called dynamic programming is used in conjunction with these models to determine the optimal actions and to estimate the value of meteorological information.Some results concerning the value of frost forecasts to orchardists in the Yakima Valley of central Washington are presented for the cases of red delicious apples, bartlett pears, and elberta peaches. Estimates of the parameter values in the Markov decision process are obtained from relevant physical and economic data. Twenty years of National Weather Service forecast and observed temperatures for the Yakima key station are used to estimate the quality of different types of information, including perfect forecasts, current forecasts, and climatological information. The orchardist's optimal actions over the frost-protection season and the expected expenses associated with the use of such information are determined using a dynamic programming algorithm. The value of meteorological information is defined as the difference between the expected expense for the information of interest and the expected expense for climatological information. Over the entire frost-protection season, the value estimates (in 1977 dollars) for current forecasts were $808 per acre for red delicious apples, $492 per acre for bartlett pears, and $270 per acre for elberta peaches. These amounts account for 66, 63, and 47%, respectively, of the economic value associated with decisions based on perfect forecasts. Varying the quality of the minimum temperature forecasts reveals that the relationship between the accuracy and value of such forecasts is nonlinear and that improvements in current forecasts would not be as significant in terms of economic value as were comparable improvements in the past.Several possible extensions of this study of the value of frost forecasts to orchardists are briefly described. Finally, the application of the dynamic model formulated in this paper to other decision-making problems involving the use of meteorological information is mentioned.
Establishment of a center of excellence for applied mathematical and statistical research
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
The state of the art was assessed with regards to efforts in support of the crop production estimation problem and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greeness profile (Badhwarmos model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined including the properties of Weibull distribution.
NASA Astrophysics Data System (ADS)
Costa, Carlos H.; Owen, Lewis A.; Ricci, Walter R.; Johnson, William J.; Halperin, Alan D.
2018-07-01
Trench excavations across the El Molino fault in the southeastern Pampean Ranges of central-western Argentina have revealed a deformation zone composed of opposite-verging thrusts that deform a succession of Holocene sediments. The west-verging thrusts place Precambrian basement over Holocene proximal scarp-derived deposits, whereas the east-verging thrusts form an east-directed fault-propagation fold that deforms colluvium, fluvial and aeolian deposits. Ages for exposed fault-related deposits range from 7.1 ± 0.4 to 0.3 ka. Evidence of surface deformation suggests multiple rupture events with related scarp-derived deposits and a minimum of three surface ruptures younger than 7.1 ± 0.4 ka, the last rupture event being younger than 1 ka. Shortening rates of 0.7 ± 0.2 mm/a are near one order of magnitude higher than those estimated for the faults bounding neighboring crustal blocks and are considered high for this intraplate setting. These ground-rupturing crustal earthquakes are estimated to be of magnitude Mw ≥ 7.0, a significant discrepancy with the magnitudes Mw < 6.5 recorded in the seismic catalog of this region at present with low to moderate seismicity. Results highlight the relevance of identifying primary surface ruptures as well as the seismogenic potential of thrust faults in seemingly stable continental interiors.
Future trends which will influence waste disposal.
Wolman, A
1978-01-01
The disposal and management of solid wastes are ancient problems. The evolution of practices naturally changed as populations grew and sites for disposal became less acceptable. The central search was for easy disposal at minimum costs. The methods changed from indiscriminate dumping to sanitary landfill, feeding to swine, reduction, incineration, and various forms of re-use and recycling. Virtually all procedures have disabilities and rising costs. Many methods once abandoned are being rediscovered. Promises for so-called innovations outstrip accomplishments. Markets for salvage vary widely or disappear completely. The search for conserving materials and energy at minimum cost must go on forever. PMID:570105
Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elgered, G.; Davis, J.L.; Herring, T.A.
1991-04-10
An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor readiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data of Kalman filtering to correct for atmospheric propagation delay atmore » the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20{degrees}; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35{degrees}.« less
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
Atmospheric Science Data Center
2013-04-19
... scale, with maximum sustained winds of 115 mph (185 kph), and a minimum central pressure of 951 hPa, according to NOAA's National ... an angle of 46 degrees. The storm is visible to the north of Cuba, which is located in the lower left of the image. Irene's eye is covered ...
Economics of Undiscovered Oil and Gas in the North Slope of Alaska: Economic Update and Synthesis
Attanasi, E.D.; Freeman, P.A.
2009-01-01
The U.S. Geological Survey (USGS) has published assessments by geologists of undiscovered conventional oil and gas accumulations in the North Slope of Alaska; these assessments contain a set of scientifically based estimates of undiscovered, technically recoverable quantities of oil and gas in discrete oil and gas accumulations that can be produced with conventional recovery technology. The assessments do not incorporate economic factors such as recovery costs and product prices. The assessors considered undiscovered conventional oil and gas resources in four areas of the North Slope: (1) the central North Slope, (2) the National Petroleum Reserve in Alaska (NPRA), (3) the 1002 Area of the Arctic National Wildlife Refuge (ANWR), and (4) the area west of the NPRA, called in this report the 'western North Slope'. These analyses were prepared at different times with various minimum assessed oil and gas accumulation sizes and with slightly different assumptions. Results of these past studies were recently supplemented with information by the assessment geologists that allowed adjustments for uniform minimum assessed accumulation sizes and a consistent set of assumptions. The effort permitted the statistical aggregation of the assessments of the four areas composing the study area. This economic analysis is based on undiscovered assessed accumulation distributions represented by the four-area aggregation and incorporates updates of costs and technological and fiscal assumptions used in the initial economic analysis that accompanied the geologic assessment of each study area.
Moreira, Patricia V L; Baraldi, Larissa Galastri; Moubarac, Jean-Claude; Monteiro, Carlos Augusto; Newton, Alex; Capewell, Simon; O'Flaherty, Martin
2015-01-01
The global burden of non-communicable diseases partly reflects growing exposure to ultra-processed food products (UPPs). These heavily marketed UPPs are cheap and convenient for consumers and profitable for manufacturers, but contain high levels of salt, fat and sugars. This study aimed to explore the potential mortality reduction associated with future policies for substantially reducing ultra-processed food intake in the UK. We obtained data from the UK Living Cost and Food Survey and from the National Diet and Nutrition Survey. By the NOVA food typology, all food items were categorized into three groups according to the extent of food processing: Group 1 describes unprocessed/minimally processed foods. Group 2 comprises processed culinary ingredients. Group 3 includes all processed or ultra-processed products. Using UK nutrient conversion tables, we estimated the energy and nutrient profile of each food group. We then used the IMPACT Food Policy model to estimate reductions in cardiovascular mortality from improved nutrient intakes reflecting shifts from processed or ultra-processed to unprocessed/minimally processed foods. We then conducted probabilistic sensitivity analyses using Monte Carlo simulation. Approximately 175,000 cardiovascular disease (CVD) deaths might be expected in 2030 if current mortality patterns persist. However, halving the intake of Group 3 (processed) foods could result in approximately 22,055 fewer CVD related deaths in 2030 (minimum estimate 10,705, maximum estimate 34,625). An ideal scenario in which salt and fat intakes are reduced to the low levels observed in Group 1 and 2 could lead to approximately 14,235 (minimum estimate 6,680, maximum estimate 22,525) fewer coronary deaths and approximately 7,820 (minimum estimate 4,025, maximum estimate 12,100) fewer stroke deaths, comprising almost 13% mortality reduction. This study shows a substantial potential for reducing the cardiovascular disease burden through a healthier food system. It highlights the crucial importance of implementing healthier UK food policies.
Labson, Victor F.; Clark, Roger N.; Swayze, Gregg A.; Hoefen, Todd M.; Kokaly, Raymond F.; Livo, K. Eric; Powers, Michael H.; Plumlee, Geoffrey S.; Meeker, Gregory P.
2010-01-01
All of the calculations and results in this report are preliminary and intended for the purpose, and only for the purpose, of aiding the incident team in assessing the extent of the spilled oil for ongoing response efforts. Other applications of this report are not authorized and are not considered valid. Because of time constraints and limitations of data available to the experts, many of their estimates are approximate, are subject to revision, and certainly should not be used as the Federal Government's final values for assessing volume of the spill or its impact to the environment or to coastal communities. Each expert that contributed to this report reserves the right to alter his conclusions based upon further analysis or additional information. An estimated minimum total oil discharge was determined by calculations of oil volumes measured as of May 17, 2010. This included oil on the ocean surface measured with satellite and airborne images and with spectroscopic data (129,000 barrels to 246,000 barrels using less and more aggressive assumptions, respectively), oil skimmed off the surface (23,500 barrels from U.S. Coast Guard [USCG] estimates), oil burned off the surface (11,500 barrels from USCG estimates), dispersed subsea oil (67,000 to 114,000 barrels), and oil evaporated or dissolved (109,000 to 185,000 barrels). Sedimentation (oil captured from Mississippi River silt and deposited on the ocean bottom), biodegradation, and other processes may indicate significant oil volumes beyond our analyses, as will any subsurface volumes such as suspended tar balls or other emulsions that are not included in our estimates. The lower bounds of total measured volumes are estimated to be within the range of 340,000 to 580,000 barrels as of May 17, 2010, for an estimated average minimum discharge rate of 12,500 to 21,500 barrels per day for 27 days from April 20 to May 17, 2010.
Fernández-Llama, Patricia; Pareja, Júlia; Yun, Sergi; Vázquez, Susana; Oliveras, Anna; Armario, Pedro; Blanch, Pedro; Calero, Francesca; Sierra, Cristina; de la Sierra, Alejandro
2017-01-01
Central blood pressure (BP) has been suggested to be a better estimator of hypertension-associated risks. We aimed to evaluate the association of 24-hour central BP, in comparison with 24-hour peripheral BP, with the presence of renal organ damage in hypertensive patients. Brachial and central (calculated by an oscillometric system through brachial pulse wave analysis) office BP and ambulatory BP monitoring (ABPM) data and aortic pulse wave velocity (PWV) were measured in 208 hypertensive patients. Renal organ damage was evaluated by means of the albumin to creatinine ratio and the estimated glomerular filtration rate. Fifty-four patients (25.9%) were affected by renal organ damage, displaying either microalbuminuria (urinary albumin excretion ≥30 mg/g creatinine) or an estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m2. Compared to those without renal abnormalities, hypertensive patients with kidney damage had higher values of office brachial systolic BP (SBP) and pulse pressure (PP), and 24-h, daytime, and nighttime central and brachial SBP and PP. They also had a blunted nocturnal decrease in both central and brachial BP, and higher values of aortic PWV. After adjustment for age, gender, and antihypertensive treatment, only ABPM-derived BP estimates (both central and brachial) showed significant associations with the presence of renal damage. Odds ratios for central BP estimates were not significantly higher than those obtained for brachial BP. Compared with peripheral ABPM, cuff-based oscillometric central ABPM does not show a closer association with presence of renal organ damage in hypertensive patients. More studies, however, need to be done to better identify the role of central BP in clinical practice. © 2017 The Author(s). Published by S. Karger AG, Basel.
Havelaar, Arie H; Vazquez, Kathleen M; Topalcengiz, Zeynal; Muñoz-Carpena, Rafael; Danyluk, Michelle D
2017-10-09
The U.S. Food and Drug Administration (FDA) has defined standards for the microbial quality of agricultural surface water used for irrigation. According to the FDA produce safety rule (PSR), a microbial water quality profile requires analysis of a minimum of 20 samples for Escherichia coli over 2 to 4 years. The geometric mean (GM) level of E. coli should not exceed 126 CFU/100 mL, and the statistical threshold value (STV) should not exceed 410 CFU/100 mL. The water quality profile should be updated by analysis of a minimum of five samples per year. We used an extensive set of data on levels of E. coli and other fecal indicator organisms, the presence or absence of Salmonella, and physicochemical parameters in six agricultural irrigation ponds in West Central Florida to evaluate the empirical and theoretical basis of this PSR. We found highly variable log-transformed E. coli levels, with standard deviations exceeding those assumed in the PSR by up to threefold. Lognormal distributions provided an acceptable fit to the data in most cases but may underestimate extreme levels. Replacing censored data with the detection limit of the microbial tests underestimated the true variability, leading to biased estimates of GM and STV. Maximum likelihood estimation using truncated lognormal distributions is recommended. Twenty samples are not sufficient to characterize the bacteriological quality of irrigation ponds, and a rolling data set of five samples per year used to update GM and STV values results in highly uncertain results and delays in detecting a shift in water quality. In these ponds, E. coli was an adequate predictor of the presence of Salmonella in 150-mL samples, and turbidity was a second significant variable. The variability in levels of E. coli in agricultural water was higher than that anticipated when the PSR was finalized, and more detailed information based on mechanistic modeling is necessary to develop targeted risk management strategies.
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many bad effects in life. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx {x=1,2}. The purpose of the research is to find out the influence of So4 and NO3 levels contained in the rain to the acidity (pH) of rainwater. The data are incomplete panel data with two-way error component model. The panel data is a collection of some of the observations that observed from time to time. It is said incomplete if each individual has a different amount of observation. The model used in this research is in the form of random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
The minimum distance approach to classification
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1971-01-01
The work to advance the state-of-the-art of miminum distance classification is reportd. This is accomplished through a combination of theoretical and comprehensive experimental investigations based on multispectral scanner data. A survey of the literature for suitable distance measures was conducted and the results of this survey are presented. It is shown that minimum distance classification, using density estimators and Kullback-Leibler numbers as the distance measure, is equivalent to a form of maximum likelihood sample classification. It is also shown that for the parametric case, minimum distance classification is equivalent to nearest neighbor classification in the parameter space.
The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight
Livingston, Melvin D.; Markowitz, Sara; Wagenaar, Alexander C.
2016-01-01
Objectives. To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. Methods. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28–364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Results. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. Conclusions. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year. PMID:27310355
The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight.
Komro, Kelli A; Livingston, Melvin D; Markowitz, Sara; Wagenaar, Alexander C
2016-08-01
To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28-364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year.
Hybrid Stochastic Models for Remaining Lifetime Prognosis
2004-08-01
literature for techniques and comparisons. Os- ogami and Harchol-Balter [70], Perros [73], Johnson [36], and Altiok [5] provide excellent summaries of...and type of PH-distribution approximation for c2 > 0.5 is not as obvious. In order to use the minimum distance estimation, Perros [73] indicated that...moment-matching techniques. Perros [73] indicated that the maximum likelihood and minimum distance techniques require nonlinear optimization. Johnson
Centralized versus distributed propulsion
NASA Technical Reports Server (NTRS)
Clark, J. P.
1982-01-01
The functions and requirements of auxiliary propulsion systems are reviewed. None of the three major tasks (attitude control, stationkeeping, and shape control) can be performed by a collection of thrusters at a single central location. If a centralized system is defined as a collection of separated clusters, made up of the minimum number of propulsion units, then such a system can provide attitude control and stationkeeping for most vehicles. A distributed propulsion system is characterized by more numerous propulsion units in a regularly distributed arrangement. Various proposed large space systems are reviewed and it is concluded that centralized auxiliary propulsion is best suited to vehicles with a relatively rigid core. These vehicles may carry a number of flexible or movable appendages. A second group, consisting of one or more large flexible flat plates, may need distributed propulsion for shape control. There is a third group, consisting of vehicles built up from multiple shuttle launches, which may be forced into a distributed system because of the need to add additional propulsion units as the vehicles grow. The effects of distributed propulsion on a beam-like structure were examined. The deflection of the structure under both translational and rotational thrusts is shown as a function of the number of equally spaced thrusters. When two thrusters only are used it is shown that location is an important parameter. The possibility of using distributed propulsion to achieve minimum overall system weight is also examined. Finally, an examination of the active damping by distributed propulsion is described.
On the minimum quantum requirement of photosynthesis.
Zeinalov, Yuzeir
2009-01-01
An analysis of the shape of photosynthetic light curves is presented and the existence of the initial non-linear part is shown as a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves or the values of the minima of the quantum requirement curves cannot be used for estimation of the exact value of the maximum quantum efficiency and the minimum quantum requirement. The estimation of the maximum quantum efficiency or the minimum quantum requirement should be performed only after extrapolation of the linear part at higher light intensities of the quantum requirement curves to "0" light intensity.
Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.
Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L
2018-01-01
Biochemical methane potential (BMP) test is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, this technique is characterised by long testing times (from 20 to >100days), which is not suitable for waste utilities, consulting companies or plants operators whose decision-making processes cannot be held for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of BMP first-order model parameters, i.e. methane yield (B 0 ) and kinetic constant rate (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k≤0.1d -1 ) have a minimum testing times of ≥15days, (ii) moderately biodegradable substrates (0.1
30 CFR 1206.151 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... interests regarding that contract. To be considered arm's length for any production month, a contract must.... Gathering means the movement of lease production to a central accumulation and/or treatment point on the... is to acquire only the lessee's production and to market that production. Minimum royalty means that...
A Model of Historical Thinking
ERIC Educational Resources Information Center
Seixas, Peter
2017-01-01
"Historical thinking" has a central role in the theory and practice of history education. At a minimum, history educators must work with a model of historical thinking if they are to formulate potential progression in students' advance through a school history curriculum, test that progression empirically, and shape instructional…
Probabilities of having minimum amounts of available soil water at wheat planting
USDA-ARS?s Scientific Manuscript database
Winter wheat (Triticum aestivum L.)-fallow (WF) remains a prominent cropping system throughout the Central Great Plains despite documentation confirming the inefficiency of precipitation storage during the second summer fallow period. Wheat yield is greatly influenced by available soil water at plan...
Neural network evaluation of tokamak current profiles for real time control
NASA Astrophysics Data System (ADS)
Wróblewski, Dariusz
1997-02-01
Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental datais demonstrated.
Neural network evaluation of tokamak current profiles for real time control (abstract)
NASA Astrophysics Data System (ADS)
Wróblewski, Dariusz
1997-01-01
Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Peter C.; Roos, Daniel E.; Pratt, Gary
2006-02-01
Purpose: To assess, in a multicenter setting, the long-term outcomes of a brief course of high-dose methotrexate followed by radiotherapy for patients with primary central nervous system lymphoma (PCNSL). Methods and Materials: Forty-six patients were entered in a Phase II protocol consisting of methotrexate (1 g/m{sup 2} on Days 1 and 8), followed by whole-brain irradiation (45-50.4 Gy). The median follow-up time was 7 years, with a minimum follow-up of 5 years. Results: The 5-year survival estimate was 37% ({+-}14%, 95% confidence interval [CI]), with progression-free survival being 36% ({+-}15%, 95% CI), and median survival 36 months. Of the originalmore » 46 patients, 10 were alive, all without evidence of disease recurrence. A total of 11 patients have developed neurotoxicity, with the actuarial risk being 30% ({+-}18%, 95% CI) at 5 years but continuing to increase. For patients aged >60 years the risk of neurotoxicity at 7 years was 58% ({+-}30%, 95% CI). Conclusion: Combined-modality therapy, based on high-dose methotrexate, results in improved survival outcomes in PCNSL. The risk of neurotoxicity for patients aged >60 years is unacceptable with this regimen, although survival outcomes for patients aged >60 years were higher than in many other series.« less
Portfolio optimization and the random magnet problem
NASA Astrophysics Data System (ADS)
Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.
2002-08-01
Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated and therefore knowledge of cross-correlations among asset price movements are of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
Estimation of additive forces and moments for supersonic inlets
NASA Technical Reports Server (NTRS)
Perkins, Stanley C., Jr.; Dillenius, Marnix F. E.
1991-01-01
A technique for estimating the additive forces and moments associated with supersonic, external compression inlets as a function of mass flow ratio has been developed. The technique makes use of a low order supersonic paneling method for calculating minimum additive forces at maximum mass flow conditions. A linear relationship between the minimum additive forces and the maximum values for fully blocked flow is employed to obtain the additive forces at a specified mass flow ratio. The method is applicable to two-dimensional inlets at zero or nonzero angle of attack, and to axisymmetric inlets at zero angle of attack. Comparisons with limited available additive drag data indicate fair to good agreement.
Auger electron and characteristic energy loss spectra for electro-deposited americium-241
NASA Astrophysics Data System (ADS)
Varma, Matesh N.; Baum, John W.
1983-07-01
Auger electron energy spectra for electro-deposited americium-241 on platinum substrate were obtained using a cylindrical mirror analyzer. Characteristic energy loss spectra for this sample were also obtained at primary electron beam energies of 990 and 390 eV. From these measurements PI, PII, and PIII energy levels for americium-241 are determined. Auger electron energies are compared with theoretically calculated values. Minimum detectability under the present condition of sample preparation and equipment was estimated at approximately 1.2×10-8 g/cm2 or 3.9×10-8 Ci/cm2. Minimum detectability for plutonium-239 under similar conditions was estimated at about 7.2×10-10 Ci/cm2.
NASA Astrophysics Data System (ADS)
Mottram, Catherine M.; Parrish, Randall R.; Regis, Daniele; Warren, Clare J.; Argles, Tom W.; Harris, Nigel B. W.; Roberts, Nick M. W.
2015-07-01
Quantitative constraints on the rates of tectonic processes underpin our understanding of the mechanisms that form mountains. In the Sikkim Himalaya, late structural doming has revealed time-transgressive evidence of metamorphism and thrusting that permit calculation of the minimum rate of movement on a major ductile fault zone, the Main Central Thrust (MCT), by a novel methodology. U-Th-Pb monazite ages, compositions, and metamorphic pressure-temperature determinations from rocks directly beneath the MCT reveal that samples from 50 km along the transport direction of the thrust experienced similar prograde, peak, and retrograde metamorphic conditions at different times. In the southern, frontal edge of the thrust zone, the rocks were buried to conditions of 550°C and 0.8 GPa between 21 and 18 Ma along the prograde path. Peak metamorphic conditions of 650°C and 0.8-1.0 GPa were subsequently reached as this footwall material was underplated to the hanging wall at 17-14 Ma. This same process occurred at analogous metamorphic conditions between 18-16 Ma and 14.5-13 Ma in the midsection of the thrust zone and between 13 Ma and 12 Ma in the northern, rear edge of the thrust zone. Northward younging muscovite 40Ar/39Ar ages are consistently 4 Ma younger than the youngest monazite ages for equivalent samples. By combining the geochronological data with the >50 km minimum distance separating samples along the transport axis, a minimum average thrusting rate of 10 ± 3 mm yr-1 can be calculated. This provides a minimum constraint on the amount of Miocene India-Asia convergence that was accommodated along the MCT.
The 2014 X-Ray Minimum of η Carinae as Seen by Swift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corcoran, M. F.; Hamaguchi, K.; Liburd, J.
We report on Swift X-ray Telescope observations of Eta Carinae ( η Car), an extremely massive, long-period, highly eccentric binary obtained during the 2014.6 X-ray minimum/periastron passage. These observations show that η Car may have been particularly bright in X-rays going into the X-ray minimum state, while the duration of the 2014 X-ray minimum was intermediate between the extended minima seen in 1998.0 and 2003.5 by Rossi X-Ray Timing Explorer ( RXTE ), and the shorter minimum in 2009.0. The hardness ratios derived from the Swift observations showed a relatively smooth increase to a peak value occurring 40.5 days aftermore » the start of the X-ray minimum, though these observations cannot reliably measure the X-ray hardness during the deepest part of the X-ray minimum when contamination by the “central constant emission” component is significant. By comparing the timings of the RXTE and Swift observations near the X-ray minima, we derive an updated X-ray period of P {sub X} = 2023.7 ± 0.7 days, in good agreement with periods derived from observations at other wavelengths, and we compare the X-ray changes with variations in the He ii 4686 emission. The middle of the “Deep Minimum” interval, as defined by the Swift column density variations, is in good agreement with the time of periastron passage derived from the He ii λ 4686 line variations.« less
AOAC SMPR 2015.009: Estimation of total phenolic content using Folin-C Assay
USDA-ARS?s Scientific Manuscript database
This AOAC Standard Method Performance Requirements (SMPR) is for estimation of total soluble phenolic content in dietary supplement raw materials and finished products using the Folin-C assay for comparison within same matrices. SMPRs describe the minimum recommended performance characteristics to b...
ELECTROFISHING DISTANCE NEEDED TO ESTIMATE FISH SPECIES RICHNESS IN RAFTABLE WESTERN USA RIVERS
A critical issue in river monitoring is the minimum amount of sampling distance required to adequately represent the fish assemblage of a reach. Determining adequate sampling distance is important because it affects estimates of fish assemblage integrity and diversity at local a...
32 CFR 218.4 - Dose estimate reporting standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...
32 CFR 218.4 - Dose estimate reporting standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...
32 CFR 218.4 - Dose estimate reporting standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...
32 CFR 218.4 - Dose estimate reporting standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...
32 CFR 218.4 - Dose estimate reporting standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...
NASA Technical Reports Server (NTRS)
Dong, Xiquan; Xi, Baike; Minnis, Patrick
2006-01-01
Data collected at the Department of Energy Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) central facility are analyzed for determining the variability of cloud fraction and radiative forcing at several temporal scales between January 1997 and December 2002. Cloud fractions are estimated for total cloud cover and for single-layer low (0-3 km), middle (3-6 km), and high clouds (greater than 6 km) using ARM SGP ground-based paired lidar-radar measurements. Shortwave (SW), longwave (LW), and net cloud radiative forcings (CRF) are derived from up- and down-looking standard precision spectral pyranometers and precision infrared radiometer measurements. The annual averages of total, and single-layer, nonoverlapped low, middle and high cloud fractions are 0.49, 0.11, 0.03, and 0.17, respectively. Total and low cloud amounts were greatest from December through March and least during July and August. The monthly variation of high cloud amount is relatively small with a broad maximum from May to August. During winter, total cloud cover varies diurnally with a small amplitude, mid-morning maximum and early evening minimum, and during summer it changes by more than 0.14 over the daily cycle with a pronounced early evening minimum. The diurnal variations of mean single-layer cloud cover change with season and cloud height. Annual averages of all-sky, total, and single-layer high, middle, and low LW CRFs are 21.4, 40.2, 16.7, 27.2, and 55.0 Wm(sup -2), respectively; and their SW CRFs are -41.5, -77.2, -37.0, -47.0, and -90.5 Wm(sup -2). Their net CRFs range from -20 to -37 Wm(sup -2). For all-sky, total, and low clouds, the maximum negative net CRFs of -40.1, -70, and -69.5 Wm(sup -2), occur during April; while the respective minimum values of -3.9, -5.7, and -4.6 Wm(sup -2), are found during December. July is the month having maximum negative net CRF of -46.2 Wm(sup -2) for middle clouds, and May has the maximum value of -45.9 Wm(sup -2) for high clouds. An uncertainty analysis demonstrates that the calculated CRFs are not significantly affected by the difference between clear-sky and cloudy conditions. A more comprehensive cloud fraction study from both surface and satellite observations will follow.
Gómez-Polo, Cristina; Gómez-Polo, Miguel; Celemín Viñuela, Alicia; Martínez Vázquez de Parga, Juan Antonio
2015-03-01
The 3D-Master System comprises 26 physical shade tabs and intermediate shades. Determining the relationship among all the groups of lightness, chroma, and hue of the 3D-Master System (Vita Zahnfabrik) and the L*, C*, and h* coordinates is important, because according to the manufacturer, 2 Toothguide 3D-Master shades need to be mixed in a 50:50 ratio to create an intermediate shade. The purpose of the study was to relate the lightness, chroma, and hue groups of the 3D-Master System with the polar coordinates of the CIELAB chromatic space, L*, C*, and h*, and to quantify the shades tabs and intermediate shades of the 3D-Master System according to color coordinates. The middle third of the facial surface of a natural maxillary central incisor was measured with an Easyshade Compact spectrophotometer (Vita Zahnfabrik) in 1361 Spanish participants aged between 16 and 89 years. Natural tooth color was recorded in the 3D-Master nomenclature and in the CIE L*, C*, and h* coordinates system. The program used for the present descriptive statistical analysis of the results was SAS 9.1.3. In the L* variable, the minimum was found at 47.0 and the maximum at 91.3. In the C* variable, the minimum was found at 5.9 and the maximum at 49.8, while for h*, the minimum was 67.5 degrees and the maximum 112.0 degrees. Despite the limitations of this study, the 3D-Master System was found to be arranged according to L*, C*, and h* coordinates in groups of lightness, chroma, and hue. The corresponding groups of lightness, chroma, and hue can be estimated on the basis of L*, C*, and h* coordinates. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Diffusion-model analysis of pPb and PbPb collisions at LHC energies
NASA Astrophysics Data System (ADS)
Schulz, P.; Wolschin, G.
2018-06-01
We present an analysis of centrality-dependent pseudorapidity distributions of produced charged hadrons in pPb and PbPb collisions at the Large Hadron Collider (LHC) energy of s NN = 5.02 TeV, and of minimum-bias pPb collisions at 8.16 TeV within the non-equilibrium-statistical relativistic diffusion model (RDM). In a three-source approach, the role of the fragmentation sources is emphasized. Together with the Jacobian transformation from rapidity to pseudorapidity and the limiting fragmentation conjecture, these are essential for modeling the centrality dependence. For central PbPb collisions, a prediction at the projected FCC energy of s NN = 39 TeV is made.
Nocturnal Oviposition Behavior of Forensically Important Diptera in Central England.
Barnes, Kate M; Grace, Karon A; Bulling, Mark T
2015-11-01
Timing of oviposition on a corpse is a key factor in entomologically based minimum postmortem interval (mPMI) calculations. However, there is considerable variation in nocturnal oviposition behavior of blow flies reported in the research literature. This study investigated nocturnal oviposition in central England for the first time, over 25 trials from 2011 to 2013. Liver-baited traps were placed in an urban location during control (diurnal), and nocturnal periods and environmental conditions were recorded during each 5-h trial. No nocturnal activity or oviposition was observed during the course of the study indicating that nocturnal oviposition is highly unlikely in central England. © 2015 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N. K.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nguyen, M.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tang, J.-L.; Tonjes, M. B.; Trzupek, A.; Vale, C. M.; Nieuwenhuizen, G. J.; Verdier, R.; Veres, G. I.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.
2005-11-01
This Rapid Communication describes the measurement of elliptic flow for charged particles in Au+Au collisions at √(sNN)=200 GeV using the PHOBOS detector at the Relativistic Heavy Ion Collider. The measured azimuthal anisotropy is presented over a wide range of pseudorapidity for three broad collision centrality classes for the first time at this energy. Two distinct methods of extracting the flow signal were used to reduce systematic uncertainties. The elliptic flow falls sharply with increasing |η| at 200 GeV for all the centralities studied, as observed for minimum-bias collisions at √(sNN)=130 GeV.
Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field
NASA Astrophysics Data System (ADS)
Metwally, N.
2018-06-01
In this contribution, quantum Fisher information is utilized to estimate the parameters of a central qubit interacting with a single-spin qubit. The effect of the longitudinal, transverse and the rotating strengths of the magnetic field on the estimation degree is discussed. It is shown that, in the resonance case, the number of peaks and consequently the size of the estimation regions increase as the rotating magnetic field strength increases. The precision estimation of the central qubit parameters depends on the initial state settings of the central and the spin-qubit, either encode classical or quantum information. It is displayed that, the upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on which of the longitudinal/transverse strength is larger. The coupling constant between the central qubit and the spin-qubit has a different effect on the estimation degree of the weight and the phase parameters, where the possibility of estimating the weight parameter decreases as the coupling constant increases, while it increases for the phase parameter. For large number of spin-particles, namely, we have a spin-bath particles, the upper bounds of the Fisher information with respect to the weight parameter of the central qubit decreases as the number of the spin particle increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
High-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients as well as unknown measurement biases may cause large estimation errors of conventional Kalman filters. This paper proposes a derivative-free version of nonlinear unbiased minimum variance filter for Mars entry navigation. This filter has been designed to solve this problem by estimating the state and unknown measurement biases simultaneously with derivative-free character, leading to a high-precision algorithm for the Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, our proposed filter could achieve an accurate state estimation, much better than the conventional unscented Kalman filter, showing the ability of high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Estimation of the transmissivity of thin leaky-confined aquifers from single-well pumping tests
NASA Astrophysics Data System (ADS)
Worthington, Paul F.
1981-01-01
Data from the quasi-equilibrium phases of a step-drawdown test are used to evaluate the coefficient of non-linear head losses subject to the assumption of a constant effective well radius. After applying a well-loss correction to the observed drawdowns of the first step, an approximation method is used to estimate a pseudo-transmissivity of the aquifer from a single value of time-variant drawdown. The pseudo-transmissivities computed for each of a sequence of values of time pass through a minimum when there is least manifestation of casing-storage and leakage effects, phenomena to which pumping-test data of this kind are particularly susceptible. This minimum pseudo-transmissivity, adjusted for partial penetration effects where appropriate, constitutes the best possible estimate of aquifer transmissivity. The ease of application of the overall procedure is illustrated by a practical example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, A H; Kerr, L A; Cailliet, G M
2007-11-04
Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, themore » unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon ({sup 14}C) dating. Age estimate accuracy and the validity of age estimation procedures were validated based on the results from each technique. Lead-radium dating proved successful in determining a minimum estimate of lifespan was 53 years and provided support for age estimation procedures up to about 50-60 years. These findings were further supported by {Delta}{sup 14}C data, which indicated a minimum estimate of lifespan was 44 {+-} 3 years. Both techniques validate, to differing degrees, age estimation procedures and provide support for inferring that canary rockfish can live more than 80 years.« less
Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera
Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin
2016-01-01
This study aims to demonstrate the feasibility for classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting minimum diameter of the equatorial section and volume and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with the coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one with the coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-11-01
With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed form expressions for the optimal detection operators; visibility, a measure of the state's robustness to loss and noise; a resolution measure; and phase estimate error, are provided in closed form. The optimal resolution for the maximum visibility and minimum phase error are found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMS and 29.7 times smaller than that of N00N states.
Archer, Roger J.
1978-01-01
Minimum average 7-day, 10-year flow at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and the Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flow for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlation of discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins were estimated from available data and regional regression formula. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
Setting population targets for mammals using body mass as a predictor of population persistence.
Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J
2017-04-01
Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Lobit, P.; Gómez Tagle, A.; Bautista, F.; Lhomme, J. P.
2017-07-01
We evaluated two methods to estimate evapotranspiration (ETo) from minimal weather records (daily maximum and minimum temperatures) in Mexico: a modified reduced set FAO-Penman-Monteith method (Allen et al. 1998, Rome, Italy) and the Hargreaves and Samani (Appl Eng Agric 1(2): 96-99, 1985) method. In the reduced set method, the FAO-Penman-Monteith equation was applied with vapor pressure and radiation estimated from temperature data using two new models (see first and second articles in this series): mean temperature as the average of maximum and minimum temperature corrected for a constant bias and constant wind speed. The Hargreaves-Samani method combines two empirical relationships: one between diurnal temperature range ΔT and shortwave radiation Rs, and another one between average temperature and the ratio ETo/Rs: both relationships were evaluated and calibrated for Mexico. After performing a sensitivity analysis to evaluate the impact of different approximations on the estimation of Rs and ETo, several model combinations were tested to predict ETo from daily maximum and minimum temperature alone. The quality of fit of these models was evaluated on 786 weather stations covering most of the territory of Mexico. The best method was found to be a combination of the FAO-Penman-Monteith reduced set equation with the new radiation estimation and vapor pressure model. As an alternative, a recalibration of the Hargreaves-Samani equation is proposed.
32 CFR 154.41 - Central adjudication.
Code of Federal Regulations, 2010 CFR
2010-07-01
... each adjudicative decision can have on a person's career and to ensure the maximum degree of fairness and equity in such actions, a minimum level of review shall be required for all clearance/access... completely favorable shall undergo at least two levels of review by adjudicative officials, the second of...
NASA Technical Reports Server (NTRS)
Goel, R.; Rosenberg, M. J.; De Dios, Y. E.; Cohen, H. S.; Bloomberg, J. J.; Mulavara, A. P.
2016-01-01
Sensorimotor changes such as posture and gait instabilities can affect the functional performance of astronauts after gravitational transitions. Sensorimotor Adaptability (SA) training can help alleviate decrements on exposure to novel sensorimotor environments based on the concept of 'learning to learn' by exposure to varying sensory challenges during posture and locomotion tasks (Bloomberg 2015). Supra-threshold Stochastic Vestibular Stimulation (SVS) can be used to provide one of many challenges by disrupting vestibular inputs. In this scenario, the central nervous system can be trained to utilize veridical information from other sensory inputs, such as vision and somatosensory inputs, for posture and locomotion control. The minimum amplitude of SVS to simulate the effect of deterioration in vestibular inputs for preflight training or for evaluating vestibular contribution in functional tests in general, however, has not yet been identified. Few studies (MacDougall 2006; Dilda 2014) have used arbitrary but fixed maximum current amplitudes from 3 to 5 mA in the medio-lateral (ML) direction to disrupt balance function in healthy adults. Giving this high level of current amplitude to all the individuals has a risk of invoking side effects such as nausea and discomfort. The goal of this study was to determine the minimum SVS level that yields an equivalently degraded balance performance. Thirteen subjects stood on a compliant foam surface with their eyes closed and were instructed to maintain a stable upright stance. Measures of stability of the head, trunk, and whole body were quantified in the ML direction. Duration of time they could stand on the foam surface was also measured. The minimum SVS dosage was defined to be that level which significantly degraded balance performance such that any further increase in stimulation level did not lead to further balance degradation. The minimum SVS level was determined by performing linear fits on the performance variable at different stimulation levels. Results from the balance task suggest that there are inter-individual differences and the minimum SVS amplitude was found to be in the range of 1 mA to 2.5 mA across subjects. SVS resulted in an average decrement of balance task performance in the range of 62%-73% across different measured variables at the minimum SVS amplitude in comparison to the control trial (no stimulus). Training using supra-threshold SVS stimulation is one of the sensory challenges used for preflight SA training designed to improve adaptability to novel gravitational environments. Inter-individual differences in response to SVS can help customize the SA training paradigms using minimal dosage required. Another application of using SVS is to simulate acute deterioration of vestibular sensory inputs in the evaluation of tests for assessing vestibular function.
NASA Astrophysics Data System (ADS)
Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; Aggarwal, M. M.; Ahammed, Z.; Alekseev, I.; Aparin, A.; Arkhipkin, D.; Aschenauer, E. C.; Averichev, G. S.; Bai, X.; Bairathi, V.; Banerjee, A.; Bellwied, R.; Bhasin, A.; Bhati, A. K.; Bhattarai, P.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Bordyuzhin, I. G.; Bouchet, J.; Brandenburg, D.; Brandin, A. V.; Bunzarov, I.; Butterworth, J.; Caines, H.; Calderón de la Barca Sánchez, M.; Campbell, J. M.; Cebra, D.; Cervantes, M. C.; Chakaberia, I.; Chaloupka, P.; Chang, Z.; Chattopadhyay, S.; Chen, X.; Chen, J. H.; Cheng, J.; Cherney, M.; Chisman, O.; Christie, W.; Contin, G.; Crawford, H. J.; Das, S.; De Silva, L. C.; Debbe, R. R.; Dedovich, T. G.; Deng, J.; Derevschikov, A. A.; di Ruzza, B.; Didenko, L.; Dilks, C.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Du, C. M.; Dunkelberger, L. E.; Dunlop, J. C.; Efimov, L. G.; Engelage, J.; Eppley, G.; Esha, R.; Evdokimov, O.; Eyser, O.; Fatemi, R.; Fazio, S.; Federic, P.; Fedorisin, J.; Feng, Z.; Filip, P.; Fisyak, Y.; Flores, C. E.; Fulek, L.; Gagliardi, C. A.; Garand, D.; Geurts, F.; Gibson, A.; Girard, M.; Greiner, L.; Grosnick, D.; Gunarathne, D. S.; Guo, Y.; Gupta, A.; Gupta, S.; Guryn, W.; Hamad, A.; Hamed, A.; Haque, R.; Harris, J. W.; He, L.; Heppelmann, S.; Heppelmann, S.; Hirsch, A.; Hoffmann, G. W.; Hofman, D. J.; Horvat, S.; Huang, H. Z.; Huang, B.; Huang, X.; Huck, P.; Humanic, T. J.; Igo, G.; Jacobs, W. W.; Jang, H.; Jiang, K.; Judd, E. G.; Kabana, S.; Kalinkin, D.; Kang, K.; Kauder, K.; Ke, H. W.; Keane, D.; Kechechyan, A.; Khan, Z. H.; Kikoła, D. P.; Kisel, I.; Kisiel, A.; Kochenda, L.; Koetke, D. D.; Kollegger, T.; Kosarzewski, L. K.; Kraishan, A. F.; Kravtsov, P.; Krueger, K.; Kulakov, I.; Kumar, L.; Kycia, R. A.; Lamont, M. A. C.; Landgraf, J. M.; Landry, K. D.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, J. H.; Li, X.; Li, Y.; Li, W.; Li, C.; Li, X.; Li, Z. M.; Lisa, M. A.; Liu, F.; Ljubicic, T.; Llope, W. J.; Lomnitz, M.; Longacre, R. S.; Luo, X.; Ma, L.; Ma, Y. G.; Ma, G. L.; Ma, R.; Magdy, N.; Majka, R.; Manion, A.; Margetis, S.; Markert, C.; Masui, H.; Matis, H. S.; McDonald, D.; Meehan, K.; Minaev, N. G.; Mioduszewski, S.; Mishra, D.; Mohanty, B.; Mondal, M. M.; Morozov, D. A.; Mustafa, M. K.; Nandi, B. K.; Nasim, Md.; Nayak, T. K.; Nigmatkulov, G.; Niida, T.; Nogach, L. V.; Noh, S. Y.; Novak, J.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Oh, K.; Okorokov, V.; Olvitt, D.; Page, B. S.; Pak, R.; Pan, Y. X.; Pandit, Y.; Panebratsev, Y.; Pawlik, B.; Pei, H.; Perkins, C.; Peterson, A.; Pile, P.; Planinic, M.; Pluta, J.; Poljak, N.; Poniatowska, K.; Porter, J.; Posik, M.; Poskanzer, A. M.; Pruthi, N. K.; Putschke, J.; Qiu, H.; Quintero, A.; Ramachandran, S.; Raniwala, S.; Raniwala, R.; Ray, R. L.; Ritter, H. G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Roy, A.; Ruan, L.; Rusnak, J.; Rusnakova, O.; Sahoo, N. R.; Sahu, P. K.; Salur, S.; Sandweiss, J.; Sarkar, A.; Schambach, J.; Scharenberg, R. P.; Schmah, A. M.; Schmidke, W. B.; Schmitz, N.; Seger, J.; Seyboth, P.; Shah, N.; Shahaliev, E.; Shanmuganathan, P. V.; Shao, M.; Sharma, B.; Sharma, M. K.; Shen, W. Q.; Shi, S. S.; Shou, Q. Y.; Sichtermann, E. P.; Sikora, R.; Simko, M.; Singha, S.; Skoby, M. J.; Smirnov, N.; Smirnov, D.; Song, L.; Sorensen, P.; Spinka, H. M.; Srivastava, B.; Stanislaus, T. D. S.; Stepanov, M.; Stock, R.; Strikhanov, M.; Stringfellow, B.; Sumbera, M.; Summa, B.; Sun, X.; Sun, Z.; Sun, Y.; Sun, X. M.; Surrow, B.; Svirida, N.; Szelezniak, M. A.; Tang, Z.; Tang, A. H.; Tarnowsky, T.; Tawfik, A.; Thäder, J.; Thomas, J. H.; Timmins, A. R.; Tlusty, D.; Tokarev, M.; Trentalange, S.; Tribble, R. E.; Tribedy, P.; Tripathy, S. K.; Trzeciak, B. A.; Tsai, O. D.; Ullrich, T.; Underwood, D. G.; Upsal, I.; Van Buren, G.; van Nieuwenhuizen, G.; Vandenbroucke, M.; Varma, R.; Vasiliev, A. N.; Vertesi, R.; Videbæk, F.; Viyogi, Y. P.; Vokal, S.; Voloshin, S. A.; Vossen, A.; Wang, F.; Wang, Y.; Wang, G.; Wang, Y.; Wang, J. S.; Wang, H.; Webb, J. C.; Webb, G.; Wen, L.; Westfall, G. D.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, Y. F.; Wu, Y.; Xiao, Z. G.; Xie, W.; Xin, K.; Xu, Z.; Xu, H.; Xu, Y. F.; Xu, Q. H.; Xu, N.; Yang, Y.; Yang, C.; Yang, S.; Yang, Y.; Yang, Q.; Ye, Z.; Ye, Z.; Yepes, P.; Yi, L.; Yip, K.; Yoo, I.-K.; Yu, N.; Zbroszczyk, H.; Zha, W.; Zhang, J. B.; Zhang, Y.; Zhang, S.; Zhang, J.; Zhang, J.; Zhang, Z.; Zhang, X. P.; Zhao, J.; Zhong, C.; Zhou, L.; Zhu, X.; Zoulkarneeva, Y.; Zyzak, M.; STAR Collaboration
2016-01-01
Elliptic flow (v2) values for identified particles at midrapidity in Au + Au collisions measured by the STAR experiment in the Beam Energy Scan at the Relativistic Heavy Ion Collider at √{sN N}= 7.7 -62.4 GeV are presented for three centrality classes. The centrality dependence and the data at √{sN N}= 14.5 GeV are new. Except at the lowest beam energies, we observe a similar relative v2 baryon-meson splitting for all centrality classes which is in agreement within 15% with the number-of-constituent quark scaling. The larger v2 for most particles relative to antiparticles, already observed for minimum bias collisions, shows a clear centrality dependence, with the largest difference for the most central collisions. Also, the results are compared with a multiphase transport (AMPT) model and fit with a blast wave model.
Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; ...
2016-01-19
Here, elliptic flow (v 2) values for identified particles at midrapidity in Au + Au collisions measured by the STAR experiment in the Beam Energy Scan at the Relativistic Heavy Ion Collider at √s NN = 7.7–62.4 GeV are presented for three centrality classes. The centrality dependence and the data at √s NN = 14.5 GeV are new. Except at the lowest beam energies, we observe a similar relative v 2 baryon-meson splitting for all centrality classes which is in agreement within 15% with the number-of-constituent quark scaling. The larger v 2 for most particles relative to antiparticles, already observedmore » for minimum bias collisions, shows a clear centrality dependence, with the largest difference for the most central collisions. Also, the results are compared with a multiphase transport (AMPT) model and fit with a blast wave model.« less
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.
USDA-ARS?s Scientific Manuscript database
Research was conducted in northern Colorado in 2011 to estimate the Crop Water Stress Index (CWSI) and actual water transpiration (Ta) of maize under a range of irrigation regimes. The main goal was to obtain these parameters with minimum instrumentation and measurements. The results confirmed that ...
State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.
1978-12-01
The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared
Applications of harvesting system simulation to timber management and utilization analyses
John E. Baumgras; Chris B. LeDoux
1990-01-01
Applications of timber harvesting system simulation to the economic analysis of forest management and wood utilization practices are presented. These applications include estimating thinning revenue by stand age, estimating impacts of minimum merchantable tree diameter on harvesting revenue, and evaluating wood utilization alternatives relative to pulpwood quotas and...
Sampling effort needed to estimate condition and species richness in the Ohio River, USA
The level of sampling effort required to characterize fish assemblage condition in a river for the purposes of bioassessment may be estimated via different approaches. However, the goal with any approach is to determine the minimum level of effort necessary to reach some specific...
NASA Astrophysics Data System (ADS)
Cartwright, I.; Gilfedder, B.; Hofmann, H.
2014-01-01
This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. During the early stages of high-discharge events, the chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those based on chemical mass balance using Cl calculated from continuous electrical conductivity measurements. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of the annual discharge with a net baseflow contribution of 16% of total discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of discharge annually with a net baseflow contribution between 2001 and 2011 of 35% of total discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge and 26% of total discharge). These differences most probably reflect how the different techniques characterise baseflow. The local minimum and recursive digital filters probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow, floodplain storage, or interflow) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occurs on the rising limbs of hydrographs followed by inflows of low-salinity water from the transient stores as discharge falls. The joint use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
... those descibed below. Estimated Oral Fluid and Electrolyte Requirements by Body Weight Body Weight (in pounds) Minimum Daily Fluid Requirements (in ounces)* Electrolyte Solution Requirements for Mild Diarrhea ( ...
Estimation of Nasal Tip Support Using Computer-Aided Design and 3-Dimensional Printed Models
Gray, Eric; Maducdoc, Marlon; Manuel, Cyrus; Wong, Brian J. F.
2016-01-01
IMPORTANCE Palpation of the nasal tip is an essential component of the preoperative rhinoplasty examination. Measuring tip support is challenging, and the forces that correspond to ideal tip support are unknown. OBJECTIVE To identify the integrated reaction force and the minimum and ideal mechanical properties associated with nasal tip support. DESIGN, SETTING, AND PARTICIPANTS Three-dimensional (3-D) printed anatomic silicone nasal models were created using a computed tomographic scan and computer-aided design software. From this model, 3-D printing and casting methods were used to create 5 anatomically correct nasal models of varying constitutive Young moduli (0.042, 0.086, 0.098, 0.252, and 0.302 MPa) from silicone. Thirty rhinoplasty surgeons who attended a regional rhinoplasty course evaluated the reaction force (nasal tip recoil) of each model by palpation and selected the model that satisfied their requirements for minimum and ideal tip support. Data were collected from May 3 to 4, 2014. RESULTS Of the 30 respondents, 4 surgeons had been in practice for 1 to 5 years; 9 surgeons, 6 to 15 years; 7 surgeons, 16 to 25 years; and 10 surgeons, 26 or more years. Seventeen surgeons considered themselves in the advanced to expert skill competency levels. Logistic regression estimated the minimum threshold for the Young moduli for adequate and ideal tip support to be 0.096 and 0.154 MPa, respectively. Logistic regression estimated the thresholds for the reaction force associated with the absolute minimum and ideal requirements for good tip recoil to be 0.26 to 4.74 N and 0.37 to 7.19 N during 1- to 8-mm displacement, respectively. CONCLUSIONS AND RELEVANCE This study presents a method to estimate clinically relevant nasal tip reaction forces, which serve as a proxy for nasal tip support. This information will become increasingly important in computational modeling of nasal tip mechanics and ultimately will enhance surgical planning for rhinoplasty. LEVEL OF EVIDENCE NA. PMID:27124818
Turner, Alan H; Pritchard, Adam C; Matzke, Nicholas J
2017-01-01
Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a 'smoothed' timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches.
Turner, Alan H.; Pritchard, Adam C.; Matzke, Nicholas J.
2017-01-01
Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a ‘smoothed’ timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches. PMID:28187191
Meier, Petra S; Holmes, John; Angus, Colin; Ally, Abdallah K; Meng, Yang; Brennan, Alan
2016-02-01
While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, -3.2%; value-based tax, -2.9%; strength-based tax, -6.1%; minimum unit pricing, -7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, -1.3%; value-based tax, -1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, -3.6% [95% uncertainty interval (UI) -6.1%, -0.6%]; value-based tax, -3.3% [UI -5.1%, -1.7%]; strength-based tax, -7.5% [UI -13.7%, -3.9%]; minimum unit pricing, -10.3% [UI -10.3%, -7.0%]) and professional/managerial occupation groups (current tax increase, -1.8% [UI -4.7%, +1.6%]; value-based tax, -1.9% [UI -3.6%, +0.4%]; strength-based tax, -0.8% [UI -6.9%, +4.0%]; minimum unit pricing, -0.7% [UI -5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation.
Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.
2000-01-01
Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ? 0.007 ( ?SE). Survival rates of banded and radio-marked individuals were not different (alpha hat [S_radioed, ~ S_banded]=log [S hat _radioed/ S hat _banded]=0.0239 ? 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi hat=0.911 ? 0.020; alpha hat [psi11, psi22]=0.0161 ? 0.047), and recapture rates ( = 0.097 ? 0.016) banded and radio-marked individuals were not different (alpha hat [p_radioed, p_banded]=0.145 ? 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but ability to detect stratum or mark-specific differences in parameters was week. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] <2 %) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide adequate data for useful inference from this model, study designs should seek a minimum of 25 animals of each marking type observed (marked or observed via telemetry) in each time period and geographic stratum.
Annual risks of tuberculous infection in East Nusa Tenggara and Central Java Provinces, Indonesia.
Bachtiar, A; Miko, T Y; Machmud, R; Besral, B; Yudarini, P; Mehta, F; Chadha, V K; Basri, C; Loprang, F; Jitendra, R
2009-01-01
East Nusa Tenggara (NTT) and Central Java Provinces, Indonesia. To estimate the average annual risk of tuberculous infection (ARTI) among school children aged 6-9 years in each province. Children attending Classes 1-4 in 65 schools in NTT and 79 in Central Java, selected by two-stage sampling, were intradermally administered 2 tuberculin units of purified protein derivative RT23 with Tween 80 on the mid-volar aspect of the left forearm. The maximum transverse diameter of induration was measured 72 h later. The analysis was carried out among 5479 satisfactorily test-read children in NTT and 6943 in Central Java. One hundred and fifty-five new sputum smear-positive pulmonary tuberculosis (PTB) cases (78 in NTT and 77 in Central Java) were also tuberculin tested. Based on the frequency distribution of reaction sizes among the children and PTB cases, the prevalence of infection was estimated by the mirror-image method using the modes of tuberculous reactions at 15 and 17 mm. Using the 15 mm mode, ARTI was estimated at 1% in NTT and 0.9% in Central Java. Using the 17 mm mode, ARTI was estimated at 0.5% in NTT and 0.4% in Central Java. Transmission of tuberculous infection may be further reduced by intensification of tuberculosis control efforts.
Pre-main Sequence Evolution and the Hydrogen-Burning Minimum Mass
NASA Astrophysics Data System (ADS)
Nakano, Takenori
There is a lower limit to the mass of the main-sequence stars (the hydrogen-burning minimum mass) below which the stars cannot replenish the energy lost from their surfaces with the energy released by the hydrogen burning in their cores. This is caused by the electron degeneracy in the stars which suppresses the increase of the central temperature with contraction. To find out the lower limit we need the accurate knowledge of the pre-main sequence evolution of very low-mass stars in which the effect of electron degeneracy is important. We review how Hayashi and Nakano (1963) carried out the first determination of this limit.
[Economic aspects of oncological esophageal surgery : Centralization is essential].
von Dercks, N; Gockel, I; Mehdorn, M; Lorenz, D
2017-01-01
The incidence of esophageal carcinoma has increased in recent years in Germany. The aim of this article is a discussion of the economic aspects of oncological esophageal surgery within the German diagnosis-related groups (DRG) system focusing on the association between minimum caseload requirements and outcome quality as well as costs. The margins for the DRG classification G03A are low and quickly exhausted if complications determine the postoperative course. A current study using nationwide German hospital discharge data proved a significant difference in hospital mortality between clinics with and without achieving the minimum caseload requirements for esophagectomy. Data from the USA clearly showed that besides patient-relevant parameters, the caseload of a surgeon is relevant for the cost of treatment. Such cost-related analyses do not exist in Germany at present. Scientific validation of reliable minimum caseload numbers for oncological esophagectomy is desirable in the future.
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
2013-01-01
Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets. PMID:23360254
Are There Long-Run Effects of the Minimum Wage?
Sorkin, Isaac
2014-01-01
An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790
Are There Long-Run Effects of the Minimum Wage?
Sorkin, Isaac
2015-04-01
An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.
Characterization and impact of "dead-zone" eddies in the tropical Northeast Atlantic Ocean
NASA Astrophysics Data System (ADS)
Schuette, Florian; Karstensen, Johannes; Krahmann, Gerd; Hauss, Helena; Fiedler, Björn; Brandt, Peter; Visbeck, Martin; Körtzinger, Arne
2016-04-01
Localized open-ocean low-oxygen dead-zones in the tropical Northeast Atlantic are recently discovered ocean features that can develop in dynamically isolated water masses within cyclonic eddies (CE) and anticyclonic modewater eddies (ACME). Analysis of a comprehensive oxygen dataset obtained from gliders, moorings, research vessels and Argo floats shows that eddies with low oxygen concentrations at 50-150 m depths can be found in surprisingly high numbers and in a large area (from about 5°N to 20°N, from the shelf at the eastern boundary to 30°W). Minimum oxygen concentrations of about 9 μmol/kg in CEs and close to anoxic concentrations (< 1 μmol/kg) in ACMEs were observed. In total, 495 profiles with oxygen concentrations below the minimum background concentration of 40 μmol/kg could be associated with 27 independent "dead-zone" eddies (10 CEs; 17 ACMEs). The low oxygen concentration right beneath the mixed layer has been attributed to the combination of high productivity in the surface waters of the eddies and the isolation of the eddies' cores. Indeed eddies of both types feature a cold sea surface temperature anomaly and enhanced chlorophyll concentrations in their center. The oxygen minimum is located in the eddy core beneath the mixed layer at around 80 m depth. The mean oxygen anomaly between 50 to 150 m depth for CEs (ACMEs) is -49 (-81) μmol/kg. Eddies south of 12°N carry weak hydrographic anomalies in their cores and seem to be generated in the open ocean away from the boundary. North of 12°N, eddies of both types carry anomalously low salinity water of South Atlantic Central Water origin from the eastern boundary upwelling region into the open ocean. This points to an eddy generation near the eastern boundary. A conservative estimate yields that around 5 dead-zone eddies (4 CEs; 1 ACME) per year entering the area north of 12°N between the Cap Verde Islands and 19°W. The associated contribution to the oxygen budget of the shallow oxygen minimum zone in that area is about -10.3 (-3.0) μmol/kg/yr for CEs (ACMEs). The consumption within these eddies represents an essential part of the total consumption in the open tropical Northeast Atlantic Ocean and might be partly responsible for the formation of the shallow oxygen minimum zone.
SU-F-18C-11: Diameter Dependency of the Radial Dose Distribution in a Long Polyethylene Cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakalyar, D; McKenney, S; Feng, W
Purpose: The radial dose distribution in the central plane of a long cylinder following a long CT scan depends upon the diameter and composition of the cylinder. An understanding of this behavior is required for determining the spatial average of the dose in the central plane. Polyethylene, the material for construction of the TG200/ICRU phantom (30 cm in diameter) was used for this study. Size effects are germane to the principles incorporated in size specific dose estimates (SSDE); thus diameter dependency was explored as well. Method: ssuming a uniform cylinder and cylindrically symmetric conditions of irradiation, the dose distribution canmore » be described using a radial function. This function must be an even function of the radial distance due to the conditions of symmetry. Two effects are accounted for: The direct beam makes its weakest contribution at the center while the contribution due to scatter is strongest at the center and drops off abruptly at the outer radius. An analytic function incorporating these features was fit to Monte Carlo results determined for infinite polyethylene cylinders of various diameters. A further feature of this function is that it is integrable. Results: Symmetry and continuity dictate a local extremum at the center which is a minimum for the larger sizes. The competing effects described above can Resultin an absolute maximum occurring between the center and outer edge of the cylinders. For the smallest cylinders, the maximum dose may occur at the center. Conclusion: An integrable, analytic function can be used to characterize the radial dependency of dose for cylindrical CT phantoms of various sizes. One use for this is to help determine average dose distribution over the central cylinder plane when equilibrium dose has been reached.« less
Regional frequency analysis of observed sub-daily rainfall maxima over eastern China
NASA Astrophysics Data System (ADS)
Sun, Hemin; Wang, Guojie; Li, Xiucang; Chen, Jing; Su, Buda; Jiang, Tong
2017-02-01
Based on hourly rainfall observational data from 442 stations during 1960-2014, a regional frequency analysis of the annual maxima (AM) sub-daily rainfall series (1-, 2-, 3-, 6-, 12-, and 24-h rainfall, using a moving window approach) for eastern China was conducted. Eastern China was divided into 13 homogeneous regions: Northeast (NE1, NE2), Central (C), Central North (CN1, CN2), Central East (CE1, CE2, CE3), Southeast (SE1, SE2, SE3, SE4), and Southwest (SW). The generalized extreme value performed best for the AM series in regions NE, C, CN2, CE1, CE2, SE2, and SW, and the generalized logistic distribution was appropriate in the other regions. Maximum return levels were in the SE4 region, with value ranges of 80-270 mm (1-h to 24-h rainfall) and 108-390 mm (1-h to 24-h rainfall) for 20- and 100 yr, respectively. Minimum return levels were in the CN1 and NE1 regions, with values of 37-104 mm and 53-140 mm for 20 and 100 yr, respectively. Comparing return levels using the optimal and commonly used Pearson-III distribution, the mean return-level differences in eastern China for 1-24-h rainfall varied from -3-4 mm to -23-11 mm (-10%-10%) for 20-yr events, reaching -6-26 mm (-10%-30%) and -10-133 mm (-10%-90%) for 100-yr events. In view of the large differences in estimated return levels, more attention should be given to frequency analysis of sub-daily rainfall over China, for improved water management and disaster reduction.
Estimating health state utility values for comorbid health conditions using SF-6D data.
Ara, Roberta; Brazier, John
2011-01-01
When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated or overestimated the actual SF-6D scores respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Operative needs in HIV+ populations: An estimation for sub-Saharan Africa.
Cherewick, Megan L; Cherewick, Steven D; Kushner, Adam L
2017-05-01
In 2015, it was estimated that approximately 36.7 million people were living with HIV globally and approximately 25.5 million of those people were living in sub-Saharan Africa. Limitations in the availability and access to adequate operative care require policy and planning to enhance operative capacity. Data estimating the total number of persons living with HIV by country, sex, and age group were obtained from the Joint United Nations Programme on HIV/AIDS (UNAIDS) in 2015. Using minimum proposed surgical rates per 100,000 for 4, defined, sub-Saharan regions of Africa, country-specific and regional estimates were calculated. The total need and unmet need for operative procedures were estimated. A minimum of 1,539,138 operative procedures were needed in 2015 for the 25.5 million persons living with HIV in sub-Saharan Africa. In 2015, there was an unmet need of 908,513 operative cases in sub-Saharan Africa with the greatest unmet need in eastern sub-Saharan Africa (427,820) and western sub-Saharan Africa (325,026). Approximately 55.6% of the total need for operative cases is adult women, 38.4% are adult men, and 6.0% are among children under the age of 15. A minimum of 1.5 million operative procedures annually are required to meet the needs of persons living with HIV in sub-Saharan Africa. The unmet need for operative care is greatest in eastern and western sub-Saharan Africa and will require investments in personnel, infrastructure, facilities, supplies, and equipment. We highlight the need for global planning and investment in resources to meet targets of operative capacity. Copyright © 2016 Elsevier Inc. All rights reserved.
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has been and is a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy due to the limitations of these kinds of research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD and therefore, a 3D Z-R measurement using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3DDSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998.- 2000) has also been modified to read standardized disdrometer data format (Joss-Waldvogel format). Other modifications to the software involve accounting for vertical ambient wind motion, as well as evaporation of the raindrop during its flight time.
The Arctic sea ice cover of 2016: a year of record-low highs and higher-than-expected lows
NASA Astrophysics Data System (ADS)
Petty, Alek A.; Stroeve, Julienne C.; Holland, Paul R.; Boisvert, Linette N.; Bliss, Angela C.; Kimura, Noriaki; Meier, Walter N.
2018-02-01
The Arctic sea ice cover of 2016 was highly noteworthy, as it featured record low monthly sea ice extents at the start of the year but a summer (September) extent that was higher than expected by most seasonal forecasts. Here we explore the 2016 Arctic sea ice state in terms of its monthly sea ice cover, placing this in the context of the sea ice conditions observed since 2000. We demonstrate the sensitivity of monthly Arctic sea ice extent and area estimates, in terms of their magnitude and annual rankings, to the ice concentration input data (using two widely used datasets) and to the averaging methodology used to convert concentration to extent (daily or monthly extent calculations). We use estimates of sea ice area over sea ice extent to analyse the relative "compactness" of the Arctic sea ice cover, highlighting anomalously low compactness in the summer of 2016 which contributed to the higher-than-expected September ice extent. Two cyclones that entered the Arctic Ocean during August appear to have driven this low-concentration/compactness ice cover but were not sufficient to cause more widespread melt-out and a new record-low September ice extent. We use concentration budgets to explore the regions and processes (thermodynamics/dynamics) contributing to the monthly 2016 extent/area estimates highlighting, amongst other things, rapid ice intensification across the central eastern Arctic through September. Two different products show significant early melt onset across the Arctic Ocean in 2016, including record-early melt onset in the North Atlantic sector of the Arctic. Our results also show record-late 2016 freeze-up in the central Arctic, North Atlantic and the Alaskan Arctic sector in particular, associated with strong sea surface temperature anomalies that appeared shortly after the 2016 minimum (October onwards). We explore the implications of this low summer ice compactness for seasonal forecasting, suggesting that sea ice area could be a more reliable metric to forecast in this more seasonal, "New Arctic", sea ice regime.
Magnetic fields in the torus of AGN and Mid-IR polarimetry of Cygnus A
NASA Astrophysics Data System (ADS)
Lopez-Rodriguez, E.; Packham, C.; Young, S.; Elitzur, M.; Levenson, N. A.; Mason, R. E.; Ramos Almeida, C.; Alonso-Herrero, A.; Jones, T. J.; Perlman, E.
2012-12-01
An optically and geometrically thick torus obscures the central engine of Active Galactic Nuclei (AGN) from some lines of sight. From a magnetohydrodynamical framework, the torus can be considered to be a particular region of clouds surrounding the central engine where the clouds are dusty and optically thick. In this framework, the magnetic field plays an important role in the creation, morphology and evolution of the torus. If the dust grains within the clouds are assumed to be aligned by paramagnetic alignment, then the ratio of the intrinsic polarisation and visual extinction, P(%)/Av, is a function of the magnetic field strength. To estimate the visual extinction through the torus and constrain the polarisation mechanisms in the nucleus of AGN, we developed a polarisation model to fit both the total and polarised flux in a 1.2" (˜ 263 pc) aperture of the type 2 AGN, IC5063. We consider the physical conditions and environment of the gas and dust for the torus of IC5063. Then, through paramagnetic alignment, we estimate a magnetic field strength in the range of 12 - 128 mG in the NIR emitting regions of the torus of IC5063. Alternatively, we estimate the magnetic field strength in the plane of the sky using the Chandrasekhar-Fermi method. The minimum magnetic field strength in the plane of the sky is estimated to be 13 and 41 mG depending of the conditions within the torus of IC5063. These techniques afford the chance to make a survey of AGN, to investigate the effects of magnetic field strength on the torus, accretion, and interaction to the host galaxy. We present Si2 [8.7 um] and Si5 [11.6 um] imaging polarimetry of Cygnus A using CanariCam on the 10.4-m Gran Telescopio de Canarias (GTC). Preliminary polarimetric results show a highly polarized nucleus with 11±3% and 12±3% in a 0.5" (˜500pc) aperture in Si2 and Si5, respectively. The PA of polarization remains constant, 32±8 deg, in both filters. In order to disentangle the origin of the polarized component in the nucleus of Cygnus A, further modeling using several polarizing mechanisms e.g. synchrotron, dichroic absorption/emission and/or scattering will be performed.
NASA Astrophysics Data System (ADS)
Hamburger, M. W.; Johnson, K. M.; Nowicki, M. A. E.; Bacolcol, T. C.; Solidum, R., Jr.; Galgana, G.; Hsu, Y. J.; Yu, S. B.; Rau, R. J.; McCaffrey, R.
2014-12-01
We present results of two techniques to estimate the degree of coupling along the two major subduction zone boundaries that bound the Philippine Mobile Belt, the Philippine Trench and the Manila Trench. Convergence along these plate margins accommodates about 100 mm/yr of oblique plate motion between the Philippine Sea and Sundaland plates. The coupling estimates are based on a recently acquired set of geodetic data from a dense nationwide network of continuous and campaign GPS sites in the Philippines. First, we use a kinematic, elastic block model (tdefnode; McCaffrey, 2009) that combines existing fault geometries, GPS velocities and focal mechanism solutions to solve for block rotations, fault coupling, and intra-block deformation. Secondly, we use a plate-block kinematic model described in Johnson (2013) to simultaneously estimate long-term fault slip rates, block motions and interseismic coupling on block-bounding faults. The best-fit model represents the Philippine Mobile Belt by 14 independently moving rigid tectonic blocks, separated by active faults and subduction zones. The model predicts rapid convergence along the Manila Trench, decreasing progressively southwards, from > 100 mm/yr in the north to less than 20 mm/yr in the south at the Mindoro Island collision zone. Persistent areas of high coupling, interpreted to be asperities, are observed along the Manila Trench slab interface, in central Luzon (16-18°N) and near its southern and northern terminations. Along the Philippine Trench, we observe ~50 mm/yr of oblique convergence, with high coupling observed at its central and southern segments. We identify the range of allowable coupling distributions and corresponding moment accumulation rates on the two subduction zones by conducting a suite of inversions in which the total moment accumulation rate on a selected fault is fixed. In these constrained moment inversions we test the range of possible solutions that meet criteria for minimum, best-fit, and maximum coupling that still fit the data, based on reduced chi-squared calculations. In spite of the variable coupling, the total potential moment accumulation rate along each of the two subduction zones is estimated to range from 3.98 x 1019 to 2.24 x 1020 N-m yr-1, equivalent to a magnitude Mw 8.4 to 8.9 earthquake per 100 years.
Weiss, Wolfgang; Gohlisch, Christopher; Harsch-Gladisch, Christl; Tölle, Markus; Zidek, Walter; van der Giet, Markus
2012-06-01
Hypertension is a major risk factor for a wide range of cardiovascular diseases and is typically identified by measuring blood pressure (BP) at the brachial artery. Although such a measurement may accurately determine diastolic BP, systolic BP is not reflected accurately. Current noninvasive techniques for assessing central aortic BP require additional recording of an arterial pressure wave using a high-fidelity applanation tonometer. Within one measurement cycle, the Mobil-O-Graph BP device uses brachial oscillometric BP waves for a noninvasive estimation of central BP. We therefore validated the Mobil-O-Graph against the SphygmoCor device, which is widely known as the commonly used approach for a noninvasive estimation of central BP. For each individual, we compared three readings of the central BP values obtained by the Mobil-O-Graph and SphygmoCor device consecutively. One hundred individuals (mean age 56.1 ± 15.4 years) were recruited for measurement.Differences between the central BP values of the test device and the SphygmoCor device were calculated for each measurement. The mean difference (95% confidence interval) for the estimated central systolic BP between both devices was -0.6 ± 3.7 mmHg. Comparison of the central BP values measured by the two devices showed a statistically significant linear correlation (R=0.91, P<0.0001). The mean between-method difference was 0.50 mmHg for central systolic BP estimation. The intrarater reproducibility between both the devices was also comparable. Bland and Altman analyses showed that the mean differences (95% confidence interval) between repeated measurements were 1.89 (0.42-3.36) mmHg and 1.36 (-0.16 to 2.83) mmHg for the SphygmoCor and the Mobil-O-Graph device, respectively. Thus, neither of these differences was statistically significantly different from 0. The limits of agreement were -16.34 to 19.73 and -15.23 to 17.17 mmHg for the SphygmoCor and the Mobil-O-Graph device, respectively. Oscillometric noninvasive estimation of central BP with the Mobil-O-Graph BP device is as effective as using the well-established SphygmoCor applanation tonometry device. In comparison, the Mobil-O-Graph combines the widespread benefits of brachial BP measurement and also provides central BP within one measurement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letschert, Virginie E.; Bojda, Nicholas; Ke, Jing
2012-07-01
This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programsmore » while still saving consumers money?« less
UNCOVERING THE INTRINSIC VARIABILITY OF GAMMA-RAY BURSTS
NASA Astrophysics Data System (ADS)
Golkhou, V. Zach; Butler, Nathaniel R
2014-08-01
We develop a robust technique to determine the minimum variability timescale for gamma-ray burst (GRB) light curves, utilizing Haar wavelets. Our approach averages over the data for a given GRB, providing an aggregate measure of signal variation while also retaining sensitivity to narrow pulses within complicated time series. In contrast to previous studies using wavelets, which simply define the minimum timescale in reference to the measurement noise floor, our approach identifies the signature of temporally smooth features in the wavelet scaleogram and then additionally identifies a break in the scaleogram on longer timescales as a signature of a true, temporally unsmooth light curve feature or features. We apply our technique to the large sample of Swift GRB gamma-ray light curves and for the first time—due to the presence of a large number of GRBs with measured redshift—determine the distribution of minimum variability timescales in the source frame. We find a median minimum timescale for long-duration GRBs in the source frame of Δtmin = 0.5 s, with the shortest timescale found being on the order of 10 ms. This short timescale suggests a compact central engine (3000 km). We discuss further implications for the GRB fireball model and present a tantalizing correlation between the minimum timescale and redshift, which may in part be due to cosmological time dilation.
({The) Solar System Large Planets influence on a new Maunder Miniμm}
NASA Astrophysics Data System (ADS)
Yndestad, Harald; Solheim, Jan-Erik
2016-04-01
In 1890´s G. Spörer and E. W. Maunder (1890) reported that the solar activity stopped in a period of 70 years from 1645 to 1715. Later a reconstruction of the solar activity confirms the grand minima Maunder (1640-1720), Spörer (1390-1550), Wolf (1270-1340), and the minima Oort (1010-1070) and Dalton (1785-1810) since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. An identification of a three grand Maunder type periods and two Dalton type periods in a period thousand years, indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton- type period. The cause of these minimum periods, are not well understood. An expected new Maunder-type period is based on the properties of solar variability. If the solar variability has a deterministic element, we can estimate better a new Maunder grand minimum. A random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source, as a first cause. If this deterministic source is known, we can compute better estimates the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., sunspot data series from 1611 and a Solar Barycenter orbit data series from 1000. The analysis method is based on a wavelet spectrum analysis, to identify stationary periods, coincidence periods and their phase relations. The result shows that the TSI variability and the sunspots variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune, as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder type sunspot minimum period from about 2018 to 2055. The deterministic model of a TSI ACRIM data series from 1700 computes a new Maunder type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton to Maunder type minimum irradiation period from 2047 to 2068.
NASA Astrophysics Data System (ADS)
Ma, Y.; Xu, W.; Zhao, X.; Qin, L.
2016-12-01
Accurate location and allocation of earthquake emergency shelters is a key component of effective urban planning and emergency management. A number of models have been developed to solve the complex location-allocation problem with diverse and strict constraints, but there still remain a big gap between the model and the actual situation because the uncertainty of earthquake, damage rate of buildings and evacuee behaviors have been neglected or excessively simplified in the existing models. An innovative model was first developed to estimate the hourly dynamic changes of the number of evacuees under two damage scenarios of earthquake by considering these factors at the community level based on a location-based service data, and then followed by a multi-objective model for the allocation of residents to earthquake shelters using the central area of Beijing, China as a case study. The two objectives of this shelter allocation model were to minimize the total evacuation distance from communities to a specified shelter and to minimize the total area of all the shelters with the constraints of shelter capacity and service radius. The modified particle swarm optimization algorithm was used to solve this model. The results show that increasing the shelter area will result in a large decrease of the total evacuation distance in all of the schemes of the four scenarios (i.e., Scenario A and B in daytime and nighttime respectively). According to the schemes of minimum distance, parts of communities in downtown area needed to be reallocated due to the insufficient capacity of the nearest shelters, and the numbers of these communities sequentially decreased in scenarios Ad, An, Bd and Bn due to the decreasing population. According to the schemes of minimum area in each scenario, 27 or 28 shelters, covering a total area of approximately 37 km2, were selected; and the communities almost evacuated using the same routes in different scenarios. The results can be used as a scientific reference for the planning of shelters in Beijing.
Estimated minimum savings to the medicaid budget in Florida by implementing a primary seat belt law
DOT National Transportation Integrated Search
2007-03-01
A 2003 study estimated that if all States had primary laws from 1995 to 2002, over 12,000 lives would have been saved. Failure to implement a primary belt law creates a real cost to a States budget for Medicaid and other State medical expenditures...
Estimated minimum savings to the Medicaid budget in Arkansas by implementing a primary seat belt law
DOT National Transportation Integrated Search
2007-03-01
A 2003 study estimated that if all States had primary laws from 1995 to 2002, over 12,000 lives would have been saved. Failure to implement a primary belt law creates a real cost to a States budget for Medicaid and other State medical expenditures...
7 CFR 1781.17 - Docket preparation and processing.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...
7 CFR 1781.17 - Docket preparation and processing.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...
7 CFR 1781.17 - Docket preparation and processing.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...
7 CFR 1781.17 - Docket preparation and processing.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...
7 CFR 1781.17 - Docket preparation and processing.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...
Testing for Seed Quality in Southern Oaks
F.T. Bonner
1984-01-01
Expressions of germination rate, such as peak value (PV) or mean germination time (MGT), provide good estimates of acorn quality, but test completion requires a minimum of 3 weeks. For more rapid estimates, tetrazolium staining is recommended. Some seed test results were significantly correlated with nursery germination of cherrybark and water oaks, but not with...
Estimated minimum savings to the Medicaid budget in Missouri by implementing a primary seat belt law
DOT National Transportation Integrated Search
2007-03-01
A 2003 study estimated that if all States had primary laws from 1995 to 2002, over 12,000 lives would have been saved. Failure to implement a primary belt law creates a real cost to a States budget for Medicaid and other State medical expenditures...
Mars surface radiation exposure for solar maximum conditions and 1989 solar proton events
NASA Technical Reports Server (NTRS)
Simonsen, Lisa C.; Nealy, John E.
1992-01-01
The Langley heavy-ion/nucleon transport code, HZETRN, and the high-energy nucleon transport code, BRYNTRN, are used to predict the propagation of galactic cosmic rays (GCR's) and solar flare protons through the carbon dioxide atmosphere of Mars. Particle fluences and the resulting doses are estimated on the surface of Mars for GCR's during solar maximum conditions and the Aug., Sep., and Oct. 1989 solar proton events. These results extend previously calculated surface estimates for GCR's at solar minimum conditions and the Feb. 1956, Nov. 1960, and Aug. 1972 solar proton events. Surface doses are estimated with both a low-density and a high-density carbon dioxide model of the atmosphere for altitudes of 0, 4, 8, and 12 km above the surface. A solar modulation function is incorporated to estimate the GCR dose variation between solar minimum and maximum conditions over the 11-year solar cycle. By using current Mars mission scenarios, doses to the skin, eye, and blood-forming organs are predicted for short- and long-duration stay times on the Martian surface throughout the solar cycle.
NASA Astrophysics Data System (ADS)
Park, Sang-Gon; Jeong, Dong-Seok
2000-12-01
In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity by the UESA (Unimodal Error Surface Assumption) where the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real world video sequences are centered at the position of zero motion. But these BMAs, especially in large motion, are easily trapped into the local minima and result in poor matching accuracy. So, we propose a new motion estimation algorithm using the spatial correlation among the neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). The computer simulation shows that the proposed algorithm has almost the same computational complexity with DS (Diamond Search), but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as that of FS (Full Search), even for the large motion with half the computational load.
NASA Astrophysics Data System (ADS)
Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun
2017-06-01
Representative elementary volume (REV) is important to determine properties of porous media and those involved in migration of contaminants especially dense nonaqueous phase liquids (DNAPLs) in subsurface environment. In this study, an experiment of long-term migration of the commonly used DNAPL, perchloroethylene (PCE), is performed in a two dimensional (2D) sandbox where several system variables including porosity, PCE saturation (Soil) and PCE-water interfacial area (AOW) are accurately quantified by light transmission techniques over the entire PCE migration process. Moreover, the REVs for these system variables are estimated by a criterion of relative gradient error (εgi) and results indicate that the frequency of minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm and 8.0 mm. As experiment proceeds in PCE infiltration process, the frequency and cumulative frequency of both minimum Soil-REV and minimum AOW-REV sizes change their shapes from the irregular and random to the regular and smooth. When experiment comes into redistribution process, the cumulative frequency of minimum Soil-REV size reveals a linear positive correlation, while frequency of minimum AOW-REV size tends to a Gaussian distribution in the range of 2.0 mm-7.0 mm and appears a peak value in 13.0 mm-14.0 mm. Undoubtedly, this study will facilitate the quantification of REVs for materials and fluid properties in a rapid, handy and economical manner, which helps enhance our understanding of porous media and DNAPL properties at micro scale, as well as the accuracy of DNAPL contamination modeling at field-scale.
Unifying Research and Teaching: Pedagogy for the Transition from Forensics Competition to Education.
ERIC Educational Resources Information Center
Jensen, Scott
At a minimum, tomorrow's forensic educators need formal training that orients the professional to the responsibilities central to forensic education. While a number of opportunities, as well as applications of those opportunities, are available to forensics students, the rooting of forensics in the speech communication discipline is paramount.…
Public/Private in Higher Education: A Synthesis of Economic and Political Approaches
ERIC Educational Resources Information Center
Marginson, Simon
2018-01-01
The public/private distinction is central to higher education but there is no consensus on "public." In neo-classical economic theory, Samuelson distinguishes non-market goods (public) that cannot be produced for profit, from market-based activity (private). This provides a basis for identifying the minimum necessary public expenditure,…
Coal Gasification Processes for Retrofitting Military Central Heating Plants: Overview
1992-11-01
the water runoff has minimum contamination. The coal pile is located on a waterproof base to prevent water seepage into the ground. All runoff water...United Arab Naphtha Republic Chemical Fertili - Lignite Dust 1 217,000 Ammonia 1963 zer Company Ltd. Synthesis of Thailand, Ferti- lizer Plant in Mae Moh
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and probe parameters are in a consider mode where their estimates are unimproved but their associated uncertainties are permitted an impact on filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to nd vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(jV jjEj), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly- selected vertices. This reduces the complexity of the centrality estimation to O(jSjjEj); jSj < jV j, which can be scaled appropriatelymore » based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identi ed a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.« less
NASA Technical Reports Server (NTRS)
Britt, V. O.
1993-01-01
An approximate analysis for buckling of biaxial- and shear-loaded anisotropic panels with centrally located elliptical cutouts is presented in the present paper. The analysis is composed of two parts, a prebuckling analysis and a buckling analysis. The prebuckling solution is determined using Lekhnitskii's complex variable equations of plane elastostatics combined with a Laurent series approximation and a boundary collocation method. The buckling solution is obtained using the principle of minimum potential energy. A by-product of the minimum potential energy equation is an integral equation which is solved using Gaussian quadrature. Comparisons with documented experimental results and finite element analyses indicate that the approximate analysis accurately predicts the buckling loads of square biaxial- and shear-loaded panels having elliptical cutouts with major axes up to sixty percent of the panel width. Results of a parametric study are presented for shear- and compression-loaded rectangular anisotropic panels with elliptical cutouts. The effects of panel aspect ratio, cutout shape, cutout size, cutout orientation, laminate anisotropy, and combined loading on the buckling load are examined.
NASA Astrophysics Data System (ADS)
Igono, M. O.; Bjotvedt, G.; Sanford-Crane, H. T.
1992-06-01
The environmental profile of central Arizona is quantitatively described using meteorological data between 1971 and 1986. Utilizing ambient temperature criteria of hours per day less than 21° C, between 21 and 27° C, and more than 27° C, the environmental profile of central Arizona consists of varying levels of thermoneutral and heat stress periods. Milk production data from two commercial dairy farms from March 1990 to February 1991 were used to evaluate the seasonal effects identified in the environmental profile. Overall, milk production is lower during heat stress compared to thermoneutral periods. During heat stress, the cool period of hours per day with temperature less than 21° C provides a margin of safety to reduce the effects of heat stress on decreased milk production. Using minimum, mean and maximum ambient temperatures, the upper critical temperatures for milk production are 21, 27 and 32° C, respectively. Using the temperature-humidity index as the thermal environment indicator, the critical values for minimum, mean and maximum THI are 64, 72 and 76, respectively.
Transit timing variations for planets co-orbiting in the horseshoe regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vokrouhlický, David; Nesvorný, David, E-mail: vokrouhl@cesnet.cz, E-mail: davidn@boulder.swri.edu
2014-08-10
Although not yet detected, pairs of exoplanets in 1:1 mean motion resonance probably exist. Low eccentricity, near-planar orbits, which in the comoving frame follow horseshoe trajectories, are one of the possible stable configurations. Here we study transit timing variations (TTVs) produced by mutual gravitational interaction of planets in this orbital architecture, with the goal to develop methods that can be used to recognize this case in observational data. In particular, we use a semi-analytic model to derive parametric constraints that should facilitate data analysis. We show that characteristic traits of the TTVs can directly constrain the (1) ratio of planetarymore » masses and (2) their total mass (divided by that of the central star) as a function of the minimum angular separation as seen from the star. In an ideal case, when transits of both planets are observed and well characterized, the minimum angular separation can also be inferred from the data. As a result, parameters derived from the observed transit timing series alone can directly provide both planetary masses scaled to the central star mass.« less
Many-to-Many Multicast Routing Schemes under a Fixed Topology
Ding, Wei; Wang, Hongfa; Wei, Xuerui
2013-01-01
Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, we discover that it is a good idea to find a many-to-many multicast tree with QoS optimization under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives of multicast tree, that is, the minimum cost, minimum diameter, and maximum reliability. All of three optimization problems are distributed into two types, the centralized and decentralized version. This paper uses the dynamic programming method to devise an exact algorithm, respectively, for the centralized and decentralized versions of each optimization problem. PMID:23589706
NASA Astrophysics Data System (ADS)
Loperfido, J. V.; Noe, Gregory B.; Jarnagin, S. Taylor; Hogan, Dianna M.
2014-11-01
Urban stormwater runoff remains an important issue that causes local and regional-scale water quantity and quality issues. Stormwater best management practices (BMPs) have been widely used to mitigate runoff issues, traditionally in a centralized manner; however, problems associated with urban hydrology have remained. An emerging trend is implementation of BMPs in a distributed manner (multi-BMP treatment trains located on the landscape and integrated with urban design), but little catchment-scale performance of these systems have been reported to date. Here, stream hydrologic data (March, 2011-September, 2012) are evaluated in four catchments located in the Chesapeake Bay watershed: one utilizing distributed stormwater BMPs, two utilizing centralized stormwater BMPs, and a forested catchment serving as a reference. Among urban catchments with similar land cover, geology and BMP design standards (i.e. 100-year event), but contrasting placement of stormwater BMPs, distributed BMPs resulted in: significantly greater estimated baseflow, a higher minimum precipitation threshold for stream response and maximum discharge increases, better maximum discharge control for small precipitation events, and reduced runoff volume during an extreme (1000-year) precipitation event compared to centralized BMPs. For all catchments, greater forest land cover and less impervious cover appeared to be more important drivers than stormwater BMP spatial pattern, and caused lower total, stormflow, and baseflow runoff volume; lower maximum discharge during typical precipitation events; and lower runoff volume during an extreme precipitation event. Analysis of hydrologic field data in this study suggests that both the spatial distribution of stormwater BMPs and land cover are important for management of urban stormwater runoff. In particular, catchment-wide application of distributed BMPs improved stream hydrology compared to centralized BMPs, but not enough to fully replicate forested catchment stream hydrology. Integrated planning of stormwater management, protected riparian buffers and forest land cover with suburban development in the distributed-BMP catchment enabled multi-purpose use of land that provided esthetic value and green-space, community gathering points, and wildlife habitat in addition to hydrologic stormwater treatment.
Loperfido, John V.; Noe, Gregory B.; Jarnagin, S. Taylor; Hogan, Dianna M.
2014-01-01
Urban stormwater runoff remains an important issue that causes local and regional-scale water quantity and quality issues. Stormwater best management practices (BMPs) have been widely used to mitigate runoff issues, traditionally in a centralized manner; however, problems associated with urban hydrology have remained. An emerging trend is implementation of BMPs in a distributed manner (multi-BMP treatment trains located on the landscape and integrated with urban design), but little catchment-scale performance of these systems have been reported to date. Here, stream hydrologic data (March, 2011–September, 2012) are evaluated in four catchments located in the Chesapeake Bay watershed: one utilizing distributed stormwater BMPs, two utilizing centralized stormwater BMPs, and a forested catchment serving as a reference. Among urban catchments with similar land cover, geology and BMP design standards (i.e. 100-year event), but contrasting placement of stormwater BMPs, distributed BMPs resulted in: significantly greater estimated baseflow, a higher minimum precipitation threshold for stream response and maximum discharge increases, better maximum discharge control for small precipitation events, and reduced runoff volume during an extreme (1000-year) precipitation event compared to centralized BMPs. For all catchments, greater forest land cover and less impervious cover appeared to be more important drivers than stormwater BMP spatial pattern, and caused lower total, stormflow, and baseflow runoff volume; lower maximum discharge during typical precipitation events; and lower runoff volume during an extreme precipitation event. Analysis of hydrologic field data in this study suggests that both the spatial distribution of stormwater BMPs and land cover are important for management of urban stormwater runoff. In particular, catchment-wide application of distributed BMPs improved stream hydrology compared to centralized BMPs, but not enough to fully replicate forested catchment stream hydrology. Integrated planning of stormwater management, protected riparian buffers and forest land cover with suburban development in the distributed-BMP catchment enabled multi-purpose use of land that provided esthetic value and green-space, community gathering points, and wildlife habitat in addition to hydrologic stormwater treatment.
Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter
2014-12-01
Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200-syllables long are the minimum that is appropriate for obtaining stable Riley's severity scores. The procedural variants provide similar severity scores.
Defining Incident Cases of Epilepsy in Administrative Data
Bakaki, Paul M.; Koroukian, Siran M.; Jackson, Leila W.; Albert, Jeffrey M.; Kaiboriboon, Kitti
2013-01-01
Purpose To determine the minimum enrollment duration for identifying incident cases of epilepsy in administrative data. Methods We performed a retrospective dynamic cohort study using Ohio Medicaid data from 1992–2006 to identify a total of 5,037 incident epilepsy cases who had at least 1 year of follow-up prior to epilepsy diagnosis (epilepsy-free interval). The incidence for epilepsy-free intervals from 1 to 8 years, overall and stratified by pre-existing disability status, was examined. The graphical approach between the slopes of incidence estimates and the epilepsy-free intervals was used to identify the minimum epilepsy-free interval that minimized misclassification of prevalent as incident epilepsy cases. Results As the length of epilepsy-free interval increased, the incidence rates decreased. A graphical plot showed that the decline in incidence of epilepsy became nearly flat beyond the third epilepsy-free interval. Conclusion The minimum of 3-year epilepsy-free interval is needed to differentiate incident from prevalent cases in administrative data. Shorter or longer epilepsy-free intervals could result in over- or under-estimation of epilepsy incidence. PMID:23791310
NASA Astrophysics Data System (ADS)
Ilieva, T.; Iliev, I.; Pashov, A.
2016-12-01
In the traditional description of electronic states of diatomic molecules by means of molecular constants or Dunham coefficients, one of the important fitting parameters is the value of the zero point energy - the minimum of the potential curve or the energy of the lowest vibrational-rotational level - E00 . Their values are almost always the result of an extrapolation and it may be difficult to estimate their uncertainties, because they are connected not only with the uncertainty of the experimental data, but also with the distribution of experimentally observed energy levels and the particular realization of set of Dunham coefficients. This paper presents a comprehensive analysis based on Monte Carlo simulations, which aims to demonstrate the influence of all these factors on the uncertainty of the extrapolated minimum of the potential energy curve U (Re) and the value of E00 . The very good extrapolation properties of the Dunham coefficients are quantitatively confirmed and it is shown that for a proper estimate of the uncertainties, the ambiguity in the composition of the Dunham coefficients should be taken into account.
Galactic Cosmic Ray Intensity in the Upcoming Minimum of the Solar Activity Cycle
NASA Astrophysics Data System (ADS)
Krainev, M. B.; Bazilevskaya, G. A.; Kalinin, M. S.; Svirzhevskaya, A. K.; Svirzhevskii, N. S.
2018-03-01
During the prolonged and deep minimum of solar activity between cycles 23 and 24, an unusual behavior of the heliospheric characteristics and increased intensity of galactic cosmic rays (GCRs) near the Earth's orbit were observed. The maximum of the current solar cycle 24 is lower than the previous one, and the decline in solar and, therefore, heliospheric activity is expected to continue in the next cycle. In these conditions, it is important for an understanding of the process of GCR modulation in the heliosphere, as well as for applied purposes (evaluation of the radiation safety of planned space flights, etc.), to estimate quantitatively the possible GCR characteristics near the Earth in the upcoming solar minimum ( 2019-2020). Our estimation is based on the prediction of the heliospheric characteristics that are important for cosmic ray modulation, as well as on numeric calculations of GCR intensity. Additionally, we consider the distribution of the intensity and other GCR characteristics in the heliosphere and discuss the intercycle variations in the GCR characteristics that are integral for the whole heliosphere (total energy, mean energy, and charge).
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index ( ) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout image. Furthermore, the task-driven FFM strategy redistribute fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved minimum by at least 17.8%, but yielded higher over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed . The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
Estimating past diameters of mixed conifer species in the central Sierra Nevada
K. Leroy Dolph
1981-01-01
Tree diameter outside bark at an earlier period of growth can be estimated from the linear relationship of present inside bark and outside bark diameters at breast height. This note presents equations for estimating inside bark diameters, outside bark diameters, and past outside bark diameters for each of the mixed-conifer species in the central Sierra Nevada.
Cost estimators for construction of forest roads in the central Appalachians
Deborah, A. Layton; Chris O. LeDoux; Curt C. Hassler; Curt C. Hassler
1992-01-01
Regression equations were developed for estimating the total cost of road construction in the central Appalachian region. Estimators include methods for predicting total costs for roads constructed using hourly rental methods and roads built on a total-job bid basis. Results show that total-job bid roads cost up to five times as much as roads built than when equipment...
Eigenvector of gravity gradient tensor for estimating fault dips considering fault type
NASA Astrophysics Data System (ADS)
Kusumoto, Shigekazu
2017-12-01
The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
Brouwers, E.M.; Jorgensen, N.O.; Cronin, T. M.
1991-01-01
The Kap Kobenhavn Formation crops out in Greenland at 80??N latitude and marks the most northerly onshore Pliocene locality known. The sands and silts that comprise the formation were deposited in marginal marine and shallow marine environments. An abundant and diverse vertebrate and invertebrate fauna and plant megafossil flora provide age and paleoclimatic constraints. The age estimated for the Kap Kobenhavn ranges from 2.0 to 3.0 million years old. Winter and summer bottom water paleotemperatures were estimated on the basis of the ostracode assemblages. The marine ostracode fauna in units B1 and B2 indicate a subfrigid to frigid marine climate, with estimated minimum sea bottom temperatures (SBT) of -2??C and estimated maximum SBT of 6-8??C. Sediments assigned to unit B2 at locality 72 contain a higher proportion of warm water genera, and the maximum SBT is estimated at 9-10??C. The marginal marine fauna in the uppermost unit B3 (locality 68) indicates a cold temperate to subfrigid marine climate, with an estimated minimum SBT of -2??C and an estimated maximum SBT ranging as high as 12-14??C. These temperatures indicated that, on the average, the Kap Kobenhavn winters in the late Pliocene were similar to or perhaps 1-2??C warmer than winters today and that summer temperatures were 7-8??C warmer than today. -from Authors
Multiple-rule bias in the comparison of classification rules
Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.
2011-01-01
Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
NASA Astrophysics Data System (ADS)
He, Minhui; Yang, Bao; Datsenko, Nina M.
2014-08-01
The recent unprecedented warming found in different regions has aroused much attention in the past years. How temperature has really changed on the Tibetan Plateau (TP) remains unknown since very limited high-resolution temperature series can be found over this region, where large areas of snow and ice exist. Herein, we develop two Juniperus tibetica Kom. tree-ring width chronologies from different elevations. We found that the two tree-ring series only share high-frequency variability. Correlation, response function and partial correlation analysis indicate that prior year annual (January-December) minimum temperature is most responsible for the higher belt juniper radial growth, while more or less precipitation signal is contained by the tree-ring width chronology at the lower belt and is thus excluded from further analysis. The tree growth-climate model accounted for 40 % of the total variance in actual temperature during the common period 1957-2010. The detected temperature signal is further robustly verified by other results. Consequently, a six century long annual minimum temperature history was firstly recovered for the Yushu region, central TP. Interestingly, the rapid warming trend during the past five decades is identified as a significant cold phase in the context of the past 600 years. The recovered temperature series reflects low-frequency variability consistent with other temperature reconstructions over the whole TP region. Furthermore, the present recovered temperature series is associated with the Asian monsoon strength on decadal to multidecadal scales over the past 600 years.
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Vlahogianni, Eleni I.
2018-06-01
A methodological framework based on nonlinear recurrence analysis is proposed to examine the historical data evolution of extremes of maximum and minimum daily mean areal temperature patterns over time under different climate scenarios. The methodology is based on both historical data and atmospheric General Circulation Model (GCM) produced climate scenarios for the periods 1961-2000 and 2061-2100 which correspond to 1 × CO2 and 2 × CO2 scenarios. Historical data were derived from the actual daily observations coupled with atmospheric circulation patterns (CPs). The dynamics of the temperature was reconstructed in the phase-space from the time series of temperatures. The statistically comparing different temperature patterns were based on some discriminating statistics obtained by the Recurrence Quantification Analysis (RQA). Moreover, the bootstrap method of Schinkel et al. (2009) was adopted to calculate the confidence bounds of RQA parameters based on a structural preserving resampling. The overall methodology was implemented to the mountainous Mesochora catchment in Central-Western Greece. The results reveal substantial similarities between the historical maximum and minimum daily mean areal temperature statistical patterns and their confidence bounds, as well as the maximum and minimum temperature patterns in evolution under the 2 × CO2 scenario. A significant variability and non-stationary behaviour characterizes all climate series analyzed. Fundamental differences are produced from the historical and maximum 1 × CO2 scenarios, the maximum 1 × CO2 and minimum 1 × CO2 scenarios, as well as the confidence bounds for the two CO2 scenarios. The 2 × CO2 scenario reflects the strongest shifts in intensity, duration and frequency in temperature patterns. Such transitions can help the scientists and policy makers to understand the effects of extreme temperature changes on water resources, economic development, and health of ecosystems and hence to proceed to effective proactive management of extreme phenomena. The impacts of the findings on the predictability of the extreme daily mean areal temperature patterns are also commented.
Origin of the Central Constant Emission Component of Eta Carinae
NASA Technical Reports Server (NTRS)
Hamaguchi, Kenji; Corocoran, M. F.; Gull, T.; Ishibashi, K.; Pittard, J. M.; Hillier, D. J.; Damineli, A.; Davidson, K.; Nielsen, K. E.; Owocki, S.;
2010-01-01
The X-ray campaign observation of the wind-wind colliding (WWC) binary system, Eta Carinae, targeted at its periastron passage in 2003, presented a detailed view of the flux and spectral variations of the X-ray minimum phase. One of the discoveries in this campaign was a central constant emission (CCE) component very near the central WWC source (Hamaguchi et al. 2007, ApJ, 663, 522). The CCE component was noticed between 1-3 keY during the X-ray minima and showed no variation on either short timescales within any observation or long timescales of up to 10 years. Hamaguchi et al. (2007) discussed possible origins as collisionally heated shocks from the fast polar winds from Eta Car or the fast moving outflow from the WWC with the ambient gas, or shocked gas that is intrinsic to the wind of Eta Car. During the 2009 periastron passage, we launched another focussed observing campaign of Eta Carinae with the Chandra, XMM-Newton and Suzaku observatories, concentrating on the X-ray faintest phase named the deep X-ray minimum. Thanks to multiple observations during the deep X-ray minimum, we found that the CCE spectrum extended up to 10 keV, indicating presence of hot plasma of kT approx.4-6 keV. This result excludes two possible origins that assume relatively slow winds (v approx. 1000 km/s) and only leaves the possibility that the CCE plasma is wind blown bubble at the WWC downstream. The CCE spectrum in 2009 showed a factor of 2 higher soft band flux as the CCE spectrum in 2003, while the hard band flux was almost unchanged. This variation suggests decrease in absorption column along the line of sight. We compare this result with recent increase in V-band magnitude of Eta Carinae and discuss location of the CCE plasma.
Karczmarski, Leszek; Huang, Shiang-Lin; Chan, Stephen C Y
2017-02-23
Defining demographic and ecological threshold of population persistence can assist in informing conservation management. We undertook such analyses for the Indo-Pacific humpback dolphin (Sousa chinensis) in the Pearl River Delta (PRD) region, southeast China. We use adult survival estimates for assessments of population status and annual rate of change. Our estimates indicate that, given a stationary population structure and minimal risk scenario, ~2000 individuals (minimum viable population in carrying capacity, MVP k ) can maintain the population persistence across 40 generations. However, under the current population trend (~2.5% decline/annum), the population is fast approaching its viability threshold and may soon face effects of demographic stochasticity. The population demographic trajectory and the minimum area of critical habitat (MACH) that could prevent stochastic extinction are both highly sensitive to fluctuations in adult survival. For a hypothetical stationary population, MACH should approximate 3000-km 2 . However, this estimate increases four-fold with a 5% increase of adult mortality and exceeds the size of PRD when calculated for the current population status. On the other hand, cumulatively all current MPAs within PRD fail to secure the minimum habitat requirement to accommodate sufficiently viable population size. Our findings indicate that the PRD population is deemed to become extinct unless effective conservation measures can rapidly reverse the current population trend.
Using multiplicity as a fractional cross-section estimation for centrality in PHOBOS
NASA Astrophysics Data System (ADS)
Hollis, Richard S.; Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Busza, W.; Carroll, A.; Chai, Z.; Decowski, M. P.; García, E.; Gburek, T.; George, N.; Gulbrandsen, K.; Halliwell, C.; Hamblen, J.; Hauer, M.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Holylnski, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Khan, N.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Reed, C.; Roland, C.; Roland, G.; Sagerer, J.; Seals, H.; Sedykh, I.; Smith, C. E.; Stankiewicz, M. A.; Steinberg, P.; Stephans, G. S. F.; Sukhanov, A.; Tonjes, M. B.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Vaurynovich, S. S.; Verdier, R.; Veres, G. I.; Wenger, E.; Wolfs, F. L. H.; Wosiek, B.; Wozniak, K.; Wyslouch, B.; PHOBOS Collaboration
2005-01-01
Collision centrality is a valuable parameter used in relativistic nuclear physics which relates to geometrical quantities such as the number of participating nucleons. PHOBOS utilizes a multiplicity measurement as a means to estimate fractional cross-section of a collision event-by-event. From this, the centrality of this collision can be deduced. The details of the centrality determination depend both on the collision system and collision energy. Presented here are the techniques developed over the course of the RHIC program that are used by PHOBOS to extract the centrality. Possible biases that have to be overcome before a final measurement can be interpreted are discussed.
2013-09-30
the Study of the Environmental Arctic Change (SEARCH) Sea Ice Outlook (SIO) effort. The SIO is an international effort to provide a community-wide...summary of the expected September arctic sea ice minimum. Monthly reports released throughout the summer synthesize community estimates of the current...state and expected minimum of sea ice . Along with the backbone components of this system (NAVGEM/HYCOM/CICE), other data models have been used to
1982-08-01
DATA NUMBER OF POINTS 1988 CHANNEL MINIMUM MAXIMUM 1 PHMG -130.13 130.00 2 PS3 -218.12 294.77 3 T3 -341.54 738.15 4 T5 -464.78 623.47 5 PT51 12.317...Continued) CRUISE AND TAKE-OFF MODE DATA I NUMBER OF POINTS 4137 CHANNEL MINIMUM MAXIMUM 1 PHMG -130.13 130.00 2 P53 -218.12 376.60 3 T3 -482.72
A 22,000-year record of monsoonal precipitation from northern Chile's Atacama Desert
Betancourt, J.L.; Latorre, C.; Rech, J.A.; Quade, Jay; Rylander, K.A.
2000-01-01
Fossil rodent middens and wetland deposits from the central Atacama Desert (22° to 24°S) indicate increasing summer precipitation, grass cover, and groundwater levels from 16.2 to 10.5 calendar kiloyears before present (ky B.P.). Higher elevation shrubs and summer-flowering grasses expanded downslope across what is now the edge of Absolute Desert, a broad expanse now largely devoid of rainfall and vegetation. Paradoxically, this pluvial period coincided with the summer insolation minimum and reduced adiabatic heating over the central Andes. Summer precipitation over the central Andes and central Atacama may depend on remote teleconnections between seasonal insolation forcing in both hemispheres, the Asian monsoon, and Pacific sea surface temperature gradients. A less pronounced episode of higher groundwater levels in the central Atacama from 8 to 3 ky B.P. conflicts with an extreme lowstand of Lake Titicaca, indicating either different climatic forcing or different response times and sensitivities to climatic change.
A 22,000-Year Record of Monsoonal Precipitation from Northern Chile's Atacama Desert.
Betancourt; Latorre; Rech; Quade; Rylander
2000-09-01
Fossil rodent middens and wetland deposits from the central Atacama Desert (22 degrees to 24 degrees S) indicate increasing summer precipitation, grass cover, and groundwater levels from 16.2 to 10.5 calendar kiloyears before present (ky B.P.). Higher elevation shrubs and summer-flowering grasses expanded downslope across what is now the edge of Absolute Desert, a broad expanse now largely devoid of rainfall and vegetation. Paradoxically, this pluvial period coincided with the summer insolation minimum and reduced adiabatic heating over the central Andes. Summer precipitation over the central Andes and central Atacama may depend on remote teleconnections between seasonal insolation forcing in both hemispheres, the Asian monsoon, and Pacific sea surface temperature gradients. A less pronounced episode of higher groundwater levels in the central Atacama from 8 to 3 ky B.P. conflicts with an extreme lowstand of Lake Titicaca, indicating either different climatic forcing or different response times and sensitivities to climatic change.
NASA Astrophysics Data System (ADS)
Ye, Jun; Xu, Jiangming; Song, Jiaxin; Wu, Hanshuo; Zhang, Hanwei; Wu, Jian; Zhou, Pu
2018-06-01
Through high-fidelity numerical modeling and careful system-parameter design, we demonstrate the spectral manipulation of a hundred-watt-level high-power random fiber laser (RFL) by employing a watt-level tunable optical filter. Consequently, a >100-W RFL with the spectrum-agile property is achieved. The central wavelength can be continuously tuned with a range of ∼20 nm, and the tuning range of the full width at half maximum linewidth, which is closely related to the central wavelength, covers ∼1.1 to ∼2.7 times of the minimum linewidth.
About an adaptively weighted Kaplan-Meier estimate.
Plante, Jean-François
2009-09-01
The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performances of the weighted Kaplan-Meier estimate on finite samples exceed that of the usual Kaplan-Meier estimate. A case study is also presented.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
Simple Form of MMSE Estimator for Super-Gaussian Prior Densities
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-04-01
The denoising method that become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE). In super-Gaussian prior densities, it is well known that the MMSE estimator in such a case has a complicated form. In this work, we derive the MMSE estimation with Taylor series. We show that the proposed estimator also leads to a simple formula. An extension of this estimator to Pearson type VII prior density is also offered. The experimental result shows that the proposed estimator to the original MMSE nonlinearity is reasonably good.
Optimal Interception of a Maneuvering Long-range Missile
NASA Astrophysics Data System (ADS)
X. Vinh, Nguyen; T. Kabamba, Pierre; Takehira, Tetsuya
2001-01-01
In a Newtonian central force field, the minimum-fuel interception of a satellite, or a ballistic missile, in elliptic trajectory can be obtained via Lawden's theory of primer vector. To secure interception when the target performs evasive maneuvers, a new control law, with explicit solutions, is implemented. It is shown that by a rotation of coordinate system, the problem of three-dimensional interception is reduced to a planar problem. The general case of planar interception of a long-range ballistic missile is then studied. Examples of interception at a specified time, head-on interception and minimum-fuel interception are presented. In each case, the requirement for the thrust acceleration is expressed explicitly as a function of time.
Optical effects of the cranium in trans-cranial in vivo two photon laser scanning microscopy in mice
NASA Astrophysics Data System (ADS)
Helm, P. Johannes; Ottersen, Ole P.; Nase, Gabriele
2007-02-01
The combination of multi photon laser scanning microscopy with transgenic techniques has set the stage for in vivo studies of long term dynamics of the central nervous system in mice. Brain structures located within 100μm to 200μm below the brain surface can be observed minimum-invasively during the post-adolescent life of the animal. However, even when selecting the most appropriate microscope optics available for the purpose, trans-cranial observation is compromised by the aberrations induced by the cranium and the tissue interposed between the cranium and the actual focus. It still is an un-resolved task to calculate these aberrational effects or to, at least, estimate quantitatively the distortions they induce onto the recorded images. Here, we report about measurements of the reflection, the absorption, and the effects on the objective point spread function of the mouse cranium as a function of the thickness of the cranium, the locus of trans-cranial observation and the wavelength. There is experimental evidence for pronounced Second Harmonic Generation (SHG) effects.
Transfer-matrix study of a hard-square lattice gas with two kinds of particles and density anomaly
NASA Astrophysics Data System (ADS)
Oliveira, Tiago J.; Stilck, Jürgen F.
2015-09-01
Using transfer matrix and finite-size scaling methods, we study the thermodynamic behavior of a lattice gas with two kinds of particles on the square lattice. Only excluded volume interactions are considered, so that the model is athermal. Large particles exclude the site they occupy and its four first neighbors, while small particles exclude only their site. Two thermodynamic phases are found: a disordered phase where large particles occupy both sublattices with the same probability and an ordered phase where one of the two sublattices is preferentially occupied by them. The transition between these phases is continuous at small concentrations of the small particles and discontinuous at larger concentrations, both transitions are separated by a tricritical point. Estimates of the central charge suggest that the critical line is in the Ising universality class, while the tricritical point has tricritical Ising (Blume-Emery-Griffiths) exponents. The isobaric curves of the total density as functions of the fugacity of small or large particles display a minimum in the disordered phase.
Slew maneuvers on the SCOLE Laboratory Facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.
1987-01-01
The Spacecraft Control Laboratory Experiment (SCOLE) was conceived to provide a physical test bed for the investigation of control techniques for large flexible spacecraft. The control problems studied are slewing maneuvers and pointing operations. The slew is defined as a minimum time maneuver to bring the antenna line-of-sight (LOS) pointing to within an error limit of the pointing target. The second objective is to rotate about the LOS within the 0.02 degree error limit. The SCOLE problem is defined as two design challenges: control laws for a mathematical model of a large antenna attached to the Space Shuttle by a long flexible mast; and a control scheme on a laboratory representation of the structure modelled on the control laws. Control sensors and actuators are typical of those which the control designer would have to deal with on an actual spacecraft. Computational facilities consist of microcomputer based central processing units with appropriate analog interfaces for implementation of the primary control system, and the attitude estimation algorithm. Preliminary results of some slewing control experiments are given.
An opening criterion for dust gaps in protoplanetary discs
NASA Astrophysics Data System (ADS)
Dipierro, Giovanni; Laibe, Guillaume
2017-08-01
We aim to understand under which conditions a low-mass planet can open a gap in viscous dusty protoplanetary discs. For this purpose, we extend the theory of dust radial drift to include the contribution from the tides of an embedded planet and from the gas viscous forces. From this formalism, we derive (I) a grain-size-dependent criterion for dust gap opening in discs, (II) an estimate of the location of the outer edge of the dust gap and (III) an estimate of the minimum Stokes number above which low-mass planets are able to carve gaps that appear only in the dust disc. These analytical estimates are particularly helpful to appraise the minimum mass of a hypothetical planet carving gaps in discs observed at long wavelengths and high resolution. We validate the theory against 3D smoothed particle hydrodynamics simulations of planet-disc interaction in a broad range of dusty protoplanetary discs. We find a remarkable agreement between the theoretical model and the numerical experiments.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2010 CFR
2010-01-01
... pricing multipliers are derived from: • A model (the Statistical Model) that estimates the probability..., which is four basis points higher than the minimum rate. II. The Statistical Model The Statistical Model... to 1997. As a result, and as described in Table A.1, the Statistical Model is estimated using a...
PROPERTY APPRAISAL PROVIDES CONTROL, INSURANCE BASIS, AND VALUE ESTIMATE.
ERIC Educational Resources Information Center
THOMSON, JACK
A COMPLETE PROPERTY APPRAISAL SERVES AS A BASIS FOR CONTROL, INSURANCE AND VALUE ESTIMATE. A PROFESSIONAL APPRAISAL FIRM SHOULD PERFORM THIS FUNCTION BECAUSE (1) IT IS FAMILIAR WITH PROPER METHODS, (2) IT CAN PREPARE THE REPORT WITH MINIMUM CONFUSION AND INTERRRUPTION OF THE COLLEGE OPERATION, (3) USE OF ITS PRICING LIBRARY REDUCES TIME NEEDED AND…
NASA Astrophysics Data System (ADS)
McLaughlin, W. N.; Hopkins, S. S.
2013-12-01
Central Asia lies at a nexus both in terms of geology and evolutionary biogeography. With the convergence of the Indian and Asian plates creating high rates of deformation over broad regions, shortening of the Paleozoic and Mesozoic basement rocks has created a rich history of late Cenozoic sedimentary basins. In fact, Kyrgyzstan is the most seismically active country in the world. Additionally, Central Asia is a biogeographic crossroads, facilitating the intercontinental migrations of distant faunas from North American, Europe, Africa, and Southern Asia. With such an active geologic and biological evolution, the usefulness of temporal constraints is apparent. However, the continental collision environment has provided few volcanic rocks suitable for radiometric dating. Therefore, while less precise, the biostratigraphic analysis of Central Asia I present is an ideal method for both establishing ages and correlating between disparate basins. The last several decades provided great advancements in quantitative biostratigraphic methods applied to marine microfossils from drill cores. While these newer methods such as RASC (ranking and scaling), its sister program CASC, and CONOP (constrained optimization) provide a clear improvement over older methods such as graphic correlation, they have yet to be applied to terrestrial vertebrate faunas. Graphic correlation only allows comparison between two stratigraphic columns at a time and is heavily weighted by the initial selection of a type section. Both RASC and CONOP compare all stratigraphic sections simultaneously, eliminating type section bias. Previous vertebrate biostratigraphy methods attempted to predict FADs and LADs with the assumption they are generally minimum estimates. RASC instead establishes average stratigraphic ranges for each taxon and with CASC actually provides confidence intervals for each prediction, reducing the potential error resulting from reworking. CONOP generates maximum stratigraphic ranges observed in all sections, yet also includes error bars for the estimates of each biological event such as an extinction or origination. Used in conjunction, RASC, CASC, and CONOP provide both a solid evaluation of land mammal ages or zones for Central Asia and a predictive composite column for new late Cenozoic fossil localities. With a high degree of endemicity and migration, Central Asia cannot rely on the European Neogene Mammal Zones. This study aims to support and evaluate the emerging Asian biostratigraphic and geochronologic framework. With little fossil material currently collected from Kyrgyzstan, this study also sets a temporal framework for future paleontological work. Material is included from countries with much better constrained biostratigraphic records, preferably associated with existing radiometric dates. Specifically included were sites from Asiatic Russia, Mongolia, Western China, India, and Nepal. This geographic range is selected both to preserve the signal of faunas endemic to the Himalayan and Tibetan highlands, but also to provide a large enough sample to account for the well-known problems with the terrestrial fossil record such as high sampling errors and diachrony.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadaland SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
The effect of atmospheric drag on the design of solar-cell power systems for low Earth orbit
NASA Technical Reports Server (NTRS)
Kyser, A. C.
1983-01-01
The feasibility of reducing the atmospheric drag of low orbit solar powered satellites by operating the solar-cell array in a minimum-drag attitude, rather than in the conventional Sun pointing attitude was determined. The weights of the solar array, the energy storage batteries, and the fuel required to overcome the drag of the solar array for a range of design life times in orbit were considered. The drag of the array was estimated by free molecule flow theory, and the system weights were calculated from unit weight estimates for 1990 technology. The trailing, minimum drag system was found to require 80% more solar array area, and 30% more battery capacity, the system weights for reasonable life times were dominated by the thruster fuel requirements.
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness for one or many machines and makespan is minimized. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) are considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
Konty, Kevin J.; Van Wye, Gretchen; Barbot, Oxiris; Hadler, James L.; Linos, Natalia; Bassett, Mary T.
2016-01-01
Objectives. To assess potential reductions in premature mortality that could have been achieved in 2008 to 2012 if the minimum wage had been $15 per hour in New York City. Methods. Using the 2008 to 2012 American Community Survey, we performed simulations to assess how the proportion of low-income residents in each neighborhood might change with a hypothetical $15 minimum wage under alternative assumptions of labor market dynamics. We developed an ecological model of premature death to determine the differences between the levels of premature mortality as predicted by the actual proportions of low-income residents in 2008 to 2012 and the levels predicted by the proportions of low-income residents under a hypothetical $15 minimum wage. Results. A $15 minimum wage could have averted 2800 to 5500 premature deaths between 2008 and 2012 in New York City, representing 4% to 8% of total premature deaths in that period. Most of these avertable deaths would be realized in lower-income communities, in which residents are predominantly people of color. Conclusions. A higher minimum wage may have substantial positive effects on health and should be considered as an instrument to address health disparities. PMID:27077350
Tsao, Tsu-Yu; Konty, Kevin J; Van Wye, Gretchen; Barbot, Oxiris; Hadler, James L; Linos, Natalia; Bassett, Mary T
2016-06-01
To assess potential reductions in premature mortality that could have been achieved in 2008 to 2012 if the minimum wage had been $15 per hour in New York City. Using the 2008 to 2012 American Community Survey, we performed simulations to assess how the proportion of low-income residents in each neighborhood might change with a hypothetical $15 minimum wage under alternative assumptions of labor market dynamics. We developed an ecological model of premature death to determine the differences between the levels of premature mortality as predicted by the actual proportions of low-income residents in 2008 to 2012 and the levels predicted by the proportions of low-income residents under a hypothetical $15 minimum wage. A $15 minimum wage could have averted 2800 to 5500 premature deaths between 2008 and 2012 in New York City, representing 4% to 8% of total premature deaths in that period. Most of these avertable deaths would be realized in lower-income communities, in which residents are predominantly people of color. A higher minimum wage may have substantial positive effects on health and should be considered as an instrument to address health disparities.
Prevention of blood-borne HIV transmission using a decentralized approach in Shaba, Zaire.
Laleman, G; Magazani, K; Perriëns, J H; Badibanga, N; Kapila, N; Konde, M; Selemani, U; Piot, P
1992-11-01
To prevent blood transfusion-acquired HIV infection with a decentralized approach to HIV screening of blood donors, using an instrument-free rapid test. Shaba province, Zaire (496,877 km2). The programme consisted of training health-care workers, distribution of a rapid HIV-antibody test (DuPont's HIVCHEK) for screening of all blood donations, and quality control of testing by a regional reference centre. Over a 2-year period, 11,940 rapid tests were distributed to 37 hospitals, covering 75% of all hospital beds outside the copper mine's health system in Shaba. Eighty-five per cent of the tests were used to screen blood donors (5.4% positive test rate) and 13% to test patients (39.7% positive test rate). At least 265 cases of HIV-positive blood donation were prevented, at an estimated cost of 137-279 ECU per case. Only 26% of initially positive specimens reached the central laboratory for supplemental testing, and sterile transfusion equipment and blood-grouping reagents were frequently unavailable. The lack of transport and communications and a deteriorating health system were major constraints. District hospitals in Africa are often long distances from major cities, difficult to reach for most of the year, and perform a small number of transfusions. In this context a classical centralized regional blood bank may not be a feasible option to ensure safe blood transfusions. However, safe blood transfusion can be achieved with a decentralized approach using a rapid test, provided that minimum standards of health-care services are available.
Tangborn, Wendell V.
1980-01-01
Snowmelt runoff is forecast with a statistical model that utilizes daily values of stream discharge, gaged precipitation, and maximum and minimum observations of air temperature. Synoptic observations of these variables are made at existing low- and medium-altitude weather stations, thus eliminating the difficulties and expense of new, high-altitude installations. Four model development steps are used to demonstrate the influence on prediction accuracy of basin storage, a preforecast test season, air temperature (to estimate ablation), and a prediction based on storage. Daily ablation is determined by a technique that employs both mean temperature and a radiative index. Radiation (both long- and short-wave components) is approximated by using the range in daily temperature, which is shown to be closely related to mean cloud cover. A technique based on the relationship between prediction error and prediction season weather utilizes short-term forecasts of precipitation and temperature to improve the final prediction. Verification of the model is accomplished by a split sampling technique for the 1960–1977 period. Short- term (5–15 days) predictions of runoff throughout the main snowmelt season are demonstrated for mountain drainages in western Washington, south-central Arizona, western Montana, and central California. The coefficient of prediction (Cp) based on actual, short-term predictions for 18 years is for Thunder Creek (Washington), 0.69; for South Fork Flathead River (Montana), 0.45; for the Black River (Arizona), 0.80; and for the Kings River (California), 0.80.
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
Ge, Jiwen; Wu, Shuyuan; Touré, Dado; Cheng, Lamei; Miao, Wenjie; Cao, Huafen; Pan, Xiaoying; Li, Jianfeng; Yao, Minmin; Feng, Liang
2017-12-01
The main purpose of this study conducted from August 2010 was to find biomass and productivity of epilithic algae and their relations to environmental factors and try to explore the restrictive factors affecting the growth of algae in the Gufu River, the one of the branches of Xiangxi River located in the Three Gorges Reservoir of the Yangtze River, Hubei Province, Central China. An improved method of in situ primary productivity measurement was utilized to estimate the primary production of the epilithic algae. It was shown that in rivers, lakes, and reservoirs, algae are the main primary producers and have a central role in the ecosystem. Chlorophyll a concentration and ash-free dry mass (AFDM) were estimated for epilithic algae of the Gufu River basin in Three Gorges Reservoir area. Environmental factors in the Gufu River ecosystem highlighted differences in periphyton chlorophyll a ranging from 1.49 mg m -2 (origin) to 69.58 mg m -2 (terminal point). The minimum and maximum gross primary productivity of epilithic algae were 96.12 and 1439.89 mg C m -2 day -1 , respectively. The mean net primary productivity was 290.24 mg C m -2 day -1 . The mean autotrophic index (AFDM:chlorophyll a) was 407.40. The net primary productivity, community respiration ratio (P/R ratio) ranged from 0.98 to 9.25 with a mean of 2.76, showed that autotrophic productivity was dominant in the river. Relationship between physicochemical characteristics and biomass was discussed through cluster and stepwise regression analysis which indicated that altitude, total nitrogen (TN), NO 3 - -N, and NH 4 + -N were significant environmental factors affecting the biomass of epilithic algae. However, a negative logarithmic relationship between altitude and the chlorophyll a of epilithic algae was high. The results also highlighted the importance of epilithic algae in maintaining the Gufu River basin ecosystems health.
Estimation of Flood-Frequency Discharges for Rural, Unregulated Streams in West Virginia
Wiley, Jeffrey B.; Atkins, John T.
2010-01-01
Flood-frequency discharges were determined for 290 streamgage stations having a minimum of 9 years of record in West Virginia and surrounding states through the 2006 or 2007 water year. No trend was determined in the annual peaks used to calculate the flood-frequency discharges. Multiple and simple least-squares regression equations for the 100-year (1-percent annual-occurrence probability) flood discharge with independent variables that describe the basin characteristics were developed for 290 streamgage stations in West Virginia and adjacent states. The regression residuals for the models were evaluated and used to define three regions of the State, designated as Eastern Panhandle, Central Mountains, and Western Plateaus. Exploratory data analysis procedures identified 44 streamgage stations that were excluded from the development of regression equations representative of rural, unregulated streams in West Virginia. Regional equations for the 1.1-, 1.5-, 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year flood discharges were determined by generalized least-squares regression using data from the remaining 246 streamgage stations. Drainage area was the only significant independent variable determined for all equations in all regions. Procedures developed to estimate flood-frequency discharges on ungaged streams were based on (1) regional equations and (2) drainage-area ratios between gaged and ungaged locations on the same stream. The procedures are applicable only to rural, unregulated streams within the boundaries of West Virginia that have drainage areas within the limits of the stations used to develop the regional equations (from 0.21 to 1,461 square miles in the Eastern Panhandle, from 0.10 to 1,619 square miles in the Central Mountains, and from 0.13 to 1,516 square miles in the Western Plateaus). The accuracy of the equations is quantified by measuring the average prediction error (from 21.7 to 56.3 percent) and equivalent years of record (from 2.0 to 70.9 years).
Cox, Murray P.; Mendez, Fernando L.; Karafet, Tatiana M.; Pilkington, Maya Metni; Kingan, Sarah B.; Destro-Bisol, Giovanni; Strassmann, Beverly I.; Hammer, Michael F.
2008-01-01
A 2.4-kb stretch within the RRM2P4 region of the X chromosome, previously sequenced in a sample of 41 globally distributed humans, displayed both an ancient time to the most recent common ancestor (e.g., a TMRCA of ∼2 million years) and a basal clade composed entirely of Asian sequences. This pattern was interpreted to reflect a history of introgressive hybridization from archaic hominins (most likely Asian Homo erectus) into the anatomically modern human genome. Here, we address this hypothesis by resequencing the 2.4-kb RRM2P4 region in 131 African and 122 non-African individuals and by extending the length of sequence in a window of 16.5 kb encompassing the RRM2P4 pseudogene in a subset of 90 individuals. We find that both the ancient TMRCA and the skew in non-African representation in one of the basal clades are essentially limited to the central 2.4-kb region. We define a new summary statistic called the minimum clade proportion (pmc), which quantifies the proportion of individuals from a specified geographic region in each of the two basal clades of a binary gene tree, and then employ coalescent simulations to assess the likelihood of the observed central RRM2P4 genealogy under two alternative views of human evolutionary history: recent African replacement (RAR) and archaic admixture (AA). A molecular-clock-based TMRCA estimate of 2.33 million years is a statistical outlier under the RAR model; however, the large variance associated with this estimate makes it difficult to distinguish the predictions of the human origins models tested here. The pmc summary statistic, which has improved power with larger samples of chromosomes, yields values that are significantly unlikely under the RAR model and fit expectations better under a range of archaic admixture scenarios. PMID:18202385
Kriticos, Darren J.; Brunel, Sarah; Ota, Noboru; Fried, Guillaume; Oude Lansink, Alfons G. J. M.; Panetta, F. Dane; Prasad, T. V. Ramachandra; Shabbir, Asad; Yaacoby, Tuvia
2015-01-01
Pest Risk Assessments (PRAs) routinely employ climatic niche models to identify endangered areas. Typically, these models consider only climatic factors, ignoring the ‘Swiss Cheese’ nature of species ranges due to the interplay of climatic and habitat factors. As part of a PRA conducted for the European and Mediterranean Plant Protection Organization, we developed a climatic niche model for Parthenium hysterophorus, explicitly including the effects of irrigation where it was known to be practiced. We then downscaled the climatic risk model using two different methods to identify the suitable habitat types: expert opinion (following the EPPO PRA guidelines) and inferred from the global spatial distribution. The PRA revealed a substantial risk to the EPPO region and Central and Western Africa, highlighting the desirability of avoiding an invasion by P. hysterophorus. We also consider the effects of climate change on the modelled risks. The climate change scenario indicated the risk of substantial further spread of P. hysterophorus in temperate northern hemisphere regions (North America, Europe and the northern Middle East), and also high elevation equatorial regions (Western Brazil, Central Africa, and South East Asia) if minimum temperatures increase substantially. Downscaling the climate model using habitat factors resulted in substantial (approximately 22–53%) reductions in the areas estimated to be endangered. Applying expert assessments as to suitable habitat classes resulted in the greatest reduction in the estimated endangered area, whereas inferring suitable habitats factors from distribution data identified more land use classes and a larger endangered area. Despite some scaling issues with using a globally conformal Land Use Systems dataset, the inferential downscaling method shows promise as a routine addition to the PRA toolkit, as either a direct model component, or simply as a means of better informing an expert assessment of the suitable habitat types. PMID:26325680
Psychological and educational interventions for subfertile men and women.
Verkuijlen, Jolijn; Verhaak, Christianne; Nelen, Willianne L D M; Wilkinson, Jack; Farquhar, Cindy
2016-03-31
Approximately one-fifth of all subfertile couples seeking fertility treatment show clinically relevant levels of anxiety, depression, or distress. Psychological and educational interventions are frequently offered to subfertile couples, but their effectiveness, both in improving mental health and pregnancy rates, is unclear. To assess the effectiveness of psychological and educational interventions for subfertile couples on psychological and fertility treatment outcomes. We searched (from inception to 2 April 2015) the Cochrane Gynaecology and Fertility Group Specialised Register of Controlled Trials, the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 2, 2015), MEDLINE, EMBASE, PsycINFO, EBSCO CINAHL, DARE, Web of Science, OpenGrey, LILACS, PubMed, and ongoing trials registers. We handsearched reference lists and contacted experts in the field. We included published and unpublished randomised controlled trials (RCTs), cluster randomised trials, and cross-over trials (first phase) evaluating the effectiveness of psychological and educational interventions on psychological and fertility treatment outcomes in subfertile couples. Two review authors independently assessed trial risk of bias and extracted data. We contacted study authors for additional information. Our primary outcomes were psychological measures (anxiety and depression) and fertility rates (live birth or ongoing pregnancy). We assessed the overall quality of the evidence using GRADE criteria.As we did not consider the included studies to be sufficiently similar to permit meaningful pooling, we summarised the results of the individual studies by presenting the median and interquartile range (IQR) of effects as well as the minimum and maximum values. We calculated standardised mean differences (SMDs) for continuous variables and odds ratios (ORs) for dichotomous outcomes. We included 39 studies involving 4925 participants undergoing assisted reproductive technology. Studies were heterogeneous with respect to a number of factors, including nature and duration of interventions, participants, and comparator groups. As a result, we judged that pooling results would not result in a clinically meaningful estimate of a treatment effect. There were substantial methodological weaknesses in the studies, all of which were judged to be at high risk of bias for one or more quality assessment domains. There was concern about attrition bias (24 studies), performance bias for psychological outcomes (27 studies) and fertility outcomes (18 studies), and detection bias for psychological outcomes (26 studies). We therefore considered study-specific estimates of intervention effects to be unreliable. Thirty-three studies reported the outcome mental health. Only two studies reported the outcome live birth, and both of these had substantial attrition. One study reported ongoing pregnancy, again with substantial attrition. We have combined live birth and ongoing pregnancy in one outcome. Psychological outcomesStudies utilised a variety of measures of anxiety and depression. In all cases a low score denoted benefit from the intervention.SMDs for anxiety were as follows: psychological interventions versus attentional control or usual care: median (IQR) = -0.30 (-0.84 to 0.00), minimum value -5.13; maximum value 0.84, 17 RCTs, 2042 participants; educational interventions versus attentional control or usual care: median = 0.03, minimum value -0.38; maximum value 0.23, 4 RCTs, 330 participants.SMDs for depression were as follows: psychological interventions versus attentional control or usual care: median (IQR) = -0.45 (-0.68 to -0.08), minimum value -3.01; maximum value 1.23, 12 RCTs, 1160 participants; educational interventions versus attentional control or usual care: median = -0.33, minimum value -0.46; maximum value 0.17, 3 RCTs, 304 participants. Fertility outcomesWhen psychological interventions were compared with attentional control or usual care, ORs for live birth or ongoing pregnancy ranged from minimum value 1.13 to maximum value 10.05. No studies of educational interventions reported this outcome. The effects of psychological and educational interventions on mental health including distress, and live birth or ongoing pregnancy rates is uncertain due to the very low quality of the evidence. Existing trials of psychological and educational interventions for subfertility were generally poorly designed and executed, resulting in very serious risk of bias and serious inconsistency in study findings. There is a need for studies employing appropriate methodological techniques to investigate the benefits of these treatments for this population. In particular, attentional control groups should be employed, that is groups receiving a treatment that mimics the amount of time and attention received by the treatment group but is not thought to have a specific effect upon the participants, in order to distinguish between therapeutic and non-specific effects of interventions. Where attrition cannot be minimised, appropriate statistical techniques for handling drop-out must be applied. Failure to address these issues in study design has resulted in studies that do not provide a valid basis for answering questions about the effectiveness of these interventions.
Active tectonics of the Qom region, Central Iran
NASA Astrophysics Data System (ADS)
Hollingsworth, J.; Fattahi, M.; Jackson, J. A.; Talebian, M.; Nazari, H.; Bahroudi, A.
2009-12-01
Between 50-57°E shortening across the Arabian-Eurasian collision zone is accommodated primarily in the Zagros and Alborz mountains of Iran, which bound the relatively aseismic Central Iranian block. Both the lack of seismicity and the minor variation in GPS velocities across Central Iran suggest this region plays a negligible role in accommodating Arabia-Eurasia shortening at the present day. We examine recent deformation in the Qom region, which lies 100 km south of Tehran within the Central Iran block. This region is notable for a number of large earthquakes over the last 30 years: 1980.12.18 (Mw 6.0), 1980.12.22 (Mw 5.7), and 2007.06.18 (Mw 5.4). Body-waveform modeling of these events indicates N-S shortening on a S-dipping thrust fault which projects to the surface along the Qom thrust. Evidence for longer-term uplift is indicated by the increased topography south of the fault, and the exposure of folded Miocene (U. Red Fmtn) and Late Oligocene (Qom Fmtn) deposits. River incision has resulted in numerous river terraces, and in one location an alluvial fan has been offset across the fault. Four samples were collected from the surface of this fan and their ages determined using OSL dating. The results indicate fan abandonment at ~30 kybp. A DEM of the fan was produced using kinematic GPS surveying data, from which 1.0±0.3 m vertical offset was measured. A minimum uplift rate of 0.02 mm/yr and a minimum shortening rate of 0.01 mm/yr are obtained. If the age of the lower (and youngest) terrace is 10 ky, as is typically seen in other locations throughout Iran, the likely range of uplift rates are 0.02-0.2 mm/yr and shortening rates 0.01-0.2 mm/yr. North of Qom city, U. Red Fmtn deposits have been folded into an asymmetric N-verging anticline known as the Alborz anticline. Seismic, well and surface data all indicate this structure has formed as a fault-bend fold above a decollement at 3 km depth which ramps to the surface along the northern limit of the fold. A balanced cross section indicates ~18% shortening (1.5 km) in a period bracketed by the Upper Red Fmtn (<18 Ma) and the Pliocene (>5.3 Ma), yielding shortening rates of 0.1-0.3 mm/yr. The right-lateral Kashan fault lies SE of the Qom region, and appears to be kinematically linked to the thrust faults around Qom, which probably represent thrust terminations. Historical earthquakes have occurred on the Kashan fault, and clear evidence for recent movement is seen in the Quaternary geomorphology. Reconstruction of the geology across the Kashan fault indicates ~45 km of total right-lateral motion, which suggests it has played a significant role in the accommodation of regional shortening. Late Cenozoic estimates of N-S shortening in the Qom region are 0.03-0.5 mm/yr. The difference in GPS velocities north and south of Qom indicates 1.1±1.9 mm/yr shortening across this region. This study suggests that Central Iran plays an important role in accommodating Arabia-Eurasia shortening over Quaternary to geological timescales. Efforts should be made to better constrain the seismic hazard posed by active faults to large populations in the Central Iran region.
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
32 CFR 286.30 - Collection of fees and fee rates for technical data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...
USDA-ARS?s Scientific Manuscript database
Increased emphasis has been placed on developing agroecosystems that are inherently resistant and resilient to external stressors, yet are highly productive, economically competitive, and environmentally benign. As part of a long-term study to evaluate effects of crop sequence and tillage on crop yi...
NASA Astrophysics Data System (ADS)
Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad
2015-10-01
In this paper, vehicle longitudinal velocity during the braking process is estimated by measuring the wheels speed. Here, a new algorithm based on the unknown input Kalman filter is developed to estimate the vehicle longitudinal velocity with a minimum mean square error and without using the value of braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. Effectiveness of the method is shown by designing a real experiment and comparing the estimation result with actual longitudinal velocity computing from a three-axis accelerometer output.
Microdisk Injection Lasers for the 1.27-μm Spectral Range
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kryzhanovskaya, N. V.; Maximov, M. V.; Blokhin, S. A.
2016-03-15
Microdisk injection lasers on GaAs substrates, with a minimum diameter of 15 μm and an active region based on InAs/InGaAs quantum dots, are fabricated. The lasers operate in the continuous-wave mode at room temperature without external cooling. The lasing wavelength is around 1.27 μm at a minimum threshold current of 1.6 mA. The specific thermal resistance is estimated to be 5 × 10–3 °C cm{sup 2}/W.
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
Extreme Brightness Temperatures and Refractive Substructure in 3C273 with RadioAstron
NASA Astrophysics Data System (ADS)
Johnson, Michael D.; Kovalev, Yuri Y.; Gwinn, Carl R.; Gurvits, Leonid I.; Narayan, Ramesh; Macquart, Jean-Pierre; Jauncey, David L.; Voitsik, Peter A.; Anderson, James M.; Sokolovsky, Kirill V.; Lisakov, Mikhail M.
2016-03-01
Earth-space interferometry with RadioAstron provides the highest direct angular resolution ever achieved in astronomy at any wavelength. RadioAstron detections of the classic quasar 3C 273 on interferometric baselines up to 171,000 km suggest brightness temperatures exceeding expected limits from the “inverse-Compton catastrophe” by two orders of magnitude. We show that at 18 cm, these estimates most likely arise from refractive substructure introduced by scattering in the interstellar medium. We use the scattering properties to estimate an intrinsic brightness temperature of 7× {10}12 {{K}}, which is consistent with expected theoretical limits, but which is ˜15 times lower than estimates that neglect substructure. At 6.2 cm, the substructure influences the measured values appreciably but gives an estimated brightness temperature that is comparable to models that do not account for the substructure. At 1.35 {{cm}}, the substructure does not affect the extremely high inferred brightness temperatures, in excess of {10}13 {{K}}. We also demonstrate that for a source having a Gaussian surface brightness profile, a single long-baseline estimate of refractive substructure determines an absolute minimum brightness temperature, if the scattering properties along a given line of sight are known, and that this minimum accurately approximates the apparent brightness temperature over a wide range of total flux densities.
Meier, Petra S.; Holmes, John; Angus, Colin; Ally, Abdallah K.; Meng, Yang; Brennan, Alan
2016-01-01
Introduction While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO “best buy” intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. Methods and Findings An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, −3.2%; value-based tax, −2.9%; strength-based tax, −6.1%; minimum unit pricing, −7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, −1.3%; value-based tax, −1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, −3.6% [95% uncertainty interval (UI) −6.1%, −0.6%]; value-based tax, −3.3% [UI −5.1%, −1.7%]; strength-based tax, −7.5% [UI −13.7%, −3.9%]; minimum unit pricing, −10.3% [UI −10.3%, −7.0%]) and professional/managerial occupation groups (current tax increase, −1.8% [UI −4.7%, +1.6%]; value-based tax, −1.9% [UI −3.6%, +0.4%]; strength-based tax, −0.8% [UI −6.9%, +4.0%]; minimum unit pricing, −0.7% [UI −5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Conclusions Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation. PMID:26905063
Electron gun for a multiple beam klystron with magnetic compression of the electron beams
Ives, R. Lawrence; Tran, Hien T; Bui, Thuc; Attarian, Adam; Tallis, William; David, John; Forstall, Virginia; Andujar, Cynthia; Blach, Noah T; Brown, David B; Gadson, Sean E; Kiley, Erin M; Read, Michael
2013-10-01
A multi-beam electron gun provides a plurality N of cathode assemblies comprising a cathode, anode, and focus electrode, each cathode assembly having a local cathode axis and also a central cathode point defined by the intersection of the local cathode axis with the emitting surface of the cathode. Each cathode is arranged with its central point positioned in a plane orthogonal to a device central axis, with each cathode central point an equal distance from the device axis and with an included angle of 360/N between each cathode central point. The local axis of each cathode has a cathode divergence angle with respect to the central axis which is set such that the diverging magnetic field from a solenoidal coil is less than 5 degrees with respect to the projection of the local cathode axis onto a cathode reference plane formed by the device axis and the central cathode point, and the local axis of each cathode is also set such that the angle formed between the cathode reference plane and the local cathode axis results in minimum spiraling in the path of the electron beams in a homogenous magnetic field region of the solenoidal field generator.
Optimal estimation of the optomechanical coupling strength
NASA Astrophysics Data System (ADS)
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimate from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages, provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604. On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
Regulating the medical loss ratio: implications for the individual market.
Abraham, Jean M; Karaca-Mandic, Pinar
2011-03-01
To provide state-level estimates of the size and structure of the US individual market for health insurance and to investigate the potential impact of new medical loss ratio (MLR) regulation in 2011, as indicated by the Patient Protection and Affordable Care Act (PPACA). Using data from the National Association of Insurance Commissioners, we provided state-level estimates of the size and structure of the US individual market from 2002 to 2009. We estimated the number of insurers expected to have MLRs below the legislated minimum and their corresponding enrollment. In the case of noncompliant insurers exiting the market, we estimated the number of enrollees that may be vulnerable to major coverage disruption given poor health status. In 2009, using a PPACA-adjusted MLR definition, we estimated that 29% of insurer-state observations in the individual market would have MLRs below the 80% minimum, corresponding to 32% of total enrollment. Nine states would have at least one-half of their health insurers below the threshold. If insurers below the MLR threshold exit the market, major coverage disruption could occur for those in poor health; we estimated the range to be between 104,624 and 158,736 member-years. The introduction of MLR regulation as part of the PPACA has the potential to significantly affect the functioning of the individual market for health insurance.
NASA Technical Reports Server (NTRS)
Gautier, C.
1986-01-01
The evolution of the net shortwave (NSW) radiation fields during the monsoon of 1979 was analyzed, using geostationary satellite data, collected before, during, and after the monsoon onset. It is seen, from the time sequence of NSW fields, that during the preonset phase the characteristics of the NSW field are dominated by a strong maximum in the entire Arabian Sea and by a strong minimum in the central and eastern equatorial Indian Ocean, the minimum being associated with the intense convective activity occurring in that region. As the season evolves, the minima of NSW associated with the large scale convective activity propagate westward in the equatorial ocean. During the monsoon onset, there occurs an explosive onset of the convection activity in the Arabian Sea: the maximum has retreated towards the Somalia coast, and most of the sea then experiences a strong minimum of NSW associated with the intense precipitation occurring along the southwestern coast of the Indian subcontinent.
Optimization of HTST process parameters for production of ready-to-eat potato-soy snack.
Nath, A; Chattopadhyay, P K; Majumdar, G C
2012-08-01
Ready-to-eat (RTE) potato-soy snacks were developed using high temperature short time (HTST) air puffing process and the process was found to be very useful for production of highly porous and light texture snack. The process parameters considered viz. puffing temperature (185-255 °C) and puffing time (20-60 s) with constant initial moisture content of 36.74% and air velocity of 3.99 m.s(-1) for potato-soy blend with varying soy flour content from 5% to 25% were investigated using response surface methodology following central composite rotatable design (CCRD). The optimum product in terms of minimum moisture content (11.03% db), maximum expansion ratio (3.71), minimum hardness (2,749.4 g), minimum ascorbic acid loss (9.24% db) and maximum overall acceptability (7.35) were obtained with 10.0% soy flour blend in potato flour at the process conditions of puffing temperature (231.0 °C) and puffing time (25.0 s).
NASA Astrophysics Data System (ADS)
Bachman, Richard T.; Hamilton, Edwin L.; Curray, Joseph R.
1983-11-01
Supplement is available with entire article on microfiche. Order from American Geophysical Union, 2000 Florida Avenue, N.W., Washington, DC 20009. Document B83-007; $2.50. Payment must accompany order. Measurements of mean sound velocities in the first, largely unlithified layers in the seafloor were made using the sonobuoy technique in several areas in the northern Indian Ocean. Older measurements were added to new measurements, and regressions for mean and instantaneous velocity versus one-way travel time of sound are presented for the central Bengal Fan, the central Andaman Sea Basin, the Nicobar Fan, and the Sunda Trench. New data and regression equations are presented for the Mergui-north Sumatra Basin and for four forearc basins between Sumatra and Java and the Sunda Trench. Minimum velocity gradients were found in those areas where sedimentation rates were high, and sediments have accumulated in thick sections which have not had time to fully consolidate (porosity in the top of the sediment section has not been fully reduced under overburden pressure). These minimum velocity gradients (just under the seafloor) were found in the four forearc basins where they ranged from 0.34 s-1 to 0.84 s-1 with an average of 0.58 s-1. The near-surface velocity gradient in the Sunda Trench was 1.33 s-1, but was higher in the adjacent, fossil Nicobar Fan (1.62 s-1). In the surface of the Bengal Fan the velocity gradient was low in the upper fan (0.86 s-1), high in the central fan (1.94 s-1), and again lower in the southern fan (1.18 s-1), which may support sedimentation models calling for bypassing of the central fan and higher rates of accumulation on the southern fan.
Assessment of crustal velocity models using seismic refraction and reflection tomography
NASA Astrophysics Data System (ADS)
Zelt, Colin A.; Sain, Kalachand; Naumenko, Julia V.; Sawyer, Dale S.
2003-06-01
Two tomographic methods for assessing velocity models obtained from wide-angle seismic traveltime data are presented through four case studies. The modelling/inversion of wide-angle traveltimes usually involves some aspects that are quite subjective. For example: (1) identifying and including later phases that are often difficult to pick within the seismic coda, (2) assigning specific layers to arrivals, (3) incorporating pre-conceived structure not specifically required by the data and (4) selecting a model parametrization. These steps are applied to maximize model constraint and minimize model non-uniqueness. However, these steps may cause the overall approach to appear ad hoc, and thereby diminish the credibility of the final model. The effect of these subjective choices can largely be addressed by estimating the minimum model structure required by the least subjective portion of the wide-angle data set: the first-arrival times. For data sets with Moho reflections, the tomographic velocity model can be used to invert the PmP times for a minimum-structure Moho. In this way, crustal velocity and Moho models can be obtained that require the least amount of subjective input, and the model structure that is required by the wide-angle data with a high degree of certainty can be differentiated from structure that is merely consistent with the data. The tomographic models are not intended to supersede the preferred models, since the latter model is typically better resolved and more interpretable. This form of tomographic assessment is intended to lend credibility to model features common to the tomographic and preferred models. Four case studies are presented in which a preferred model was derived using one or more of the subjective steps described above. This was followed by conventional first-arrival and reflection traveltime tomography using a finely gridded model parametrization to derive smooth, minimum-structure models. The case studies are from the SE Canadian Cordillera across the Rocky Mountain Trench, central India across the Narmada-Son lineament, the Iberia margin across the Galicia Bank, and the central Chilean margin across the Valparaiso Basin and a subducting seamount. These case studies span the range of modern wide-angle experiments and data sets in terms of shot-receiver spacing, marine and land acquisition, lateral heterogeneity of the study area, and availability of wide-angle reflections and coincident near-vertical reflection data. The results are surprising given the amount of structure in the smooth, tomographically derived models that is consistent with the more subjectively derived models. The results show that exploiting the complementary nature of the subjective and tomographic approaches is an effective strategy for the analysis of wide-angle traveltime data.
Is the difference between chemical and numerical estimates of baseflow meaningful?
NASA Astrophysics Data System (ADS)
Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald
2014-05-01
Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. The local minimum and recursive digital filters aggregate much of the water from delayed sources as baseflow. However, as many of these delayed transient water stores (such as bank return flow, floodplain storage, or interflow) have Cl concentrations that are similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occurs on the rising limbs of hydrographs followed by inflows of low-salinity water from the transient stores as discharge falls. The use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
Impact of calibration on estimates of central blood pressures.
Soender, T K; Van Bortel, L M; Møller, J E; Lambrechtsen, J; Hangaard, J; Egstrup, K
2012-12-01
Using the Sphygmocor device it is recommended that the radial pressure wave is calibrated for brachial systolic blood pressure (SBP) and diastolic blood pressure (DBP). However it has been suggested that brachial-to-radial pressure amplification causes underestimation of central blood pressures (BPs) using this calibration. In the present study we examined if different calibrations had an impact on estimates of central BPs and on the clinical interpretation of our results. On the basis of ambulatory BP measurements, patients were categorized into patients with controlled, uncontrolled or resistant hypertension. We first calibrated the radial pressure wave as recommended and afterwards recalibrated the same pressure wave using brachial DBP and calculated mean arterial pressure. Recalibration of the pressure wave generated significantly higher estimates of central SBP (P=0.0003 and P<0.0001 at baseline and P=0.0001 and P=0.0002 after 6 months). Using recommended calibration we found a significant change in central SBP in both treatment groups (P=0.05 and P=0.01), however, after recalibrating significance was lost in patients with resistant hypertension (P=0.15). We conclude that calibration with DBP and mean arterial pressure produces higher estimates of central BPs than recommended calibration. The present study also shows that this difference between the two calibration methods can produce more than a systematic error and has an impact on interpretation of clinical results.
USDA-ARS?s Scientific Manuscript database
Two experiments were conducted to evaluate the effect of bait delivery rate on methane emission estimates measured by a GreenFeed system (GFS; C-Lock, Inc., Rapid City, SD). The manufacture recommends that cattle have a minimum visit time of 3 minutes so that at least 3 eructations are captured to ...
D.A. Sampson; T.J. Albaugh; Kurt H. Johnsen; H.L. Allen; Stanley J. Zarnoch
2003-01-01
Abstract: Leaf area index (LAI) of loblolly pine (Pinus taeda L.) trees of the southern United States varies almost twofold interannually; loblolly pine, essentially, carries two foliage cohorts at peak LAI (September) and one at minimum (MarchâApril). Herein, we present an approach that may be site invariant to estimate monthly...
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
Anticipating Cycle 24 Minimum and its Consequences: An Update
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
This Technical Publication updates estimates for cycle 24 minimum and discusses consequences associated with cycle 23 being a longer than average period cycle and cycle 24 having parametric minimum values smaller (or larger for the case of spotless days) than long term medians. Through December 2007, cycle 23 has persisted 140 mo from its 12-mo moving average (12-mma) minimum monthly mean sunspot number occurrence date (May 1996). Longer than average period cycles of the modern era (since cycle 12) have minimum-to-minimum periods of about 139.0+/-6.3 mo (the 90-percent prediction interval), inferring that cycle 24 s minimum monthly mean sunspot number should be expected before July 2008. The major consequence of this is that, unless cycle 24 is a statistical outlier (like cycle 21), its maximum amplitude (RM) likely will be smaller than previously forecast. If, however, in the course of its rise cycle 24 s 12-mma of the weighted mean latitude (L) of spot groups exceeds 24 deg, then one expects RM >131, and if its 12-mma of highest latitude (H) spot groups exceeds 38 deg, then one expects RM >127. High-latitude new cycle spot groups, while first reported in January 2008, have not, as yet, become the dominant form of spot groups. Minimum values in L and H were observed in mid 2007 and values are now slowly increasing, a precondition for the imminent onset of the new sunspot cycle.
Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.
2011-01-01
We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500
NASA Astrophysics Data System (ADS)
Heineke, Caroline; Hetzel, Ralf; Akal, Cüneyt; Christl, Marcus
2017-11-01
The functionality and retention capacity of water reservoirs is generally impaired by upstream erosion and reservoir sedimentation, making a reliable assessment of erosion indispensable to estimate reservoir lifetimes. Widely used river gauging methods may underestimate sediment yield, because they do not record rare, high-magnitude events and may underestimate bed load transport. Hence, reservoir lifetimes calculated from short-term erosion rates should be regarded as maximum values. We propose that erosion rates from cosmogenic 10Be, which commonly integrate over hundreds to thousands of years, are useful to complement short-term sediment yield estimates and should be employed to estimate minimum reservoir lifetimes. Here we present 10Be erosion rates for the drainage basins of six water reservoirs in Western Turkey, which are located in a tectonically active region with easily erodible bedrock. Our 10Be erosion rates for these catchments are high, ranging from ˜170 to ˜1,040 t/km2/yr. When linked to reservoir volumes, they yield minimum reservoir lifetimes between 25 ± 5 and 1,650 ± 360 years until complete filling, with four reservoirs having minimum lifespans of ≤110 years. In a neighboring region with more resistant bedrock and less tectonic activity, we obtain much lower catchment-wide 10Be erosion rates of ˜33 to ˜95 t/km2/yr, illustrating that differences in lithology and tectonic boundary conditions can cause substantial variations in erosion even at a spatial scale of only ˜50 km. In conclusion, we suggest that both short-term sediment yield estimates and 10Be erosion rates should be employed to predict the lifetimes of reservoirs.
Price, Stephen F; Payne, Antony J; Howat, Ian M; Smith, Benjamin E
2011-05-31
We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland's three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing.
Probable flood predictions in ungauged coastal basins of El Salvador
Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.
2008-01-01
A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. ?? 2008 ASCE.
Trends in Arctic Sea Ice Volume 2010-2013 from CryoSat-2
NASA Astrophysics Data System (ADS)
Tilling, R.; Ridout, A.; Wingham, D.; Shepherd, A.; Haas, C.; Farrell, S. L.; Schweiger, A. J.; Zhang, J.; Giles, K.; Laxon, S.
2013-12-01
Satellite records show a decline in Arctic sea ice extent over the past three decades with a record minimum in September 2012, and results from the Pan-Arctic Ice-Ocean Modelling and Assimilation System (PIOMAS) suggest that this has been accompanied by a reduction in volume. We use three years of measurements recorded by the European Space Agency CryoSat-2 (CS-2) mission, validated with in situ data, to generate estimates of seasonal variations and inter-annual trends in Arctic sea ice volume between 2010 and 2013. The CS-2 estimates of sea ice thickness agree with in situ estimates derived from upward looking sonar measurements of ice draught and airborne measurements of ice thickness and freeboard to within 0.1 metres. Prior to the record minimum in summer 2012, autumn and winter Arctic sea ice volume had fallen by ~1300 km3 relative to the previous year. Using the full 3-year period of CS-2 observations, we estimate that winter Arctic sea ice volume has decreased by ~700 km3/yr since 2010, approximately twice the average rate since 1980 as predicted by the PIOMAS.
Optimal lunar soft landing trajectories using taboo evolutionary programming
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
A safe lunar landing is a key factor to undertake an effective lunar exploration. Lunar lander consists of four phases such as launch phase, the earth-moon transfer phase, circumlunar phase and landing phase. The landing phase can be either hard landing or soft landing. Hard landing means the vehicle lands under the influence of gravity without any deceleration measures. However, soft landing reduces the vertical velocity of the vehicle before landing. Therefore, for the safety of the astronauts as well as the vehicle lunar soft landing with an acceptable velocity is very much essential. So it is important to design the optimal lunar soft landing trajectory with minimum fuel consumption. Optimization of Lunar Soft landing is a complex optimal control problem. In this paper, an analysis related to lunar soft landing from a parking orbit around Moon has been carried out. A two-dimensional trajectory optimization problem is attempted. The problem is complex due to the presence of system constraints. To solve the time-history of control parameters, the problem is converted into two point boundary value problem by using the maximum principle of Pontrygen. Taboo Evolutionary Programming (TEP) technique is a stochastic method developed in recent years and successfully implemented in several fields of research. It combines the features of taboo search and single-point mutation evolutionary programming. Identifying the best unknown parameters of the problem under consideration is the central idea for many space trajectory optimization problems. The TEP technique is used in the present methodology for the best estimation of initial unknown parameters by minimizing objective function interms of fuel requirements. The optimal estimation subsequently results into an optimal trajectory design of a module for soft landing on the Moon from a lunar parking orbit. Numerical simulations demonstrate that the proposed approach is highly efficient and it reduces the minimum fuel consumption. The results are compared with the available results in literature shows that the solution of present algorithm is better than some of the existing algorithms. Keywords: soft landing, trajectory optimization, evolutionary programming, control parameters, Pontrygen principle.
Hydrological Retrospective of floods and droughts: Case study in the Amazon
NASA Astrophysics Data System (ADS)
Wongchuig Correa, Sly; Cauduro Dias de Paiva, Rodrigo; Carlo Espinoza Villar, Jhan; Collischonn, Walter
2017-04-01
Recent studies have reported an increase in intensity and frequency of hydrological extreme events in many regions of the Amazon basin over last decades, these events such as seasonal floods and droughts have originated a significant impact in human and natural systems. Recently, methodologies such as climatic reanalysis are being developed in order to create a coherent register of climatic systems, thus taking this notion, this research efforts to produce a methodology called Hydrological Retrospective (HR), that essentially simulate large rainfall datasets over hydrological models in order to develop a record over past hydrology, enabling the analysis of past floods and droughts. We developed our methodology on the Amazon basin, thus we used eight large precipitation datasets (more than 30 years) through a large scale hydrological and hydrodynamic model (MGB-IPH), after that HR products were validated against several in situ discharge gauges dispersed throughout Amazon basin, given focus in maximum and minimum events. For better HR results according performance metrics, we performed a forecast skill of HR to detect floods and droughts considering in-situ observations. Furthermore, statistical temporal series trend was performed for intensity of seasonal floods and drought in the whole Amazon basin. Results indicate that better HR represented well most past extreme events registered by in-situ observed data and also showed coherent with many events cited by literature, thus we consider viable to use some large precipitation datasets as climatic reanalysis mainly based on land surface component and datasets based in merged products for represent past regional hydrology and seasonal hydrological extreme events. On the other hand, an increase trend of intensity was realized for maximum annual discharges (related to floods) in north-western regions and for minimum annual discharges (related to drought) in central-south regions of the Amazon basin, these features were previously detected by other researches. In the whole basin, we estimated an upward trend of maximum annual discharges at Amazon River. In order to estimate better future hydrological behavior and their impacts on the society, HR could be used as a methodology to understand past extreme events occurrence in many places considering the global coverage of rainfall datasets.
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Generalizing boundaries for triangular designs, and efficacy estimation at extended follow-ups.
Allison, Annabel; Edwards, Tansy; Omollo, Raymond; Alves, Fabiana; Magirr, Dominic; E Alexander, Neal D
2015-11-16
Visceral leishmaniasis (VL) is a parasitic disease transmitted by sandflies and is fatal if left untreated. Phase II trials of new treatment regimens for VL are primarily carried out to evaluate safety and efficacy, while pharmacokinetic data are also important to inform future combination treatment regimens. The efficacy of VL treatments is evaluated at two time points, initial cure, when treatment is completed and definitive cure, commonly 6 months post end of treatment, to allow for slow response to treatment and detection of relapses. This paper investigates a generalization of the triangular design to impose a minimum sample size for pharmacokinetic or other analyses, and methods to estimate efficacy at extended follow-up accounting for the sequential design and changes in cure status during extended follow-up. We provided R functions that generalize the triangular design to impose a minimum sample size before allowing stopping for efficacy. For estimation of efficacy at a second, extended, follow-up time, the performance of a shrinkage estimator (SHE), a probability tree estimator (PTE) and the maximum likelihood estimator (MLE) for estimation was assessed by simulation. The SHE and PTE are viable approaches to estimate an extended follow-up although the SHE performed better than the PTE: the bias and root mean square error were lower and coverage probabilities higher. Generalization of the triangular design is simple to implement for adaptations to meet requirements for pharmacokinetic analyses. Using the simple MLE approach to estimate efficacy at extended follow-up will lead to biased results, generally over-estimating treatment success. The SHE is recommended in trials of two or more treatments. The PTE is an acceptable alternative for one-arm trials or where use of the SHE is not possible due to computational complexity. NCT01067443 , February 2010.
NASA Astrophysics Data System (ADS)
Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha
2017-07-01
After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300mm minor axis and a minimum 5,000mm major axis for the isolation room itself, and a minimum 1,800mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 ㎡-or-larger standard for the floor area of isolation rooms will have to be reviewed and standards for the minimum width of isolation rooms will have to be established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutton, Spencer M.; Fisk, William J.
For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% asmore » the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than in the retail building models, resulting in significantly reduced demand for heating. The school heating energy use was correspondingly less sensitive to changes in the minimum VR. The modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor generated contaminant, with larger effects in schools. The shape of the curves relating contaminant concentrations with VRs illustrate the importance of avoiding particularly low VRs.« less
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
NASA Technical Reports Server (NTRS)
Pendley, Robert E; Robinson, Harold L
1950-01-01
An investigation of three NACA 1-series nose inlets, two of which were fitted with protruded central bodies, was conducted in the Langley 8-foot high-speed tunnel. An elliptical-nose body, which had a critical Mach number approximately equal to that of one of the nose inlets, was also tested. Tests were made near zero angle of attack for a Mach number range from 0.4 to 0.925 and for the supersonic Mach number of 1.2. The inlet-velocity-ratio range extended from zero to a maximum value of 1.34. Measurements included pressure distribution, external drag, and total-pressure loss of the internal flow near the inlet. Drag was not measured for the tests at the supersonic Mach number. Over the range of inlet-velocity ratio investigated, the calculated external pressure-drag coefficient at a Mach number of 1.2 was consecutively lower for the nose inlets of higher critical Mach number, and the pressure-drag coefficient of the longest nose inlet was in the range of pressure-drag coefficient for two solid noses of fineness ratio 2.4 and 6.0. For Mach numbers below the Mach number of the supercritical drag rise, extrapolation of the test data indicated that the external drag of the nose inlets was little affected by the addition of central bodies at or slightly below the minimum inlet-velocity ratio for unseparated central-body flow. The addition of central bodies to the nose inlets also led to no appreciable effects on either the Mach number of the supercritical drag rise, or, for inlet-velocity ratios high enough to avoid a pressure peak at the inlet lip, on the critical Mach number. The total-pressure recovery of the inlets tested, which were of a subsonic type, was sensibly unimpaired at the supersonic Mach number of 1.2 Low-speed measurements of the minimum inlet-velocity ratio for unseparated central-body flow appear to be applicable for Mach numbers extending to 1.2.
Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.; ...
2016-03-10
The metastable ring structure of the ozone 1 1A 1 ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two 1A 1 states. In the present work, valence correlated energies of the 1 1A 1 state and the 2 1A 1 state were calculated at the 1 1A 1 open minimum, the 1 1A 1 ring minimum, themore » transition state between these two minima, the minimum of the 2 1A 1 state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. The CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of CorrelationEnergy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 11A1 state, the present calculations yield the estimates of (ring minimum—open minimum) ~45–50 mh and (transition state—open minimum) ~85–90 mh. For the (2 1A 1– 1A 1) excitation energy, the estimate of ~130–170 mh is found at the open minimum and 270–310 mh at the ring minimum. At the transition state, the difference (2 1A 1– 1A 1) is found to be between 1 and 10 mh. The geometry of the transition state on the 11A1 surface and that of the minimum on the 2 1A 1 surface nearly coincide. More accurate predictions of the energydifferences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh with respect to the energy differences.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.
The metastable ring structure of the ozone 1 1A 1 ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two 1A 1 states. In the present work, valence correlated energies of the 1 1A 1 state and the 2 1A 1 state were calculated at the 1 1A 1 open minimum, the 1 1A 1 ring minimum, themore » transition state between these two minima, the minimum of the 2 1A 1 state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. The CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of CorrelationEnergy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 11A1 state, the present calculations yield the estimates of (ring minimum—open minimum) ~45–50 mh and (transition state—open minimum) ~85–90 mh. For the (2 1A 1– 1A 1) excitation energy, the estimate of ~130–170 mh is found at the open minimum and 270–310 mh at the ring minimum. At the transition state, the difference (2 1A 1– 1A 1) is found to be between 1 and 10 mh. The geometry of the transition state on the 11A1 surface and that of the minimum on the 2 1A 1 surface nearly coincide. More accurate predictions of the energydifferences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh with respect to the energy differences.« less
Scanning-slit topography in patients with keratoconus.
Módis, László; Németh, Gábor; Szalai, Eszter; Flaskó, Zsuzsa; Seitz, Berthold
2017-01-01
To evaluate the anterior and posterior corneal surfaces using scanning-slit topography and to determine the diagnostic ability of the measured corneal parameters in keratoconus. Orbscan II measurements were taken in 39 keratoconic corneas previously diagnosed by corneal topography and in 39 healthy eyes. The central minimum, maximum, and astigmatic simulated keratometry (K) and anterior axial power values were determined. Spherical and cylindrical mean power diopters were obtained at the central and at the steepest point of the cornea both on anterior and on posterior mean power maps. Pachymetry evaluations were taken at the center and paracentrally in the 3 mm zone from the center at a location of every 45 degrees. Receiver operating characteristic (ROC) analysis was used to determine the best cut-off values and to evaluate the utility of the measured parameters in identifying patients with keratoconus. The minimum, maximum and astigmatic simulated K readings were 44.80±3.06 D, 47.17±3.67 D and 2.42±1.84 D respectively in keratoconus patients and these values differed significantly ( P <0.0001 for all comparisons) from healthy subjects. For all pachymetry measurements and for anterior and posterior mean power values significant differences were found between the two groups. Moreover, anterior central cylindrical power had the best discrimination ability (area under the ROC curve=0.948). The results suggest that scanning-slit topography and pachymetry are accurate methods both for keratoconus screening and for confirmation of the diagnosis.
Dielectron production in Au + Au collisions at √{sN N}=200 GeV
NASA Astrophysics Data System (ADS)
Adare, A.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Akimoto, R.; Alexander, J.; Alfred, M.; Al-Ta'Ani, H.; Angerami, A.; Aoki, K.; Apadula, N.; Aramaki, Y.; Asano, H.; Aschenauer, E. C.; Atomssa, E. T.; Averbeck, R.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Bandara, N. S.; Bannier, B.; Barish, K. N.; Bassalleck, B.; Bathe, S.; Baublis, V.; Baumgart, S.; Bazilevsky, A.; Beaumier, M.; Beckman, S.; Belmont, R.; Berdnikov, A.; Berdnikov, Y.; Blau, D. S.; Bok, J. S.; Boyle, K.; Brooks, M. L.; Bryslawskyj, J.; Buesching, H.; Bumazhnov, V.; Butsyk, S.; Campbell, S.; Castera, P.; Chen, C.-H.; Chi, C. Y.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choi, S.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cole, B. A.; Connors, M.; Csanád, M.; Csörgő, T.; Dairaku, S.; Danley, T. W.; Datta, A.; Daugherity, M. S.; David, G.; Deblasio, K.; Dehmelt, K.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Ding, L.; Dion, A.; Diss, P. B.; Do, J. H.; Donadelli, M.; D'Orazio, L.; Drapier, O.; Drees, A.; Drees, K. A.; Durham, J. M.; Durum, A.; Edwards, S.; Efremenko, Y. V.; Engelmore, T.; Enokizono, A.; Esumi, S.; Eyser, K. O.; Fadem, B.; Feege, N.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fukao, Y.; Fusayasu, T.; Gainey, K.; Gal, C.; Gallus, P.; Garg, P.; Garishvili, A.; Garishvili, I.; Ge, H.; Giordano, F.; Glenn, A.; Gong, X.; Gonin, M.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grosse Perdekamp, M.; Gunji, T.; Guo, L.; Gustafsson, H.-Å.; Hachiya, T.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamilton, H. F.; Han, S. Y.; Hanks, J.; Hasegawa, S.; Haseler, T. O. S.; Hashimoto, K.; Haslum, E.; Hayano, R.; He, X.; Hemmick, T. K.; Hester, T.; Hill, J. C.; Hollis, R. S.; Homma, K.; Hong, B.; Horaguchi, T.; Hori, Y.; Hoshino, T.; Hotvedt, N.; Huang, J.; Huang, S.; Ichihara, T.; Iinuma, H.; Ikeda, Y.; Imai, K.; Imrek, J.; Inaba, M.; Iordanova, A.; Isenhower, D.; Issah, M.; Ivanishchev, D.; Jacak, B. V.; Javani, M.; Jezghani, M.; Jia, J.; Jiang, X.; Johnson, B. M.; Joo, K. S.; Jouan, D.; Jumper, D. S.; Kamin, J.; Kanda, S.; Kaneti, S.; Kang, B. H.; Kang, J. H.; Kang, J. S.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawall, D.; Kazantsev, A. V.; Kempel, T.; Key, J. A.; Khachatryan, V.; Khanzadeev, A.; Kijima, K. M.; Kim, B. I.; Kim, C.; Kim, D. J.; Kim, E.-J.; Kim, G. W.; Kim, H. J.; Kim, K.-B.; Kim, M.; Kim, Y.-J.; Kim, Y. K.; Kimelman, B.; Kinney, E.; Kiss, Á.; Kistenev, E.; Kitamura, R.; Klatsky, J.; Kleinjan, D.; Kline, P.; Koblesky, T.; Komatsu, Y.; Komkov, B.; Koster, J.; Kotchetkov, D.; Kotov, D.; Král, A.; Krizek, F.; Kunde, G. J.; Kurita, K.; Kurosawa, M.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Lee, B.; Lee, D. M.; Lee, J.; Lee, K. B.; Lee, K. S.; Lee, S.; Lee, S. H.; Lee, S. R.; Leitch, M. J.; Leite, M. A. L.; Leitgab, M.; Lewis, B.; Li, X.; Lim, S. H.; Linden Levy, L. A.; Liu, M. X.; Love, B.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Makek, M.; Manion, A.; Manko, V. I.; Mannel, E.; Masumoto, S.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; McKinney, C.; Meles, A.; Mendoza, M.; Meredith, B.; Miake, Y.; Mibe, T.; Mignerey, A. C.; Milov, A.; Mishra, D. K.; Mitchell, J. T.; Miyachi, Y.; Miyasaka, S.; Mizuno, S.; Mohanty, A. K.; Mohapatra, S.; Montuenga, P.; Moon, H. J.; Moon, T.; Morrison, D. P.; Motschwiller, S.; Moukhanova, T. V.; Murakami, T.; Murata, J.; Mwai, A.; Nagae, T.; Nagamiya, S.; Nagashima, K.; Nagle, J. L.; Nagy, M. I.; Nakagawa, I.; Nakagomi, H.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Nattrass, C.; Nederlof, A.; Netrakanti, P. K.; Nihashi, M.; Niida, T.; Nishimura, S.; Nouicer, R.; Novák, T.; Novitzky, N.; Nyanin, A. S.; O'Brien, E.; Ogilvie, C. A.; Okada, K.; Orjuela Koop, J. D.; Osborn, J. D.; Oskarsson, A.; Ouchida, M.; Ozawa, K.; Pak, R.; Pantuev, V.; Papavassiliou, V.; Park, B. H.; Park, I. H.; Park, J. S.; Park, S.; Park, S. K.; Pate, S. F.; Patel, L.; Patel, M.; Pei, H.; Peng, J.-C.; Pereira, H.; Perepelitsa, D. V.; Perera, G. D. N.; Peressounko, D. Yu.; Perry, J.; Petti, R.; Pinkenburg, C.; Pinson, R.; Pisani, R. P.; Proissl, M.; Purschke, M. L.; Qu, H.; Rak, J.; Ramson, B. J.; Ravinovich, I.; Read, K. F.; Reynolds, D.; Riabov, V.; Riabov, Y.; Richardson, E.; Rinn, T.; Roach, D.; Roche, G.; Rolnick, S. D.; Rosati, M.; Rowan, Z.; Rubin, J. G.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Sako, H.; Samsonov, V.; Sano, M.; Sarsour, M.; Sato, S.; Sawada, S.; Schaefer, B.; Schmoll, B. K.; Sedgwick, K.; Seidl, R.; Sen, A.; Seto, R.; Sett, P.; Sexton, A.; Sharma, D.; Shein, I.; Shibata, T.-A.; Shigaki, K.; Shimomura, M.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Slunečka, M.; Snowball, M.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Stankus, P. W.; Stenlund, E.; Stepanov, M.; Ster, A.; Stoll, S. P.; Sugitate, T.; Sukhanov, A.; Sumita, T.; Sun, J.; Sziklai, J.; Takagui, E. M.; Takahara, A.; Taketani, A.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Tennant, E.; Themann, H.; Tieulent, R.; Timilsina, A.; Todoroki, T.; Tomášek, L.; Tomášek, M.; Torii, H.; Towell, C. L.; Towell, R.; Towell, R. S.; Tserruya, I.; Tsuchimoto, Y.; Tsuji, T.; Vale, C.; van Hecke, H. W.; Vargyas, M.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Virius, M.; Vossen, A.; Vrba, V.; Vznuzdaev, E.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Watanabe, Y. S.; Wei, F.; Wei, R.; White, A. S.; White, S. N.; Winter, D.; Wolin, S.; Woody, C. L.; Wysocki, M.; Xia, B.; Xue, L.; Yalcin, S.; Yamaguchi, Y. L.; Yang, R.; Yanovich, A.; Ying, J.; Yokkaichi, S.; Yoo, J. H.; Yoon, I.; You, Z.; Younus, I.; Yu, H.; Yushmanov, I. E.; Zajc, W. A.; Zelenski, A.; Zhou, S.; Zou, L.; Phenix Collaboration
2016-01-01
We present measurements of e+e- production at midrapidity in Au +Au collisions at √{sNN}=200 GeV. The invariant yield is studied within the PHENIX detector acceptance over a wide range of mass (me e<5 GeV /c2) and pair transverse momentum (pT<5 GeV /c ) for minimum bias and for five centrality classes. The e+e- yield is compared to the expectations from known sources. In the low-mass region (me e=0.30 - 0.76 GeV /c2 ) there is an enhancement that increases with centrality and is distributed over the entire pair pT range measured. It is significantly smaller than previously reported by the PHENIX experiment and amounts to 2.3 ±0.4 (stat )±0.4 (syst )±0.2 (model ) or to 1.7 ±0.3 (stat )±0.3 (syst )±0.2 (model ) for minimum bias collisions when the open heavy-flavor contribution is calculated with pythia or mc@nlo, respectively. The inclusive mass and pT distributions, as well as the centrality dependence, are well reproduced by model calculations where the enhancement mainly originates from the melting of the ρ meson resonance as the system approaches chiral symmetry restoration. In the intermediate-mass region (me e=1.2 - 2.8 GeV /c2 ), the data hint at a significant contribution in addition to the yield from the semileptonic decays of heavy-flavor mesons.
NASA Astrophysics Data System (ADS)
Whalley, L. K.; Stone, D.; Bandy, B.; Dunmore, R.; Hamilton, J. F.; Hopkins, J.; Lee, J. D.; Lewis, A. C.; Heard, D. E.
2015-11-01
Near-continuous measurements of OH reactivity in the urban background atmosphere of central London during the summer of 2012 are presented. OH reactivity behaviour is seen to be broadly dependent on airmass origin with the highest reactivity and the most pronounced diurnal profile observed when air had passed over central London to the East, prior to measurement. Averaged over the entire observation period of 26 days, OH reactivity peaked at ~ 27 s-1 in the morning with a minimum of ~ 15 s-1 during the afternoon. A maximum OH reactivity of 116 s-1 was recorded on one day during morning rush hour. A detailed box model using the Master Chemical Mechanism was used to calculate OH reactivity, and was constrained with an extended measurement dataset of volatile organic compounds (VOCs) derived from GC-FID and a two-dimensional GC instrument which included heavier molecular weight (up to C12) aliphatic VOCs, oxygenated VOCs and the biogenic VOCs of α pinene and limonene. Comparison was made between observed OH reactivity and modelled OH reactivity using (i) a standard suite of VOC measurements (C2-C8 hydrocarbons and a small selection of oxygenated VOCs) and (ii) a more comprehensive inventory including species up to C12. Modelled reactivities were lower than those measured (by 33 %) when only the reactivity of the standard VOC suite was considered. The difference between measured and modelled reactivity was improved, to within 15 %, if the reactivity of the higher VOCs (≥ C9) was also considered, with the reactivity of the biogenic compounds of α pinene and limonene and their oxidation products almost entirely responsible for this improvement. Further improvements in the model's ability to reproduce OH reactivity (to within 6 %) could be achieved if the reactivity and degradation mechanism of unassigned two-dimensional GC peaks were estimated. Neglecting the contribution of the higher VOCs (≥ C9) (particularly α pinene and limonene) and model-generated intermediates worsened the agreement between modelled and observed OH concentrations (by 41 %) and the magnitude of in situ ozone production calculated from the production of RO2 was significantly lower (60 %). This work highlights that any future ozone abatement strategies should consider the role that biogenic emissions play alongside anthropogenic emissions in influencing London's air quality.
NASA Astrophysics Data System (ADS)
Whalley, Lisa K.; Stone, Daniel; Bandy, Brian; Dunmore, Rachel; Hamilton, Jacqueline F.; Hopkins, James; Lee, James D.; Lewis, Alastair C.; Heard, Dwayne E.
2016-02-01
Near-continuous measurements of hydroxyl radical (OH) reactivity in the urban background atmosphere of central London during the summer of 2012 are presented. OH reactivity behaviour is seen to be broadly dependent on air mass origin, with the highest reactivity and the most pronounced diurnal profile observed when air had passed over central London to the east, prior to measurement. Averaged over the entire observation period of 26 days, OH reactivity peaked at ˜ 27 s-1 in the morning, with a minimum of ˜ 15 s-1 during the afternoon. A maximum OH reactivity of 116 s-1 was recorded on one day during morning rush hour. A detailed box model using the Master Chemical Mechanism was used to calculate OH reactivity, and was constrained with an extended measurement data set of volatile organic compounds (VOCs) derived from a gas chromatography flame ionisation detector (GC-FID) and a two-dimensional GC instrument which included heavier molecular weight (up to C12) aliphatic VOCs, oxygenated VOCs and the biogenic VOCs α-pinene and limonene. Comparison was made between observed OH reactivity and modelled OH reactivity using (i) a standard suite of VOC measurements (C2-C8 hydrocarbons and a small selection of oxygenated VOCs) and (ii) a more comprehensive inventory including species up to C12. Modelled reactivities were lower than those measured (by 33 %) when only the reactivity of the standard VOC suite was considered. The difference between measured and modelled reactivity was improved, to within 15 %, if the reactivity of the higher VOCs (⩾ C9) was also considered, with the reactivity of the biogenic compounds of α-pinene and limonene and their oxidation products almost entirely responsible for this improvement. Further improvements in the model's ability to reproduce OH reactivity (to within 6 %) could be achieved if the reactivity and degradation mechanism of unassigned two-dimensional GC peaks were estimated. Neglecting the contribution of the higher VOCs (⩾ C9) (particularly α-pinene and limonene) and model-generated intermediates increases the modelled OH concentrations by 41 %, and the magnitude of in situ ozone production calculated from the production of RO2 was significantly lower (60 %). This work highlights that any future ozone abatement strategies should consider the role that biogenic emissions play alongside anthropogenic emissions in influencing London's air quality.
The energy requirements of an aircraft triggered discharge
NASA Astrophysics Data System (ADS)
Bicknell, J. A.; Shelton, R. W.
The corona produced at aircraft surfaces requires an energy input before the corona can develop into a high current discharge and, thus, a possible lightning stroke. This energy must be drawn from the space charge field of the thundercloud and, since this is of low density, the unique propagation characteristics of positive corona streamers may be important. Estimates of the energy made available by the propagation are compared with laboratory measurements of the minimum energy input required to trigger a breakdown. The comparison indicates a minimum streamer range for breakdown of several tens of meters. Also estimated is the energy released as a consequence of streamer-hydrometer interactions; this is shown to be significant so that breakdown could depend upon the precipitation rate within the cloud. Inhibiting streamer production may therefore provide an aircraft with a degree of corona protection.
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from outputs of pressure sensors located at downstream ends of pneumatic tubes. Effect deconvolutions that account for distoring effects of tube upon pressure signal. Distortion of pressure measurements by pneumatic tubes also discussed in "Distortion of Pressure Signals in Pneumatic Tubes," (ARC-12868). Varying input pressure estimated from measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. Algorithms based on minimum-covariance (Kalman filtering) theory.
Cutting a young-growth, mixed-conifer stand to California Forest Practice Act Standards
Philip M. McDonald
1973-01-01
Cutting by the minimum standard of the Rules of California's North Sierra Pine Forest District was evaluated for effects on species composition, seed fall, regeneration, and residual growth at the Challenge Experimental Forest, central California. Cutting removed 74 percent of the stand basal area and 94 percent of the merchantable volume. The heavy cut changed...
A call for standardized naming and reporting of human ESC and iPSC lines.
Luong, Mai X; Auerbach, Jonathan; Crook, Jeremy M; Daheron, Laurence; Hei, Derek; Lomax, Geoffrey; Loring, Jeanne F; Ludwig, Tenneille; Schlaeger, Thorsten M; Smith, Kelly P; Stacey, Glyn; Xu, Ren-He; Zeng, Fanyi
2011-04-08
Human embryonic and induced pluripotent stem cell lines are being generated at a rapid pace and now number in the thousands. We propose a standard nomenclature and suggest the use of a centralized database for all cell line names and a minimum set of information for reporting new derivations. Copyright © 2011 Elsevier Inc. All rights reserved.
Simplified power control method for cellular mobile communication
NASA Astrophysics Data System (ADS)
Leung, Y. W.
1994-04-01
The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines optimal transmitter power to maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method which has nearly the same performance as the CPC method but which involves much smaller measurement overhead.
NASA Astrophysics Data System (ADS)
Fienen, M. N.; Bradbury, K. R.; Kniffin, M.; Barlow, P. M.; Krause, J.; Westenbroek, S.; Leaf, A.
2015-12-01
The well-drained sandy soil in the Wisconsin Central Sands is ideal for growing potatoes, corn, and other vegetables. A shallow sand and gravel aquifer provides abundant water for agricultural irrigation but also supplies critical base flow to cold-water trout streams. These needs compete with one another, and stakeholders from various perspectives are collaborating to seek solutions. Stakeholders were engaged in providing and verifying data to guide construction of a groundwater flow model which was used with linear and sequential linear programming to evaluate optimal tradeoffs between agricultural pumping and ecologically based minimum base flow values. The connection between individual irrigation wells as well as industrial and municipal supply and streamflow depletion can be evaluated using the model. Rather than addressing 1000s of wells individually, a variety of well management groups were established through k-means clustering. These groups are based on location, potential impact, water-use categories, depletion potential, and other factors. Through optimization, pumping rates were reduced to attain mandated minimum base flows. This formalization enables exploration of possible solutions for the stakeholders, and provides a tool which is transparent and forms a basis for discussion and negotiation.
NASA Technical Reports Server (NTRS)
Curtis, Scott; Starr, David OC. (Technical Monitor)
2002-01-01
The summer climate of southern Mexico and Central America is characterized by a mid summer drought (MSD), where rainfall is reduced by 40% in July as compared to June and September. A mid-summer reduction in the climatological number of eastern Pacific tropical cyclones has also been noted. Little is understood about the climatology and interannual variability of these minima. The present study uses a novel approach to quantify the bimodal distribution of summertime rainfall for the globe and finds that this feature of the annual cycle is most extreme over Pan America and adjacent oceans. One dominant interannual signal in this region occurs the summer before a strong winter El Nino/Southern Oscillation ENSO. Before El Nino events the region is dry, the MSD is strong and centered over the ocean, and the mid-summer minimum in tropical cyclone frequency is most pronounced. This is significantly different from Neutral cases (non-El Nino and non-La Nina) when the MSD is weak and positioned over the land bridge. The MSD is highly variable for La Nina years, and there is not an obvious mid-summer minimum in the number of tropical cyclones.
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% Cl 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% Cl 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance x sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% Cl 182–451) to 529 (95% Cl 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
Forest statistics for Central Alabama counties
Arnold Hedlund; J.M. Earles
1972-01-01
This report tabulates information from a new forest inventory of counties in central Alabama. The tables are intended for use as source data in compiling estimates for groups of counties. Because the sampling procedure is designed primarily to furnish inventory data for the State as a whole, estimates for individual counties have limited and variable accuracy.
NASA Astrophysics Data System (ADS)
Kita, Y.; Waseda, T.
2016-12-01
Explosive cyclones (EXPCs) were investigated in three recent reanalyses. Their tracking methods is diverse among researchers, and additionally reanalysis data they use are various. Reanalysis data are essential as initial conditions to implement a downscale simulation with high accuracy. In this study, characteristics of EXPCs in three recent reanalyses were investigated from several perspectives: track densities, minimum MSLP (Mean Sea Level Pressure), and radius of EXPCs. The tracking method of extratropical cyclones (ECs) is to track local minimum of MSLP. The domain is limited to Eastern Asia and the North Pacific Ocean (lat20°:70°, lon100°:200°), and target period is 2000-2014. Fig.1 shows that the frequencies of EXPCs, which is defined as ECs whose MSLP drops by over 12hPa in 12hours, are greatly different, noting that extracted EXPCs are those whose most deepening phases were located around Japan (lat20°:60°, lon110°:160°). In addition, they are dissimilar to those in a previous EXPCs database (Kawamura et al.) and results in weather map analyses. The differences between each frequency might be caused by MSLP at their centers: there were sometimes small gaps of a few hPa. The minimum MSLP and effective radius were also investigated, but distributions of effective radii of EXPCs did not show significant difference (Fig.2). Thus, the gaps of central MSLP just matter in the differences of their trends. To evaluate the path density of EXPCs, two-dimensional kernel density estimation was conducted. The kernel densities of EXPCs' tracks in three reanalyses seem similar: they accumulated apparently above ocean (not shown). Two-dimensional kernel densities of EXPCs' most deepening points accumulated above Sea of Japan, Kuroshio and Extension. Therefore, it is proved that there are considerable differences in numbers of EXPCs depending on reanalyses, while the general characteristics of EXPCs just have little difference. It is worthwhile to say that careful attention should be paid when researchers investigate an individual EXPC with reanalysis data.
Velez-Zuazo, Ximena; Quiñones, Javier; Pacheco, Aldo S; Klinge, Luciana; Paredes, Evelyn; Quispe, Sixto; Kelez, Shaleyla
2014-01-01
In order to enhance protection and conservation strategies for endangered green turtles (Chelonia mydas), the identification of neritic habitats where this species aggregates is mandatory. Herein, we present new information about the population parameters and residence time of two neritic aggregations from 2010 to 2013; one in an upwelling dominated site (Paracas ∼14°S) and the other in an ecotone zone from upwelling to warm equatorial conditions (El Ñuro ∼4°S) in the Southeast Pacific. We predicted proportionally more adult individuals would occur in the ecotone site; whereas in the site dominated by an upwelling juvenile individuals would predominate. At El Ñuro, the population was composed by (15.3%) of juveniles, (74.9%) sub-adults, and (9.8%) adults, with an adult sex ratio of 1.16 males per female. Times of residence in the area ranged between a minimum of 121 and a maximum of 1015 days (mean 331.1 days). At Paracas the population was composed by (72%) of juveniles and (28%) sub-adults, no adults were recorded, thus supporting the development habitat hypothesis stating that throughout the neritic distribution there are sites exclusively occupied by juveniles. Residence time ranged between a minimum of 65 days and a maximum of 680 days (mean 236.1). High growth rates and body condition index values were estimated suggesting healthy individuals at both study sites. The population traits recorded at both sites suggested that conditions found in Peruvian neritic waters may contribute to the recovery of South Pacific green turtles. However, both aggregations are still at jeopardy due to pollution, bycatch and illegal catch and thus require immediate enforcing of conservation measurements.
Velez-Zuazo, Ximena; Quiñones, Javier; Pacheco, Aldo S.; Klinge, Luciana; Paredes, Evelyn; Quispe, Sixto; Kelez, Shaleyla
2014-01-01
In order to enhance protection and conservation strategies for endangered green turtles (Chelonia mydas), the identification of neritic habitats where this species aggregates is mandatory. Herein, we present new information about the population parameters and residence time of two neritic aggregations from 2010 to 2013; one in an upwelling dominated site (Paracas ∼14°S) and the other in an ecotone zone from upwelling to warm equatorial conditions (El Ñuro ∼4°S) in the Southeast Pacific. We predicted proportionally more adult individuals would occur in the ecotone site; whereas in the site dominated by an upwelling juvenile individuals would predominate. At El Ñuro, the population was composed by (15.3%) of juveniles, (74.9%) sub-adults, and (9.8%) adults, with an adult sex ratio of 1.16 males per female. Times of residence in the area ranged between a minimum of 121 and a maximum of 1015 days (mean 331.1 days). At Paracas the population was composed by (72%) of juveniles and (28%) sub-adults, no adults were recorded, thus supporting the development habitat hypothesis stating that throughout the neritic distribution there are sites exclusively occupied by juveniles. Residence time ranged between a minimum of 65 days and a maximum of 680 days (mean 236.1). High growth rates and body condition index values were estimated suggesting healthy individuals at both study sites. The population traits recorded at both sites suggested that conditions found in Peruvian neritic waters may contribute to the recovery of South Pacific green turtles. However, both aggregations are still at jeopardy due to pollution, bycatch and illegal catch and thus require immediate enforcing of conservation measurements. PMID:25409240
Different Patterns of the Urban Heat Island Intensity from Cluster Analysis
NASA Astrophysics Data System (ADS)
Silva, F. B.; Longo, K.
2014-12-01
This study analyzes the different variability patterns of the Urban Heat Island intensity (UHII) in the Metropolitan Area of Rio de Janeiro (MARJ), one of the largest urban agglomerations in Brazil. The UHII is defined as the difference in the surface air temperature between the urban/suburban and rural/vegetated areas. To choose one or more stations that represent those areas we used the technique of cluster analysis on the air temperature observations from 14 surface weather stations in the MARJ. The cluster analysis aims to classify objects based on their characteristics, gathering similar groups. The results show homogeneity patterns between air temperature observations, with 6 homogeneous groups being defined. Among those groups, one might be a natural choice for the representative urban area (Central station); one corresponds to suburban area (Afonsos station); and another group referred as rural area is compound of three stations (Ecologia, Santa Cruz and Xerém) that are located in vegetated regions. The arithmetic mean of temperature from the three rural stations is taken to represent the rural station temperature. The UHII is determined from these homogeneous groups. The first UHII is estimated from urban and rural temperature areas (Case 1), whilst the second UHII is obtained from suburban and rural temperature areas (Case 2). In Case 1, the maximum UHII occurs in two periods, one in the early morning and the other at night, while the minimum UHII occurs in the afternoon. In Case 2, the maximum UHII is observed during afternoon/night and the minimum during dawn/early morning. This study demonstrates that the stations choice reflects different UHII patterns, evidencing that distinct behaviors of this phenomenon can be identified.
Variability of temperature properties over Kenya based on observed and reanalyzed datasets
NASA Astrophysics Data System (ADS)
Ongoma, Victor; Chen, Haishan; Gao, Chujie; Sagero, Phillip Obaigwa
2017-08-01
Updated information on trends of climate extremes is central in the assessment of climate change impacts. This work examines the trends in mean, diurnal temperature range (DTR), maximum and minimum temperatures, 1951-2012 and the recent (1981-2010) extreme temperature events over Kenya. The study utilized daily observed and reanalyzed monthly mean, minimum, and maximum temperature datasets. The analysis was carried out based on a set of nine indices recommended by the Expert Team on Climate Change Detection and Indices (ETCCDI). The trend of the mean and the extreme temperature was determined using Mann-Kendall rank test, linear regression analysis, and Sen's slope estimator. December-February (DJF) season records high temperature while June-August (JJA) experiences the least temperature. The observed rate of warming is + 0.15 °C/decade. However, DTR does not show notable annual trend. Both seasons show an overall warming trend since the early 1970s with abrupt and significant changes happening around the early 1990s. The warming is more significant in the highland regions as compared to their lowland counterparts. There is increase variance in temperature. The percentage of warm days and warm nights is observed to increase, a further affirmation of warming. This work is a synoptic scale study that exemplifies how seasonal and decadal analyses, together with the annual assessments, are important in the understanding of the temperature variability which is vital in vulnerability and adaptation studies at a local/regional scale. However, following the quality of observed data used herein, there remains need for further studies on the subject using longer and more data to avoid generalizations made in this study.
Similar microearthquakes observed in western Nagano, Japan, and implications for rupture mechanics
NASA Astrophysics Data System (ADS)
Cheng, Xin; Niu, Fenglin; Silver, Paul G.; Horiuchi, Shigeki; Takai, Kaori; Iio, Yoshihisa; Ito, Hisao
2007-04-01
We have applied a waveform cross correlation technique to study the similarity and the repeatability of more than 21,000 microearthquakes (0 < M < 4.5) in the aftershock zone of the 1984 western Nagano earthquake in central Japan. We find that the seismicity in this particular intraplate fault essentially consists of no repeating earthquakes that occurred on the same patch of the fault in a quasiperiodic manner in the study period between 1995 and 2001. On the other hand, we identify a total of 278 doublets and 62 multiplets (807 events) that occurred consecutively within seconds to days. On the basis of the relative arrival times of the P and S waves, we have obtained precise relative locations of these consecutive events with an error between several meters to a few tens of meters. There is a clear lower bound on the distances measured between these consecutive events and the lower bound appears to be proportional to the size of the first events. This feature is consistent with what Rubin and Gillard [2000] have observed near the San Juan Bautista section of the San Andreas Fault. Shear stress increases at the edge of an earthquake rupture, and the rupture edge becomes the most likely place where the second events are initiated. The observed minimum distance thus reflects the rupture size of the first events. The minimum distance corresponds to the rupture size calculated from a circular fault model with a stress drop of 10 MPa. We found that using different time windows results in a slight difference in the delay time estimates and the subsequent projection locations, which may reflect the finite size nature of earthquake ruptures.
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for Massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in Massive MIMO channel, we develop a compressed sensing(CS) based channel estimator which can reduce the pilot overhead. As compared with the conventional least squares (LS) and linear minimum mean square error(LMMSE) estimation, the proposed algorithm is based on the quantized multipath matching pursuit - MMP - reduced the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
NASA Astrophysics Data System (ADS)
Miller, M. P.; Tesoriero, A. J.; Hood, K.; Terziotti, S.; Wolock, D.
2017-12-01
The myriad hydrologic and biogeochemical processes taking place in watersheds occurring across space and time are integrated and reflected in the quantity and quality of water in streams and rivers. Collection of high-frequency water quality data with sensors in surface waters provides new opportunities to disentangle these processes and quantify sources and transport of water and solutes in the coupled groundwater-surface water system. A new approach for separating the streamflow hydrograph into three components was developed and coupled with high-frequency specific conductance and nitrate data to estimate time-variable watershed-scale nitrate loading from three end-member pathways - dilute quickflow, concentrated quickflow, and slowflow groundwater - to two streams in central Wisconsin. Time-variable nitrate loads from the three pathways were estimated for periods of up to two years in a groundwater-dominated and a quickflow-dominated stream, using only streamflow and in-stream water quality data. The dilute and concentrated quickflow end-members were distinguished using high-frequency specific conductance data. Results indicate that dilute quickflow contributed less than 5% of the nitrate load at both sites, whereas 89±5% of the nitrate load at the groundwater-dominated stream was from slowflow groundwater, and 84±13% of the nitrate load at the quickflow-dominated stream was from concentrated quickflow. Concentrated quickflow nitrate concentrations varied seasonally at both sites, with peak concentrations in the winter that were 2-3 times greater than minimum concentrations during the growing season. Application of this approach provides an opportunity to assess stream vulnerability to non-point source nitrate loading and expected stream responses to current or changing conditions and practices in watersheds.
A study of the additional costs of dispensing workers' compensation prescriptions.
Schafermeyer, Kenneth W
2007-03-01
Although there is a significant amount of additional work involved in dispensing workers' compensation prescriptions, these costs have not been quantified. A study of the additional costs to dispense a workers' compensation prescription is needed to measure actual costs and to help determine the reasonableness of reimbursement for prescriptions dispensed under workers' compensation programs. The purpose of this study was to determine the minimum additional time and costs required to dispense workers' compensation prescriptions in Texas. A convenience sample of 30 store-level pharmacy staff members involved in submitting and processing prescription claims for the Texas Mutual workers' compensation program were interviewed by telephone. Data collected to determine the additional costs of dispensing a workers' compensation prescription included (1) the amount of additional time and personnel costs required to dispense and process an average workers' compensation prescription claim, (2) the difference in time required for a new versus a refilled prescription, (3) overhead costs for processing workers' compensation prescription claims by experienced experts at a central processing facility, (4) carrying costs for workers' compensation accounts receivable, and (5) bad debts due to uncollectible workers' compensation claims. The median of the sample pharmacies' additional costs for dispensing a workers' compensation prescription was estimated to be at least $9.86 greater than for a cash prescription. This study shows that the estimated costs for workers' compensation prescriptions were significantly higher than for cash prescriptions. These costs are probably much more than most employers, workers' compensation payers, and pharmacy managers would expect. It is recommended that pharmacy managers should estimate their own costs and compare these costs to actual reimbursement when considering the reasonableness of workers' compensation prescriptions and whether to accept these prescriptions.
UTZINGER, J.; RASO, G.; BROOKER, S.; DE SAVIGNY, D.; TANNER, M.; ØRNBJERG, N.; SINGER, B. H.; N’GORAN, E. K.
2009-01-01
SUMMARY In May 2001, the World Health Assembly (WHA) passed a resolution which urged member states to attain, by 2010, a minimum target of regularly administering anthelminthic drugs to at least 75% and up to 100% of all school-aged children at risk of morbidity. The refined global strategy for the prevention and control of schistosomiasis and soil-transmitted helminthiasis was issued in the following year and large-scale administration of anthelminthic drugs endorsed as the central feature. This strategy has subsequently been termed ‘preventive chemotherapy’. Clearly, the 2001 WHA resolution led the way for concurrently controlling multiple neglected tropical diseases. In this paper, we recall the schistosomiasis situation in Africa in mid-2003. Adhering to strategic guidelines issued by the World Health Organization, we estimate the projected annual treatment needs with praziquantel among the school-aged population and critically discuss these estimates. The important role of geospatial tools for disease risk mapping, surveillance and predictions for resource allocation is emphasised. We clarify that schistosomiasis is only one of many neglected tropical diseases and that considerable uncertainties remain regarding global burden estimates. We examine new control initiatives targeting schistosomiasis and other tropical diseases that are often neglected. The prospect and challenges of integrated control are discussed and the need for combining biomedical, educational and engineering strategies and geospatial tools for sustainable disease control are highlighted. We conclude that, for achieving integrated and sustainable control of neglected tropical diseases, a set of interventions must be tailored to a given endemic setting and fine-tuned over time in response to the changing nature and impact of control. Consequently, besides the environment, the prevailing demographic, health and social systems contexts need to be considered. PMID:19906318